
Teague Hopkins

Mindful Product Management



Mar 13 2026

The Familiar Crisis

On displacement, class, and the discovery that it was never unprecedented

My previous two essays in this series argued that the AI debate is trapped in a false binary and that the trap is partly psychological: identity threats activating the nervous system before the prefrontal cortex can engage. Both generated thoughtful responses, but one pattern in the conversation kept nagging at me.

The loudest voices in the AI debate are overwhelmingly educated, credentialed, and professionally comfortable. Writers, academics, consultants, journalists. People with platforms, with op-ed access, with the vocabulary and institutional connections to make their anxiety visible at scale. The conversation about AI displacement is dominated by the people for whom displacement is novel.

The Volume Is the Story

The fear of AI-driven job loss is widespread; roughly three quarters of Americans report concern about permanent displacement. But when Gallup tracked who became more anxious after ChatGPT launched, the spike occurred almost exclusively among college graduates. Workers without degrees, many of whom have lived through multiple waves of automation, barely moved.

That asymmetry matters less for what it says about the technology than for what it says about the conversation. Educated workers don’t just worry differently; they worry louder. They write the op-eds, deliver the keynotes, testify before Congress, and publish the Substack posts. When autoworkers in Flint lost their livelihoods in the 1980s, the displacement was structurally devastating, but the affected workers didn’t have media platforms. The cultural volume of that crisis never matched its scale. When journalists and professors feel threatened, the volume is immediate and immense, because the threatened class is the class that controls the microphone.

This doesn’t mean the anxiety is illegitimate. Entry-level hiring for AI-exposed white-collar jobs has already dropped measurably, and job postings for corporate roles are declining while applications surge. The harm is real. But the ratio of attention to harm is historically anomalous: democracies allocate policy attention in proportion to the narrative power of the affected group, not the magnitude of the disruption.

First Time for Everything

The deeper issue is not volume; it’s novelty. Knowledge workers are encountering a specific experience for the first time: the discovery that structural economic forces don’t care about your credentials. That years of training and carefully built expertise can be devalued not because you did anything wrong, but because the economics shifted beneath you.

This experience is genuinely new to this class, but it is not remotely new.

Textile workers knew it in the 1810s. Switchboard operators knew it in the 1970s. Manufacturing workers knew it through the entire back half of the twentieth century. Coal miners, retail workers, truck drivers watching autonomous vehicles in development: millions of people have lived inside this experience, and most of them navigated it without media access, professional networks, or cultural sympathy.

The knowledge worker’s AI panic is parochial. It treats as unprecedented a pattern that has repeated for two centuries. What’s unprecedented is not the displacement; it’s who is being displaced.

The Irony No One Wants to Name

The professional class now threatened by AI is, in many cases, the same class that built and benefited from the intellectual framework being used against them. “Disruption” was a term of art in business schools and consulting firms long before it showed up in conversations about ChatGPT. Knowledge workers wrote the McKinsey reports on automation, taught the MBA courses on creative destruction, and consulted on the restructurings that eliminated manufacturing jobs. The prevailing theory, stated or implied, was that technological displacement was inevitable, that the market would reallocate labor efficiently, and that the affected workers needed to “reskill” and adapt.

That theory felt coherent when it described someone else’s problem. “Reskill” is easy advice to give when you’re not the one whose skills just lost their market value.

I am not arguing that knowledge workers deserve what’s happening to them; that framing misses the structural point entirely. I am arguing that the experience of being on the receiving end is producing, in real time, a recognition that the framework was always incomplete.

I suspect there are people in previously displaced communities watching this unfold with a weary, unsurprised acknowledgment: now you know what it feels like.

What the Displaced Already Knew

The people who’ve been through this before learned things the knowledge class is only now beginning to discover. They learned that individual merit does not protect you from structural forces. They learned that the market does not share its gains voluntarily. And they learned that the only reliable protection against displacement is collective: unions, mutual aid networks, transition funds, political coalitions that force redistribution rather than hoping for it.

Knowledge workers have almost none of this infrastructure. We have professional associations, but those are networking organizations, not bargaining units. We have LinkedIn, but that is a marketplace, not a coalition. We have cultural prestige, but prestige doesn’t negotiate severance packages or mandate transition support.

The scaffolding that would protect knowledge workers from displacement does not exist, largely because knowledge workers never believed they would need it. But the scaffolding that protected other workers did work for a time. It has now been eroding for decades, since at least the 1980s: union membership has fallen from a third of the workforce to roughly ten percent, transition programs are chronically underfunded, and the regulatory frameworks that once mandated the distribution of gains, including taxes on the wealthy, have been systematically weakened. This was not an accident. Deregulation, tax reform, right-to-work legislation, the hollowing out of labor protections: these were policy choices, often supported or at least tolerated by the professional class that benefited from cheaper goods and services and frictionless markets. The tools that worked are still the tools that work. We have just spent forty years dismantling them because they weren’t working for us.

The communities that survived previous waves of automation could have told us what we’d need. Some of them did. We weren’t listening, because we couldn’t imagine that the lesson applied to us.

The Opportunity in the Recognition

The three-quarters of Americans who fear permanent job loss from AI are not making a technical prediction about artificial intelligence. They are making a social prediction about power. The fear is not “a machine will take my job.” It is “the people who benefit from this technology will not share the gains with me, and nothing in our current institutional landscape will make them.”

That fear is well-calibrated to history. It is also, for the knowledge class, a new sensation: the discovery that you are not exempt from the forces you’ve been theorizing about from a safe distance.

But novelty carries an advantage. The knowledge class has something previously displaced workers did not: cultural influence, institutional access, and the ability to shape narratives at scale. The question is what they do with those advantages now that displacement is personal rather than theoretical.

One option is to treat this as an unprecedented crisis requiring novel solutions, build protections for the professional class specifically, and continue ignoring the structural pattern. This is the path of least resistance, and it is the one the discourse is currently on.

The other option is harder. Recognize that the experience is not new, that the people who went through it before you are not cautionary tales but teachers, and use the cultural power this class possesses to build protective infrastructure that serves everyone: collective bargaining, transition support, governance frameworks that mandate the distribution of technological gains. Not because it’s noble, but because it’s what actually works, and because the people who learned that lesson the hard way have been trying to tell us for decades.

The factory workers could have told us this was coming. Some of them did. The question is whether we’re finally ready to listen.

Written by Teague Hopkins · Categorized: Main

Feb 16 2026

Come Back Online

Someone challenged a point I’d made about AI recently, and I noticed something unsettling: I was composing my rebuttal before I’d finished reading their argument. Not weighing it; defending against it. My chest was tight, the sympathetic nervous system fully online before my prefrontal cortex had a chance to weigh in.

I caught it that time, though I don’t always.

That reaction is what this essay is about, not because it was unusual, but because I keep watching the same thing happen to people I respect, and the pattern has become impossible to ignore.

Smart, thoughtful people who reason carefully about hard problems in every other area of their lives encounter the topic of AI and go rigid. Their thinking narrows. They reach for analogies instead of evidence, collapse complex questions into binary positions, and defend those positions with an intensity that has nothing to do with the strength of the underlying argument.

A few weeks ago, I wrote about the false choice in the AI debate: the idea that we’re forced to pick between uncritical acceleration and categorical opposition, when neither position survives contact with reality. The response was telling. People who agreed mostly engaged with the argument; people who disagreed mostly didn’t. Someone raises a specific point, and the response leapfrogs past it to a broader grievance: factory labor analogies, environmental impact, copyright, corporate exploitation narratives, sweeping claims about the death of critical thinking. Not wrong, necessarily, but not engaged with the thing that I actually said. Pre-loaded, as if the conclusion was reached before the conversation started and the reasoning exists only to justify it.

I keep catching myself tempted to do the same thing, and the pattern is so consistent, so unlike how these same people engage with other difficult topics, that I’ve stopped believing this is a thinking problem. Something else is going on, and I think naming it matters, because the thing driving this reaction isn’t just making the conversation unproductive; it’s actively dangerous.

What Fear Looks Like When It’s Wearing a Suit

Here’s what I’ve come to believe is happening, and I include myself in this.

For knowledge workers, educators, and writers, AI doesn’t just represent a new technology to evaluate. It represents a direct challenge to the thing that earns us our seat at the table.

If you spent years learning to write clearly, and a tool now helps anyone write clearly, the tool hasn’t just changed the landscape; it’s devalued your specific investment. If your professional identity is built on credentialed expertise and the ability to articulate complex ideas in polished prose, a technology that commoditizes articulation feels less like disruption and more like erasure. I’ve built a career on being the person who sees systems clearly and helps organizations navigate complexity, and if AI gets good enough at that, I’m not sure what I still have to offer. I’ve sat with that question, and it’s not comfortable.

This isn’t a rational calculation any of us are making consciously. It’s an identity threat, and identity threats activate the nervous system before they reach the prefrontal cortex. When your brain perceives a threat to who you are, it doesn’t give you your best thinking. It gives you pattern-matching, where this looks like past exploitation and so it must be exploitation. It gives you position-defending, interpreting new information through a filter you’ve already locked in. It gives you analogies that reach for historical parallels not because they’re analytically apt but because they feel right, and feeling right is enough when your nervous system is running the show.

The result looks like a principled argument and sounds like one, but it has a tell: it doesn’t update. You can present new evidence, draw finer distinctions, point to specific use cases where the concerns don’t apply, and the response doesn’t shift; it just reasserts, often with more emotional intensity. That’s not reasoning; that’s defense.

I know what it feels like from the inside, because I’ve done it. When you’re in it, it feels like clarity, like you’re the one seeing the situation for what it really is, which is exactly what makes it so hard to catch.

Ancient Hardware, Modern Problem

The fear is legitimate; if you’ve built your career on skills that are genuinely being commoditized, you should feel unsettled, because that’s an appropriate response to a real situation.

The problem is that the response is running on ancient hardware. Pattern-matching, rapid threat assessment, in-group loyalty, the tendency to act first and analyze later: these heuristics kept our ancestors alive, and they’re brilliantly adapted for a world where threats are physical, immediate, and binary. A rustle in the grass really is either a predator or nothing; the cost of a false positive is a wasted sprint, while the cost of a false negative is death. Of course we evolved to run first and ask questions later.

But AI isn’t a predator in the grass. It’s a complex, evolving system with unevenly distributed risks and benefits that will play out over decades, and the heuristics that protected us from lions are worse than useless here; they actively prevent us from seeing the situation clearly. When the sympathetic nervous system takes over, we lose access to exactly the cognitive tools the moment demands: nuance, uncertainty tolerance, the ability to hold competing truths simultaneously.

What separates productive engagement from defensive reaction is deceptively simple: the ability to notice when the alarm is firing and choose not to let it drive. This is, at its core, a mindfulness problem; not in the scented-candle sense, but in the most practical sense imaginable. Can you observe your own nervous system responding, recognize that the response is not the same thing as the reality, and create enough space between stimulus and reaction to engage with what’s actually in front of you?

The people I know who are engaging most productively with AI, whether as enthusiasts or as critics, share this: they’ve learned to notice when they’re reacting and pause long enough to start thinking instead. What they see when they do is worth paying attention to.

What Clear Eyes See

When you get past the reflexive fear and look at the actual landscape, the stakes are larger and more urgent than anything we’ve been arguing about.

The concentration of AI capabilities in the hands of a few companies and nations is accelerating. Biological and cybersecurity risks scale with model capability. The potential for economic disruption severe enough to destabilize democracies is growing, and we have no precedent for managing it. The window for building governance frameworks is finite, and it’s narrowing.

Dario Amodei, the CEO of Anthropic, recently described this moment as the adolescence of technology: a period where humanity will have access to almost unimaginable power before we’ve developed the maturity to wield it. Yes, he runs an AI company, and that’s a conflict of interest worth noting, but the risks he catalogs don’t become less real because of who’s describing them. If anything, the fact that someone inside the industry is sounding these alarms should sharpen our attention.

The “adolescence” metaphor works in a second direction, one I don’t think Amodei intended. The discourse about AI is adolescent too, in a specific psychological sense; not because it lacks intelligence, but because it lacks the ability to regulate emotional responses long enough to engage with complexity. It personalizes everything, collapses nuance into loyalty tests, and confuses the intensity of feeling with the validity of position.

Whether someone used AI to edit a social media post is not among the real dangers. The real dangers are playing out at a scale and speed that demands the best thinking we have, and that thinking is largely absent from the conversation.

Somewhere right now, decisions are being made about compute infrastructure, data rights, economic transition policy, and the acceptable boundaries of autonomous AI systems, decisions that will shape the next several decades. The table where they’re being made has empty chairs. The people who should be filling those chairs—educators who understand how humans actually learn, writers who understand the relationship between language and thought, labor advocates who understand economic displacement, ethicists who’ve spent careers on exactly these questions—are standing outside the room, arguing about whether the room should exist.

I get it: the room shouldn’t have been built this way, the power dynamics are wrong, and the incentives are misaligned. All true. But the room exists, the decisions are being made, and refusing to engage isn’t the principled stand it feels like. The people already at the table are happy to proceed without us.

Here’s the part that nobody on the sidelines wants to hear, the part I think matters more than anything else in this essay: if you aren’t engaging with the technology directly, learning how it actually works, what it can and can’t do, where it breaks, where it surprises you, then you are forfeiting the ability to have an informed opinion about it, and an uninformed opinion, no matter how principled its origins, is just noise.

I understand the impulse to boycott; it feels like integrity, like refusing to be complicit. But the practical effect is this: the only people developing deep, hands-on knowledge of what these systems actually do are the people building and selling them. If humanists, educators, ethicists, and labor advocates cede that ground, if the only people who truly understand the technology are the ones motivated by market share, then those are the people who will write the policies, set the defaults, and define what “responsible AI” means. We will get a future shaped entirely by the people with the least incentive to protect what we care about most.

This is the central paradox of principled disengagement: by refusing to touch the technology, you surrender the exact knowledge you would need to hold it accountable. You can’t effectively critique what you don’t understand, propose alternatives to systems you’ve never examined, or spot the gaps in a governance framework if you don’t know what the technology is actually capable of today—not what someone told you six months ago, not what you extrapolated from a headline, but what it does right now when you sit with it and push.

The critics I find most useful aren’t the ones who’ve decided AI is irredeemable; they’re the ones who’ve gotten specific: this model was trained on this data without this consent, that deployment in that context produced those measurable harms, this governance framework has these gaps. That kind of opposition actually changes outcomes, and every single one of those critics got there by engaging with the technology closely enough to know where the real leverage points are.

Come Back Online

The tables where AI’s future is being shaped, from congressional halls to conference calls, need more than technologists and investors. They need people who understand learning, language, labor, equity, and power: people who’ve spent their careers on those problems and whose expertise is not diminished by AI but made more essential by it. That expertise only counts, though, if it’s grounded in firsthand knowledge of the thing you’re trying to shape. The world doesn’t need more people with strong opinions about AI and no experience of it. It needs people who’ve done the work to understand it and have the values to insist it serve everyone, not just the people who built it.

This is an invitation, but it’s also an invocation. Our nervous systems are telling us to fight or flee, and the moment calls for neither. It calls for the hard, unglamorous work of showing up, getting our hands dirty with the actual technology, paying close attention to what it’s doing and what it isn’t, and insisting that the people building the future account for its consequences. We need you in this conversation—not your fear, not your reflexes, you: your expertise, your values, your willingness to grapple with something genuinely hard.

The fear is real, and the threat is real, but the threat isn’t the tool. It’s that we’ll spend the critical window for shaping it arguing about the wrong things, or worse, refusing to engage with it at all, while the people with the fewest scruples shape it without opposition.

You are too important to sit this out. The stakes are too high for anything less than our best thinking.


Nov 17 2025

The False Choice

Why History Teaches Us to Reject Faith-Based Arguments on Both Sides of the AI Debate

In the era of AI, there are extremists on both sides of the issue. One side, which I have taken to calling the “accelerationists,” thinks AI will solve major societal problems and that we shouldn’t make rules, because that will just slow things down. The other side, “critics,” issues doomsday warnings about environmental harm, stolen content, lost jobs, and even the end of humanity, often arguing that AI development should be paused indefinitely.

Both sides are deploying a sort of faith-based reasoning: treating unknowns as certainties and prediction as destiny. But lessons from history show us that technologies’ effects are driven by how we choose to use them, not necessarily by the inherent qualities of the technology itself. I advocate for rejecting both unchecked optimism and doom in favor of a practical approach: measuring what actually happens and figuring out how to put in place parameters that encourage the benefits and mitigate the harms.

[Illustration: a horizontal balance beam, one end tinted in cool blues and teals, the other in warm reds and oranges, representing two opposing extremes.]

Critics’ Concerns

When accelerationists dismiss AI critics as “doomers,” they’re missing what’s actually happening. The opposition includes multiple groups with distinct, valid concerns:

Environmental critics note that training one large language model emits as much CO2 as five cars over their lifetimes, and AI data centers are projected to consume 3-4% of global electricity by 2030.

Artists and creators watched companies scrape their copyrighted work without permission or payment.

Practical skeptics tried the tools, found them underwhelming, and see another Silicon Valley bubble. Sometimes this comes from not knowing how to use AI well, but often it reflects genuine disappointment when capabilities don’t match the marketing.

Workers see a real risk of displacement as automation obviates the need for some specific jobs.

Finally, those worried about existential risk have concerns ranging from AI-enabled authoritarianism to extinction scenarios.

What brings these groups together is shared anxiety about rapid deployment without governance. But many arguments rest on projections rather than measured impacts.

The Pattern: Resistance Has Never Stopped Valuable Technology

Every major automation wave has faced opposition.

When the printing press arrived in Venice in the 1470s, the monk Filippo de Strata wrote to the Doge comparing the pen to a “virgin” and the printing press to a “whore.” Some scribes destroyed presses. 

In the 1810s, the Luddites opposed mechanized looms because automation genuinely threatened their economic position. 

When electric lighting threatened the gas industry in the 1880s, lamplighters in Belgium smashed electric lamps, and the gas industry mounted aggressive campaigns to discredit electricity. Even within the electrical industry, Edison staged public electrocutions of animals to paint rival AC current as dangerous.

In 1970, U.S. telecommunications employed 421,000 switchboard operators; with automation, that figure has fallen to about 78,000 today. Operators in cities that transitioned to mechanical switching were substantially less likely to have any job ten years later, and older workers were 7% less likely to be working at all.

Critics of past technologies often correctly identified harms that society then struggled to address. Automobiles did contribute to climate change. The Internet did enable surveillance capitalism. Early warnings proved prescient, even when they didn’t stop deployment. Uncritical acceleration created locked-in harms because we chose speed over governance.

But in every case, the technology became ubiquitous and now shapes our modern lives.

Here’s the crucial lesson: resistance alone has never stopped technologies offering substantial economic advantages. If AI provides genuine value—and evidence increasingly suggests it does in specific domains—categorical opposition won’t prevent deployment. But the other side of that lesson is that policies shape how technology affects people. The impact on phone operators was less severe in cities that offered transition programs than in cities that did not—even though the technology was exactly the same everywhere.

Accelerationists’ Hopes – and the Reality

Accelerationists make equally unsupported claims. They promise AI will cure cancer, solve climate change, and create widespread wealth—all based on speculation about future capabilities.

Accelerationists imagine that everyone will have an AI therapist that exactly matches their needs – but AI sycophancy has posed a challenge and some AI interactions have even encouraged suicidal ideation. 

They envision a future where AI can manage large-scale farming and production, delivering us everything we need with no oversight and dramatically reducing costs – but AI has only been able to take over very specific, concrete tasks within production processes.

They see AI transforming education and health care by putting experts in everyone’s pockets, without large-scale successes in either realm.

They hope that the AI revolution will bring us to the technological singularity, dramatically changing every aspect of human life and allowing us to transcend our current conditions – but we don’t know if we’re days or centuries away from something that could be called artificial general intelligence, or if we’ll ever get there.

Despite massive investments, we don’t see clear evidence of economy-wide productivity improvements. And we certainly haven’t cured cancer, solved climate change, or erased income inequality yet. Individual case studies show promise, but broad transformation requires multiple leaps that we haven’t yet made.

The accelerationist position treats beneficial outcomes as automatic and dismisses concerns as obstacles.

What We Should Be Asking: The Pragmatic Path

While critics extrapolate disaster and accelerationists promise transformation, the answerable questions aren’t getting attention:

  • What are the actual, measured productivity gains from AI adoption in different contexts?
  • Which specific jobs face displacement, on what timeline, and with what support systems?
  • What are realistic energy consumption trajectories, accounting for both scaling and efficiency improvements?
  • Which governance frameworks balance innovation with harm mitigation?
  • How do we measure value creation versus value extraction?

These require data, experimentation, and measurement—not speculation.

Societies that navigated previous technological transitions best measured actual displacement, funded concrete transition support, and adjusted based on outcomes. The post-WWII GI Bill and 1950s automation programs at companies like General Electric showed this pragmatic path.

We can do the same with AI:

  • Measure real effects. Focus on what AI actually does to jobs, energy consumption, and productivity. Use that data to inform policy.
  • Support those who get hurt. History shows poorer and older workers suffer most. Experiment with different support mechanisms—retraining programs, financial assistance, even UBI pilots—and measure what works.
  • Build adaptive governance. Create regulatory frameworks that evolve as AI capabilities change. Require transparency, mandate fairness testing, and ensure workers have a voice in deployment decisions.
  • Recognize limits. Deploy AI for specific tasks where it genuinely excels. Keep humans in the loop for high-stakes decisions.
  • Address energy consumption. Invest aggressively in efficiency improvements and clean power, and track actual results.
  • Compensate creators fairly. Experiment with models for compensating artists and writers when AI trains on their work while still enabling useful applications.

Critical Thinking Over Faith

The choice between acceleration and opposition is false. The real choice is between making decisions based on imagined futures and building the governance infrastructure that lets us learn and adapt as we go.

This isn’t the exciting narrative of revolutionary transformation or existential threat. It’s the boring work of measuring impacts, building adaptive systems, and making policy choices based on what we actually observe. Technologies don’t determine outcomes—we can, and we should.


Nov 04 2025

The Three Stages of AI Adoption

How to Actually Deliver Value

Most leadership teams I talk to are wrestling with the same question: How fast can we leverage AI? There’s pressure, sometimes self-imposed, sometimes from the board, to deliver AI transformation yesterday. That pressure creates a temptation to build something ambitious and fully automated right now.

I’ve watched this play out, and it almost never works the way people hope.

There’s a better path, borrowing from how product teams that actually move the needle operate. It has three stages, and each stage teaches you what you need to know before the next one. Start by giving your teams tools and watching what they do with them. Next, automate the workflows that prove repetitive. Finally, move to full autonomy – if and only if the math justifies it.

[Diagram: three panels showing a progression from “Tool Usage” to “Automate with HITL” to “Automate without HITL,” over a gradient arrow running from “Control” on the left to “Agency” on the right.]
I’ve seen this work for product development teams, marketing teams, and customer care teams. The pattern is the same.

Stage 1: Give Them Tools, Then Watch

This first stage is so simple it feels like you’re not doing enough. You’re not building anything. You’re equipping and enabling.

  • A marketing team copy-pastes product info into ChatGPT to generate social posts.
  • A customer care team uses Claude to draft responses.
  • A sales team uses AI to research prospects.
  • A finance team uses it to extract invoice data.
  • An engineering team uses Cursor to help with code comments and documentation.

Humans decide everything: when to use the tool, how to prompt it, whether the output is usable, and what to do with the result. The AI is an assistant inside a manual workflow.

It can feel slow, but it’s valuable learning time: you discover which tasks are genuinely repetitive, which workflows jam people up, and which problems your teams actually want solved. You get real data to work from instead of guessing.

There’s a confidence piece too. Your teams develop intuition about what these systems can and can’t do. They learn how to prompt effectively. They hit the edge cases and see the failure modes in real time, and that ground truth matters because it shapes everything that follows.

Stage 2: Automate, But Keep a Human in the Loop

The next thing you look for is repetition. When someone’s running the same prompt over and over, or they’re sharing a working prompt with teammates who have the same problem, it’s time to start building.

This stage is where you create automation that runs the AI process and surfaces the result for human review and approval. 

  • A support system auto-drafts responses for agents to edit. 
  • A marketing platform generates social variants for the team to choose from. 
  • A finance tool categorizes invoices and flags unusual spend for review.
  • A sales system scores leads and suggests follow-up timing for reps to execute.

You’re shifting from one person making decisions to many people making decisions faster. AI handles the repetitive thinking. Humans handle judgment, context, and protection. You’re also building a feedback loop. Each approval, and each rejection, teaches the system what good looks like inside your organization.

The jump from Stage 1 to Stage 2 takes engineering work, but now it’s justified because you already validated that the problem is real.

Stage 3: Fully Autonomous

At Stage 3, we finally try full automation. This is the first time we consider having no human in the loop. The system runs end-to-end, with output flowing straight into the business process.

  • A marketing platform auto-generates, schedules, and publishes social posts.
  • A finance tool auto-categorizes transactions and settles approved invoices.
  • A sales system auto-scores leads, assigns them to reps, and schedules outreach.
  • A support system auto-routes tickets to specialists and responds directly to customers.
  • An inventory system monitors stock levels and automatically places supplier orders when thresholds are breached.

But “Ready for Stage 3” depends on your function and how much risk you can tolerate.

For low-stakes work, 80% accuracy might be fine. An auto-generated social post that occasionally needs a tweak? That might be acceptable. Higher-stakes decisions might require 95% or 99% accuracy. If you’re giving medical diagnoses, detecting fraud, or making decisions that could harm customers, you might never remove the human, or only when AI itself advances far beyond where we are now.

Stage 3 isn’t a universal destination. It’s an option, available when the risk-reward equation justifies it.
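That risk-reward calculus can be made explicit. Here’s a sketch; the tiers and accuracy bars below are illustrative assumptions, not recommendations — the point is only that the gate compares measured accuracy against stakes, not against a calendar.

```python
# Illustrative accuracy bars per risk tier -- assumptions for the sketch,
# not recommendations. Set your own based on what a mistake costs you.
THRESHOLDS = {
    "low": 0.80,     # e.g., social post drafts: an occasional tweak is fine
    "medium": 0.95,  # e.g., lead scoring that reps act on
    "high": 0.99,    # e.g., customer-facing decisions with real downside
}

def ready_for_stage_3(measured_accuracy: float, risk: str) -> bool:
    """Remove the human only when observed accuracy clears the bar for the stakes."""
    return measured_accuracy >= THRESHOLDS[risk]

print(ready_for_stage_3(0.86, "low"))   # True: good enough for low stakes
print(ready_for_stage_3(0.86, "high"))  # False: keep the human in the loop
```

The same measured accuracy clears the bar for one function and fails it for another, which is exactly why Stage 3 is a per-workflow decision rather than a company-wide milestone.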

The Counterintuitive Speed Argument

Here’s what challenges most people’s instincts: when you’re pressured to show AI value fast, the reflex is to skip Stages 1 and 2 entirely. You build ambitious systems immediately with heavy infrastructure investment to reach your vision.

This approach delays real value.

Without the learning that comes from Stage 1, you’re automating problems you’ve guessed at, not problems people have. You burn engineering cycles on workflows that might not create the value you expected. You ship systems where you haven’t actually seen their failure modes in your business.

The paradox: starting with Stage 1, just giving teams tools and watching, is faster in the end. It cuts straight to problems worth solving. It creates organizational conviction about AI through observation, not assertion. It identifies which automation bets are high-leverage before you spend resources on them.

Stage 2 then becomes a focused, high-confidence engineering effort. Stage 3 happens naturally when the data supports it, not when the calendar demands it.

Moving Forward

The three-stage approach isn’t flashy. You don’t get to write a press release about “fully autonomous AI agents.” What you get is value creation anchored in real problems: it scales because your teams understand it, and it compounds because you understood what actually matters before you built it.

Start with tools, observe, and automate what works. Move to autonomy only when it makes sense.

This path to value is faster than it appears, and vastly faster than sprinting to Stage 3 without first understanding what’s worth automating.

Written by Teague Hopkins · Categorized: Main

Oct 26 2025

Bits, Atoms, and Neurons

For years, we talked about the friction between the digital and physical worlds; between bits and atoms. Bits moved at the speed of light through fiber optic cables, while atoms plodded along in trucks and on conveyor belts. The promise of technology kept hitting the wall of physical logistics. But then something changed: we largely solved the atoms problem. Warehouse automation has doubled processing speeds and reduced errors by 99%.¹ AI-powered route optimization has reduced delivery times by 20-30% and improved on-time delivery rates by up to 40%.² Same-day delivery is now routine, not miraculous.

The new constraint isn’t physical anymore. Wetware has become the limiting factor.

The Hierarchy of Change

Consider the speed at which different substrates can change. A processor running at 3.2 GHz executes 3.2 billion cycles per second, each cycle taking roughly 0.3 nanoseconds. Data transmission across networks operates on millisecond timescales, with good internet latency ranging from 30-100ms; sending a file across the world takes seconds, often less.³

Physical packages take days. Standard domestic shipping requires 2-5 business days. Express services can manage overnight delivery, but we’re still measuring in days, not seconds. The atoms are slower than the bits.⁴

But neural rewiring? That takes weeks to months. Research shows habit formation averages 66 days, with individual experiences ranging from 18 to 254 days depending on complexity and consistency. Creating new neural pathways requires repeated activation, gradual strengthening of synaptic connections, and shifting from deliberate prefrontal cortex control to more automatic processing. This biological rewiring cannot be rushed. It requires time for structural changes in brain tissue.⁵

The more physical and biological the substrate, the slower the change. Digital bits rearrange nearly instantaneously. Physical atoms must be transported through space. Living neurons require metabolic processes, protein synthesis, and structural remodeling that takes time.
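The gap is easier to feel as ratios. A quick back-of-the-envelope calculation using the figures above (0.3 ns per CPU cycle, ~50 ms network latency, ~3 days for a standard package, ~66 days for average habit formation):

```python
# Rough timescales in seconds, taken from the figures cited in the text.
timescales = {
    "CPU cycle (3.2 GHz)": 0.3e-9,      # ~0.3 nanoseconds
    "network round trip": 50e-3,        # ~50 ms, mid-range good latency
    "package delivery": 3 * 24 * 3600,  # ~3 days, standard shipping
    "habit formation": 66 * 24 * 3600,  # ~66 days on average
}

cycle = timescales["CPU cycle (3.2 GHz)"]
for name, seconds in timescales.items():
    # Express each substrate's timescale as a multiple of one CPU cycle.
    print(f"{name:22s} ~{seconds / cycle:.0e}x a CPU cycle")
```

Habit formation comes out around sixteen orders of magnitude slower than a processor cycle, which is the whole “wetware bottleneck” in one number.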

The Recursive Acceleration Problem

Here’s where it gets really challenging: AI systems, virtual neurons, are adapting faster than human neurons can adapt to those improvements. This creates a recursive acceleration problem that previous technological revolutions didn’t face.

The evidence is stark. Fewer than 10% of U.S. companies actively use AI in production, and 68% of organizations move 30% or fewer AI experiments into full deployment. Yet 75% of knowledge workers globally use AI at work, with employees at over 90% of companies using personal AI tools, often without official approval.⁶ The technology exists. People want to use it. But institutions cannot adapt fast enough.

Even if AI development stopped today, it would take us 5-10 years to learn all the habits and new ways of working to take advantage of the capabilities we already have. Organizations require 18-24 months just to develop new upskilling programs, and by the time they’re deployed, organizational needs have often changed. The World Economic Forum projects that 59% of the global workforce will require reskilling by 2030. That’s more than half the workforce needing reskilling within five years.⁷

Recent research from the St. Louis Fed reveals a troubling correlation: occupations with higher AI exposure experienced larger unemployment increases between 2022 and 2025, with a 0.47 correlation coefficient. This isn’t hypothetical anymore. The wetware bottleneck is producing structural unemployment right now.⁸

Cyborgs Walk Among Us

I think we need to recognize that we’re already post-human cyborgs (or at least humans partnered with computer sidekicks). I use my computer to calculate things, remember things, and now even summarize and organize unstructured data with LLMs. I’ve long been a proponent of “Computers should do what computers are good at so humans can do what humans are good at.”

The problem is that we have relatively low-bandwidth interfaces with our digital extensions. We type. We click. We read screens. These are slow, sequential processes compared to the speed at which our silicon partners operate. The mismatch between silicon speed (nanoseconds) and synaptic speed (weeks to months) represents a fundamental constraint on 21st-century progress.

A Qualitative Difference

The difference between automating the movement of atoms and accelerating the rewiring of neurons is qualitative rather than quantitative. We don’t yet have a clear concept of what the solution looks like.

Is it educational innovation? Brain-computer interfaces like Neuralink? Becoming post-human cyborgs with higher bandwidth? Nootropic drugs or genetic engineering for higher IQ? Is it a societal change that shifts the balance of learning and work? Modern work already requires more schooling than in the past on average. Do we move to a world with universal basic income supporting an ever-learning workforce?

Previous technological revolutions had clearer adaptation paths. The Industrial Revolution was about moving atoms in new ways: we built factories, trained workers for specific tasks, and created new economic structures around manufacturing. The Internet was about moving bits in new ways (as was the printing press before it): we learned to browse websites, send emails, and eventually work remotely.

But this? This is about virtual neurons adapting faster than biological neurons. That’s a different category of challenge entirely.

The Cultural Obstacle

The social and biological consequences feel bigger than previous innovations, particularly in American culture. We have such a deep concept of tying worth to work, thanks to the Protestant work ethic. Work provides not just income but identity, status, purpose, and self-worth. Research shows unemployment causes severe mental health decline.⁹

We’ll have to figure out how to overcome this if we want to survive the mental health hit of a post-work society. Productivity will increasingly be about building and adapting autonomous systems instead of doing repetitive tasks. That could lead to massive unemployment. We don’t know what to do with that, but we’ll need ways for people to create, not just be consumers. Pure consumption doesn’t lead to lasting happiness and could be a major pitfall for our collective mental health.

A Hopeful Vision

Yet there is cause for hope. When people have UBI, most continue to work; the main exceptions are three groups: students, parents of young children, and the retired or chronically ill. The possibility of lives made of more learning, caretaking, and recovery seems incredibly human and humane. That’s a world I want to live in.¹⁰

Meaningful work provides psychological benefits that extend far beyond income. But “work” doesn’t have to mean what it meant in the 20th century. It can mean learning. Caregiving. Creating. Recovering. Building community. All the things that make us human but that we’ve been too busy earning a living to fully embrace.¹¹

The Path Forward

We’re not going to get there without a combination of forces. We need policy (UBI in particular, or something like it) to provide the economic foundation. It would be nice to avoid massive unemployment, but I don’t think we can adapt fast enough for that. Market forces will likely force the issue through displacement. Grassroots movements will be crucial in changing attitudes toward work that have persisted for centuries.

The transition will be chaotic. Neural rewiring takes weeks to months. Organizational adaptation takes years. Cultural shifts around work identity could take generations. There will be a debate about whether there’s value in remaining purely human or whether we should embrace better brain-computer interfaces and biohacking. Does genetic engineering for higher IQ help us as a species? I don’t know.

I prefer to think that we have a solution through reconceptualizing work. But that doesn’t mean it will be easy.

Individual Agency in an Era of Structural Change

The good news for individuals is that focusing on your own adaptation will help you in either circumstance. Either you are part of moving us toward that beautiful vision of the future, or you are positioning yourself to be part of a small elite who still have marketable skills in a dystopian future.

Skills that remain valuable include creative problem-solving, complex communication, ethical reasoning, emotional intelligence, and adaptability itself. But more importantly, the ability to reconfigure your neural pathways, even if it takes weeks, is the meta-skill that enables everything else.¹²

I hope for the former scenario. And because everyone preparing and adapting would lead us toward that more humane future, I’m fundamentally optimistic. I’m committing to helping as many motivated people as I can successfully learn and transition.

Conclusion

We’ve moved from “bits vs. atoms” to “bits vs. brains.” The technology adoption curve now reveals a stark gap: the distance between digital capabilities and human ability to use those capabilities is growing and accelerating. Unlike logistics, we cannot simply automate human learning and organizational adaptation at scale.¹³

But recognizing the constraint is the first step toward addressing it. If wetware is the bottleneck, then investing in human adaptation, through education, through cultural change, and through new economic structures that support lifelong learning, becomes the most important work of our time.

The question isn’t whether we’ll face this transition. We’re already in it. The question is whether we’ll navigate it thoughtfully, building systems and cultures that support human flourishing, or whether we’ll let market forces and technological momentum carry us into a future we didn’t choose.

I believe we can choose. But we need to start now, with clear eyes about the challenge we face and the biological constraints we’re working with. The bits will keep accelerating. The atoms are largely solved. The neurons—our neurons—will adapt at their own pace.

Our job is to create the conditions where that pace is enough.


¹ https://www.linkedin.com/pulse/logistics-automation-breakthrough-intelligent-supply-chain-akabot-koa9c; Karadex data on 99.9% pick accuracy; MHI data showing up to 85% productivity increases from warehouse automation

² Artech Digital: 20% delivery time reduction, 40% on-time rate improvement; DHL India case study; Various studies showing 20-30% delivery time improvements

³ https://www.intel.com/content/www/us/en/gaming/resources/cpu-clock-speed.html; Network latency data

⁴ https://redstagfulfillment.com/fedex-ups-usps-delivery-times/

⁵ https://www.mendi.io/blogs/brain-health/how-long-does-it-take-to-rewire-your-brain-for-better-mental-health; UCL study on habit formation; Systematic review and meta-analysis on habit formation timing

⁶ https://www.glean.com/perspectives/benefits-and-challenges-ai-adoption; Microsoft Work Trend Index 2024; MIT study on shadow AI economy

⁷ https://www.ere.net/articles/rapid-reskilling-at-scale-why-the-future-of-work-depends-on-it; WEF Future of Jobs Report 2025

⁸ https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation

⁹ https://www.insidehighered.com/opinion/columns/higher-ed-gamma/2024/05/28/how-work-and-career-became-central-americans-identity; APA research on unemployment and mental health; PMC study on unemployment and mental health

¹⁰ https://globalaffairs.org/commentary-and-analysis/blogs/multiple-countries-have-tested-universal-basic-income-and-it-works; GiveDirectly UBI study results; German Basic Income experiment

¹¹ https://peakpsych.com.au/resources-for-individuals/the-health-benefits-of-meaningful-work/; Research on meaningful work and well-being

¹² https://www.paybump.com/resources/6-future-proof-job-skills-in-the-age-of-ai

¹³ https://whatfix.com/blog/technology-adoption-curve/

Written by Teague Hopkins · Categorized: Main


Copyright © 2026 Teague Hopkins