
Someone challenged a point I’d made about AI recently, and I noticed something unsettling: I was composing my rebuttal before I’d finished reading their argument. Not weighing it; defending against it. My chest was tight, the sympathetic nervous system fully online before my prefrontal cortex had a chance to catch up.
I caught it that time, though I don’t always.
That reaction is what this essay is about, not because it was unusual, but because I keep watching the same thing happen to people I respect, and the pattern has become impossible to ignore.
Smart, thoughtful people who reason carefully about hard problems in every other area of their lives encounter the topic of AI and go rigid. Their thinking narrows. They reach for analogies instead of evidence, collapse complex questions into binary positions, and defend those positions with an intensity that has nothing to do with the strength of the underlying argument.
A few weeks ago, I wrote about the false choice in the AI debate: the idea that we’re forced to pick between uncritical acceleration and categorical opposition, when neither position survives contact with reality. The response was telling. People who agreed mostly engaged with the argument; people who disagreed mostly didn’t. Raise a specific point, and the response leapfrogs past it to a broader grievance: factory labor analogies, environmental impact, copyright, corporate exploitation narratives, sweeping claims about the death of critical thinking. Not wrong, necessarily, but not engaged with what I actually said. Pre-loaded, as if the conclusion had been reached before the conversation started and the reasoning existed only to justify it.
I keep catching myself tempted to do the same thing, and the pattern is so consistent, so unlike how these same people engage with other difficult topics, that I’ve stopped believing this is a thinking problem. Something else is going on, and I think naming it matters, because the thing driving this reaction isn’t just making the conversation unproductive; it’s actively dangerous.
What Fear Looks Like When It’s Wearing a Suit
Here’s what I’ve come to believe is happening, and I include myself in this.
For knowledge workers, educators, and writers, AI doesn’t just represent a new technology to evaluate. It represents a direct challenge to the thing that earns us our seat at the table.
If you spent years learning to write clearly, and a tool now helps anyone write clearly, the tool hasn’t just changed the landscape; it’s devalued your specific investment. If your professional identity is built on credentialed expertise and the ability to articulate complex ideas in polished prose, a technology that commoditizes articulation feels less like disruption and more like erasure. I’ve built a career on being the person who sees systems clearly and helps organizations navigate complexity, and if AI gets good enough at that, I’m not sure what I still have to offer. I’ve sat with that question, and it’s not comfortable.
This isn’t a rational calculation any of us are making consciously. It’s an identity threat, and identity threats activate the nervous system before they reach the prefrontal cortex. When your brain perceives a threat to who you are, it doesn’t give you your best thinking. It gives you pattern-matching, where this looks like past exploitation and so it must be exploitation. It gives you position-defending, interpreting new information through a filter you’ve already locked in. It gives you historical analogies chosen not because they’re analytically apt but because they feel right, and feeling right is enough when your nervous system is running the show.
The result looks like a principled argument and sounds like one, but it has a tell: it doesn’t update. You can present new evidence, draw finer distinctions, point to specific use cases where the concerns don’t apply, and the response doesn’t shift; it just reasserts, often with more emotional intensity. That’s not reasoning; that’s defense.
I know what it feels like from the inside, because I’ve done it. When you’re in it, it feels like clarity, like you’re the one seeing the situation for what it really is, which is exactly what makes it so hard to catch.
Ancient Hardware, Modern Problem
The fear is legitimate; if you’ve built your career on skills that are genuinely being commoditized, you should feel unsettled, because that’s an appropriate response to a real situation.
The problem is that the response is running on ancient hardware. Pattern-matching, rapid threat assessment, in-group loyalty, the tendency to act first and analyze later: these heuristics kept our ancestors alive, and they’re brilliantly adapted for a world where threats are physical, immediate, and binary. A rustle in the grass really is either a predator or nothing; the cost of a false positive is a wasted sprint, while the cost of a false negative is death. Of course we evolved to run first and ask questions later.
But AI isn’t a predator in the grass. It’s a complex, evolving system with unevenly distributed risks and benefits that will play out over decades, and the heuristics that protected us from lions are worse than useless here; they actively prevent us from seeing the situation clearly. When the sympathetic nervous system takes over, we lose access to exactly the cognitive tools the moment demands: nuance, uncertainty tolerance, the ability to hold competing truths simultaneously.
What separates productive engagement from defensive reaction is deceptively simple: the ability to notice when the alarm is firing and choose not to let it drive. This is, at its core, a mindfulness problem; not in the scented-candle sense, but in the most practical sense imaginable. Can you observe your own nervous system responding, recognize that the response is not the same thing as the reality, and create enough space between stimulus and reaction to engage with what’s actually in front of you?
The people I know who are engaging most productively with AI, whether as enthusiasts or as critics, share this: they’ve learned to notice when they’re reacting and pause long enough to start thinking instead. What they see when they do is worth paying attention to.
What Clear Eyes See
When you get past the reflexive fear and look at the actual landscape, the stakes are larger and more urgent than anything we’ve been arguing about.
The concentration of AI capabilities in the hands of a few companies and nations is accelerating. Biological and cybersecurity risks scale with model capability. The potential for economic disruption severe enough to destabilize democracies is growing, and we have no precedent for managing it. The window for building governance frameworks is finite, and it’s narrowing.
Dario Amodei, the CEO of Anthropic, recently described this moment as the adolescence of technology: a period where humanity will have access to almost unimaginable power before we’ve developed the maturity to wield it. Yes, he runs an AI company, and that’s a conflict of interest worth noting, but the risks he catalogs don’t become less real because of who’s describing them. If anything, the fact that someone inside the industry is sounding these alarms should sharpen our attention.
The “adolescence” metaphor works in a second direction, one I don’t think Amodei intended. The discourse about AI is adolescent too, in a specific psychological sense; not because it lacks intelligence, but because it lacks the ability to regulate emotional responses long enough to engage with complexity. It personalizes everything, collapses nuance into loyalty tests, and confuses the intensity of feeling with the validity of position.
Whether someone used AI to edit a social media post is not among the real dangers. The real dangers are playing out at a scale and speed that demands the best thinking we have, and that thinking is largely absent from the conversation.
Somewhere right now, decisions are being made about compute infrastructure, data rights, economic transition policy, and the acceptable boundaries of autonomous AI systems, decisions that will shape the next several decades. The table where they’re being made has empty chairs. The people who should be filling those chairs—educators who understand how humans actually learn, writers who understand the relationship between language and thought, labor advocates who understand economic displacement, ethicists who’ve spent careers on exactly these questions—are standing outside the room, arguing about whether the room should exist.
I get it: the room shouldn’t have been built this way, the power dynamics are wrong, and the incentives are misaligned. All true. But the room exists, the decisions are being made, and refusing to engage isn’t the principled stand it feels like. The people already at the table are happy to proceed without us.
Here’s the part that nobody on the sidelines wants to hear, the part I think matters more than anything else in this essay: if you aren’t engaging with the technology directly, learning how it actually works, what it can and can’t do, where it breaks, where it surprises you, then you are forfeiting the ability to have an informed opinion about it, and an uninformed opinion, no matter how principled its origins, is just noise.
I understand the impulse to boycott; it feels like integrity, like refusing to be complicit. But the practical effect is this: the only people developing deep, hands-on knowledge of what these systems actually do are the people building and selling them. If humanists, educators, ethicists, and labor advocates cede that ground, if the only people who truly understand the technology are the ones motivated by market share, then those are the people who will write the policies, set the defaults, and define what “responsible AI” means. We will get a future shaped entirely by the people with the least incentive to protect what we care about most.
This is the central paradox of principled disengagement: by refusing to touch the technology, you surrender the exact knowledge you would need to hold it accountable. You can’t effectively critique what you don’t understand, propose alternatives to systems you’ve never examined, or spot the gaps in a governance framework if you don’t know what the technology is actually capable of today—not what someone told you six months ago, not what you extrapolated from a headline, but what it does right now when you sit with it and push.
The critics I find most useful aren’t the ones who’ve decided AI is irredeemable; they’re the ones who’ve gotten specific: this model was trained on this data without this consent, that deployment in that context produced those measurable harms, this governance framework has these gaps. That kind of opposition actually changes outcomes, and every single one of those critics got there by engaging with the technology closely enough to know where the real leverage points are.
Come Back Online
The tables where AI’s future is being shaped, from congressional halls to conference calls, need more than technologists and investors. They need people who understand learning, language, labor, equity, and power: people who’ve spent their careers on those problems and whose expertise is not diminished by AI but made more essential by it. That expertise only counts, though, if it’s grounded in firsthand knowledge of the thing you’re trying to shape. The world doesn’t need more people with strong opinions about AI and no experience of it. It needs people who’ve done the work to understand it and have the values to insist it serve everyone, not just the people who built it.
This is an invitation, but it’s also an invocation. Our nervous systems are telling us to fight or flee, and the moment calls for neither. It calls for the hard, unglamorous work of showing up, getting our hands dirty with the actual technology, paying close attention to what it’s doing and what it isn’t, and insisting that the people building the future account for its consequences. We need you in this conversation—not your fear, not your reflexes, you: your expertise, your values, your willingness to grapple with something genuinely hard.
The fear is real, and the threat is real, but the threat isn’t the tool. It’s that we’ll spend the critical window for shaping it arguing about the wrong things, or worse, refusing to engage with it at all, while the people with the fewest scruples shape it without opposition.
You are too important to sit this out. The stakes are too high for anything less than our best thinking.