
Teague Hopkins

Mindful Product Management


Nov 17 2025

The False Choice

Why History Teaches Us to Reject Faith-Based Arguments on Both Sides of the AI Debate

In the era of AI, there are extremists on both sides of the issue. One side, which I have taken to calling the “accelerationists,” thinks AI will solve major societal problems and that we shouldn’t make rules, because that will just slow things down. The other side, “critics,” issues doomsday warnings about environmental harm, stolen content, lost jobs, and even the end of humanity, often arguing that AI development should be paused indefinitely.

Both sides are deploying a sort of faith-based reasoning: treating unknowns as certainties and prediction as destiny. But lessons from history show us that technologies’ effects are driven by how we choose to use them, not necessarily by the inherent qualities of the technology itself. I advocate for rejecting both unchecked optimism and doom in favor of a practical approach: measuring what actually happens and putting guardrails in place that encourage the benefits and mitigate the harms.

[Illustration: a horizontal balance beam or tightrope stretches across the image, one end tinted in cool blues and teals and the other in warm reds and oranges, representing two opposing extremes.]

Critics’ Concerns

When accelerationists dismiss AI critics as “doomers,” they’re missing what’s actually happening. The opposition includes multiple groups with distinct, valid concerns:

Environmental critics note that, by one prominent estimate, training a single large language model emits as much CO2 as five cars over their lifetimes, and AI data centers are projected to consume 3-4% of global electricity by 2030.

Artists and creators watched companies scrape their copyrighted work without permission or payment.

Practical skeptics tried the tools, found them underwhelming, and see another Silicon Valley bubble. Sometimes this comes from not knowing how to use AI well, but often it reflects genuine disappointment when capabilities don’t match the marketing.

Workers see a real risk of displacement as automation obviates the need for some specific jobs.

Finally, those worried about existential risk have concerns ranging from AI-enabled authoritarianism to extinction scenarios.

What brings these groups together is shared anxiety about rapid deployment without governance. But many of their arguments rest on projections rather than measured impacts.

The Pattern: Resistance Has Never Stopped Valuable Technology

Every major automation wave has faced opposition.

When the printing press arrived in Venice in the 1470s, the monk Filippo de Strata wrote to the Doge comparing the pen to a “virgin” and the printing press to a “whore.” Some scribes destroyed presses. 

In the 1810s, the Luddites opposed mechanized looms because automation genuinely threatened their economic position. 

When electric lighting threatened the gas industry in the 1880s, lamplighters in Belgium smashed electric lamps, and the gas industry mounted aggressive campaigns to discredit electricity. Edison, fighting to defend his direct-current system, even arranged public electrocutions of animals to paint rival alternating current as dangerous.

In 1970, telecommunications employed 421,000 switchboard operators, compared to about 78,000 today with automation. Operators in cities that transitioned to mechanical switching were substantially less likely to have any job 10 years later. Older workers were 7% less likely to be working at all.

Critics of past technologies often correctly identified harms that society then struggled to address. Automobiles did contribute to climate change. The Internet did enable surveillance capitalism. Early warnings proved prescient, even when they didn’t stop deployment. Uncritical acceleration created locked-in harms because we chose speed over governance.

But in every case, the technology became ubiquitous and now shapes our modern lives.

Here’s the crucial lesson: resistance alone has never stopped technologies offering substantial economic advantages. If AI provides genuine value—and evidence increasingly suggests it does in specific domains—categorical opposition won’t prevent deployment. But the other side of that lesson is that policies shape how technology affects people. The impact on phone operators was less severe in cities that offered transition programs than in cities that did not—even though the technology was exactly the same everywhere.

Accelerationists’ Hopes – and the Reality

Accelerationists make equally unsupported claims. They promise AI will cure cancer, solve climate change, and create widespread wealth—all based on speculation about future capabilities.

Accelerationists imagine that everyone will have an AI therapist exactly matched to their needs – but AI sycophancy has proven a persistent problem, and some chatbot interactions have even reinforced suicidal ideation.

They envision a future where AI can manage large-scale farming and production, delivering us everything we need with no oversight and dramatically reducing costs – but so far AI has only been able to take over narrow, well-defined tasks within production processes.

They see AI transforming education and health care by putting an expert in everyone’s pocket, though neither realm has yet seen success at scale.

They hope that the AI revolution will bring us to the technological singularity, dramatically changing every aspect of human life and allowing us to transcend our current conditions – but we don’t know whether we’re days or centuries away from anything that could be called artificial general intelligence, or whether we’ll ever get there.

Despite massive investments, we don’t see clear evidence of economy-wide productivity improvements. And we certainly haven’t cured cancer, solved climate change, or erased income inequality. Individual case studies show promise, but broad transformation requires multiple leaps that we haven’t yet made.

The accelerationist position treats beneficial outcomes as automatic and dismisses concerns as mere obstacles.

What We Should Be Asking: The Pragmatic Path

While critics extrapolate disaster, and accelerationists promise transformation, answerable questions aren’t getting attention:

  • What are the actual, measured productivity gains from AI adoption in different contexts?
  • Which specific jobs face displacement, on what timeline, and with what support systems?
  • What are realistic energy consumption trajectories, accounting for both scaling and efficiency improvements?
  • Which governance frameworks balance innovation with harm mitigation?
  • How do we measure value creation versus value extraction?

These require data, experimentation, and measurement—not speculation.

Societies that navigated previous technological transitions best measured actual displacement, funded concrete transition support, and adjusted based on outcomes. The post-WWII GI Bill and 1950s automation programs at companies like General Electric showed this pragmatic path.

We can do the same with AI:

  • Measure real effects. Focus on what AI actually does to jobs, energy consumption, and productivity. Use that data to inform policy.
  • Support those who get hurt. History shows poorer and older workers suffer most. Experiment with different support mechanisms—retraining programs, financial assistance, even UBI pilots—and measure what works.
  • Build adaptive governance. Create regulatory frameworks that evolve as AI capabilities change. Require transparency, mandate fairness testing, and ensure workers have a voice in deployment decisions.
  • Recognize limits. Deploy AI for specific tasks where it genuinely excels. Keep humans in the loop for high-stakes decisions.
  • Address energy consumption. Invest aggressively in efficiency improvements and clean power, and track actual results.
  • Compensate creators fairly. Experiment with models for compensating artists and writers when AI trains on their work while still enabling useful applications.

Critical Thinking Over Faith

The choice between acceleration and opposition is false. The real choice is between making decisions based on imagined futures and building the governance infrastructure that lets us learn and adapt as we go.

This isn’t the exciting narrative of revolutionary transformation or existential threat. It’s the boring work of measuring impacts, building adaptive systems, and making policy choices based on what we actually observe. Technologies don’t determine outcomes—we can, and we should.



Copyright © 2025 Teague Hopkins
 
