
Teague Hopkins

Mindful Product Management


Nov 17 2025

The False Choice

Why History Teaches Us to Reject Faith-Based Arguments on Both Sides of the AI Debate

In the era of AI, there are extremists on both sides of the issue. One side, which I have taken to calling the “accelerationists,” thinks AI will solve major societal problems and that we shouldn’t make rules, because that will just slow things down. The other side, “critics,” issues doomsday warnings about environmental harm, stolen content, lost jobs, and even the end of humanity, often arguing that AI development should be paused indefinitely.

Both sides are deploying a sort of faith-based reasoning: treating unknowns as certainties and prediction as destiny. But lessons from history show us that technologies’ effects are driven by how we choose to use them, not necessarily by the inherent qualities of the technology itself. I advocate for rejecting both unchecked optimism and doom in favor of a practical approach: measuring what actually happens and figuring out how to put in place parameters that encourage the benefits and mitigate the harms.

[Illustration: a horizontal balance beam stretching across the image, one end tinted in cool blues and teals and the other in warm reds and oranges, representing two opposing extremes.]

Critics’ Concerns

When accelerationists dismiss AI critics as “doomers,” they’re missing what’s actually happening. The opposition includes multiple groups with distinct, valid concerns:

Environmental critics note that training one large language model emits as much CO2 as five cars over their lifetimes, and AI data centers are projected to consume 3-4% of global electricity by 2030.

Artists and creators watched companies scrape their copyrighted work without permission or payment.

Practical skeptics tried the tools, found them underwhelming, and see another Silicon Valley bubble. Sometimes this comes from not knowing how to use AI well, but often it reflects genuine disappointment when capabilities don’t match the marketing.

Workers see a real risk of displacement as automation obviates the need for some specific jobs.

Finally, those worried about existential risk have concerns ranging from AI-enabled authoritarianism to extinction scenarios.

What brings these groups together is shared anxiety about rapid deployment without governance. But many arguments rest on projections rather than measured impacts.

The Pattern: Resistance Has Never Stopped Valuable Technology

Every major automation wave has faced opposition.

When the printing press arrived in Venice in the 1470s, the monk Filippo de Strata wrote to the Doge comparing the pen to a “virgin” and the printing press to a “whore.” Some scribes destroyed presses. 

In the 1810s, the Luddites opposed mechanized looms because automation genuinely threatened their economic position. 

When electric lighting threatened the gas industry in the 1880s, lamplighters in Belgium smashed electric lamps. The gas industry mounted aggressive campaigns to discredit electricity. Edison even arranged public electrocutions of animals to demonstrate how dangerous AC current was.

In 1970, telecommunications employed 421,000 switchboard operators, compared to about 78,000 today with automation. Operators in cities that transitioned to mechanical switching were substantially less likely to have any job 10 years later. Older workers were 7% less likely to be working at all.

Critics of past technologies often correctly identified harms that society then struggled to address. Automobiles did contribute to climate change. The Internet did enable surveillance capitalism. Early warnings proved prescient, even when they didn’t stop deployment. Uncritical acceleration created locked-in harms because we chose speed over governance.

But in every case, the technology became ubiquitous and now shapes our modern lives.

Here’s the crucial lesson: resistance alone has never stopped technologies offering substantial economic advantages. If AI provides genuine value—and evidence increasingly suggests it does in specific domains—categorical opposition won’t prevent deployment. But the other side of that lesson is that policies shape how technology affects people. The impact on phone operators was less severe in cities that offered transition programs than in cities that did not—even though the technology was exactly the same everywhere.

Accelerationists’ Hopes – and the Reality

Accelerationists make equally unsupported claims. They promise AI will cure cancer, solve climate change, and create widespread wealth—all based on speculation about future capabilities.

Accelerationists imagine that everyone will have an AI therapist that exactly matches their needs – but AI sycophancy has posed a challenge, and some AI interactions have even encouraged suicidal ideation.

They envision a future where AI can manage large-scale farming and production, delivering us everything we need with no oversight and dramatically reducing costs – but AI has only been able to take over very specific, concrete tasks within production processes.

They see AI transforming education and health care by putting experts in everyone’s pockets, though neither realm has yet seen large-scale successes.

They hope that the AI revolution will bring us to the technological singularity, dramatically changing every aspect of human life and allowing us to transcend our current conditions – but we don’t know if we’re days or centuries away from something that could be called artificial general intelligence, or if we’ll ever get there.

Despite massive investments, we don’t see clear evidence of economy-wide productivity improvements. And we certainly haven’t cured cancer, solved climate change, or erased income inequality yet. Individual case studies show promise, but broad transformation requires multiple leaps that we haven’t yet made.

The accelerationist position treats beneficial outcomes as automatic and dismisses concerns as obstacles.

What We Should Be Asking: The Pragmatic Path

While critics extrapolate disaster and accelerationists promise transformation, answerable questions aren’t getting attention:

  • What are the actual, measured productivity gains from AI adoption in different contexts?
  • Which specific jobs face displacement, on what timeline, and with what support systems?
  • What are realistic energy consumption trajectories, accounting for both scaling and efficiency improvements?
  • Which governance frameworks balance innovation with harm mitigation?
  • How do we measure value creation versus value extraction?

These require data, experimentation, and measurement—not speculation.

The societies that navigated previous technological transitions best were those that measured actual displacement, funded concrete transition support, and adjusted based on outcomes. The post-WWII GI Bill and 1950s automation programs at companies like General Electric showed this pragmatic path.

We can do the same with AI:

  • Measure real effects. Focus on what AI actually does to jobs, energy consumption, and productivity. Use that data to inform policy.
  • Support those who get hurt. History shows poorer and older workers suffer most. Experiment with different support mechanisms—retraining programs, financial assistance, even UBI pilots—and measure what works.
  • Build adaptive governance. Create regulatory frameworks that evolve as AI capabilities change. Require transparency, mandate fairness testing, and ensure workers have a voice in deployment decisions.
  • Recognize limits. Deploy AI for specific tasks where it genuinely excels. Keep humans in the loop for high-stakes decisions.
  • Address energy consumption. Invest aggressively in efficiency improvements and clean power, and track actual results.
  • Compensate creators fairly. Experiment with models for compensating artists and writers when AI trains on their work while still enabling useful applications.

Critical Thinking Over Faith

The choice between acceleration and opposition is false. The real choice is between making decisions based on imagined futures and building the governance infrastructure that lets us learn and adapt as we go.

This isn’t the exciting narrative of revolutionary transformation or existential threat. It’s the boring work of measuring impacts, building adaptive systems, and making policy choices based on what we actually observe. Technologies don’t determine outcomes—we can, and we should.


Nov 04 2025

The Three Stages of AI Adoption

How to Actually Deliver Value

Most leadership teams I talk to are wrestling with the same question: How fast can we leverage AI? There’s pressure, sometimes self-imposed, sometimes from the board, to deliver AI transformation yesterday. That pressure creates a temptation to build something ambitious and fully automated right now.

I’ve watched this play out, and it almost never works the way people hope.

There’s a better path, borrowing from how product teams that actually move the needle operate. It has three stages, and each stage teaches you what you need to know before the next one. Start by giving your teams tools and watching what they do with them. Next, automate the workflows that prove repetitive. Finally, move to full autonomy – if and only if the math justifies it.

[Illustration: three panels showing a progression from “Tool Usage” to “Automate with HITL” to “Automate without HITL,” above a gradient arrow running from “Control” on the left to “Agency” on the right.]

I’ve seen this work for product development teams, marketing teams, and customer care teams. The pattern is the same.

Stage 1: Give Them Tools, Then Watch

This first stage is so simple it feels like you’re not doing enough. You’re not building anything. You’re equipping and enabling.

  • A marketing team copy-pastes product info into ChatGPT to generate social posts.
  • A customer care team uses Claude to draft responses.
  • A sales team researches prospects.
  • A finance team extracts invoice data.
  • An engineering team uses Cursor to help with code comments and documentation.

Humans decide everything: when to use the tool, how to prompt it, whether the output is usable, and what to do with the result. The AI is an assistant inside a manual workflow.

It can feel slow, but it’s valuable learning time: you discover which tasks are genuinely repetitive, which workflows jam people up, and which problems your teams actually want solved. You get real data to work from instead of guessing.

There’s a confidence piece too. Your teams develop intuition about what these systems can and can’t do. They learn how to prompt effectively. They hit the edge cases and see the failure modes in real time, and that ground truth matters because it shapes everything that follows.

Stage 2: Automate, But Keep a Human in the Loop

The next thing you look for is repetition. When someone’s running the same prompt over and over, or they’re sharing a working prompt with teammates who have the same problem, it’s time to start building.
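
What counts as “repetitive enough to build”? Here’s a minimal sketch, assuming you kept some informal log of the prompts people ran during Stage 1 (the log format, function name, and threshold here are all hypothetical):

```python
from collections import Counter

def automation_candidates(prompt_log: list[str], min_repeats: int = 10) -> list[str]:
    """Return prompts that recur often enough to justify building automation.

    `prompt_log` is a hypothetical flat list of prompts gathered during
    Stage 1; normalizing (strip + lowercase) is a crude way to group
    near-identical prompts.
    """
    counts = Counter(p.strip().lower() for p in prompt_log)
    return [prompt for prompt, n in counts.most_common() if n >= min_repeats]

# Example: a prompt run 12 times by the support team is a Stage 2 candidate.
log = ["Summarize this ticket"] * 12 + ["Write a haiku"] * 2
print(automation_candidates(log))  # ['summarize this ticket']
```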

This stage is where you create automation that runs the AI process and surfaces the result for human review and approval. 

  • A support system auto-drafts responses for agents to edit. 
  • A marketing platform generates social variants for the team to choose from. 
  • A finance tool categorizes invoices and flags unusual spend for review.
  • A sales system scores leads and suggests follow-up timing for reps to execute.

You’re shifting from one person making decisions to many people making decisions faster. AI handles the repetitive thinking. Humans handle judgment, context, and protection. You’re also building a feedback loop. Each approval, and each rejection, teaches the system what good looks like inside your organization.
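
As a concrete sketch of this stage, here’s what a tiny human-in-the-loop review queue might look like. Everything here is illustrative: `draft_with_ai` is a stand-in for whatever model call your team validated in Stage 1, and the names are invented. The structure is the point: AI drafts, a human approves or edits, and every decision lands in a feedback log.

```python
from dataclasses import dataclass, field

def draft_with_ai(ticket: str) -> str:
    # Stand-in for a real model call validated during Stage 1.
    return f"Draft reply for: {ticket}"

@dataclass
class ReviewItem:
    ticket: str
    draft: str
    approved: bool | None = None      # None until a human reviews it
    final_text: str | None = None

@dataclass
class ReviewQueue:
    items: list[ReviewItem] = field(default_factory=list)
    feedback_log: list[tuple[str, bool]] = field(default_factory=list)

    def submit(self, ticket: str) -> ReviewItem:
        # AI handles the repetitive thinking: every ticket gets a draft.
        item = ReviewItem(ticket=ticket, draft=draft_with_ai(ticket))
        self.items.append(item)
        return item

    def review(self, item: ReviewItem, approved: bool,
               edited_text: str | None = None) -> None:
        # Humans handle judgment: approve, edit, or reject the draft.
        item.approved = approved
        item.final_text = edited_text or (item.draft if approved else None)
        # Each approval or rejection feeds the loop that teaches the
        # system what "good" looks like inside your organization.
        self.feedback_log.append((item.draft, approved))

queue = ReviewQueue()
item = queue.submit("Customer can't log in")
queue.review(item, approved=True)   # or approved=False, edited_text="..."
```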

The jump from Stage 1 to Stage 2 takes engineering work, but now it’s justified because you already validated that the problem is real.

Stage 3: Fully Autonomous

At Stage 3, we finally try full automation. This is the first time we consider having no human in the loop. The system can run end-to-end, with output flowing straight into the business process.

  • A marketing platform auto-generates, schedules, and publishes social posts.
  • A finance tool auto-categorizes transactions and settles approved invoices.
  • A sales system auto-scores leads, assigns them to reps, and schedules outreach.
  • A support system auto-routes tickets to specialists and responds directly to customers.
  • An inventory system monitors stock levels and automatically places supplier orders when thresholds are breached.

But “Ready for Stage 3” depends on your function and how much risk you can tolerate.

For low-stakes work, 80% accuracy might be fine. An auto-generated social post that occasionally needs a tweak? That might be acceptable. Higher-stakes decisions might require 95% or 99% accuracy. If you’re giving medical diagnoses, detecting fraud, or making decisions that could damage customers, you might never remove the human, or do so only when AI itself advances far beyond where we are now.
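
One way to make that risk tolerance explicit is a per-task confidence gate: anything below the bar goes back to a human. A minimal sketch; the task names and thresholds are illustrative assumptions, not recommendations:

```python
# Illustrative thresholds: cheap-to-fix tasks tolerate lower accuracy,
# higher-stakes tasks demand more, and unknown tasks never run alone.
THRESHOLDS = {
    "social_post": 0.80,
    "invoice_categorization": 0.95,
    "fraud_flag": 0.99,
}

def route(task_type: str, confidence: float) -> str:
    """Decide whether an AI output ships directly or goes to a human."""
    threshold = THRESHOLDS.get(task_type)
    if threshold is None:
        return "human_review"  # unvetted task types keep the human in the loop
    return "auto_execute" if confidence >= threshold else "human_review"

print(route("social_post", 0.85))   # auto_execute
print(route("fraud_flag", 0.97))    # human_review
```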

Stage 3 isn’t a universal destination. It’s an option, available when the risk-reward equation justifies it.

The Counterintuitive Speed Argument

Here’s what challenges most people’s instincts: when you’re pressured to show AI value fast, the reflex is to skip Stages 1 and 2 entirely. You build ambitious systems immediately with heavy infrastructure investment to reach your vision.

This approach delays real value.

Without the learning that comes from Stage 1, you’re automating problems you’ve guessed at, not problems people have. You burn engineering cycles on workflows that might not create the value you expected. You ship systems where you haven’t actually seen their failure modes in your business.

The paradox: starting with Stage 1, just giving teams tools and watching, is faster in the end. It cuts straight to problems worth solving. It creates organizational conviction about AI through observation, not assertion. It identifies which automation bets are high-leverage before you spend resources on them.

Stage 2 then becomes a focused, high-confidence engineering effort. Stage 3 happens naturally when the data supports it, not when the calendar demands it.

Moving Forward

The three-stage approach isn’t flashy. You don’t get to write a press release about “fully autonomous AI agents.” What you get is value creation that’s anchored in real problems, scales because your teams understand it, and compounds because you understood what actually matters before you built it.

Start with tools, observe, and automate what works. Move to autonomy only when it makes sense.

This path to value is faster than it appears, and vastly faster than sprinting to Stage 3 without first understanding what’s worth automating.


Oct 26 2025

Bits, Atoms, and Neurons

For years, we talked about the friction between the digital and physical worlds; between bits and atoms. Bits moved at the speed of light through fiber optic cables, while atoms plodded along in trucks and on conveyor belts. The promise of technology kept hitting the wall of physical logistics. But then something changed: we largely solved the atoms problem. Warehouse automation has doubled processing speeds and reduced errors by 99%.¹ AI-powered route optimization has reduced delivery times by 20-30% and improved on-time delivery rates by up to 40%.² Same-day delivery is now routine, not miraculous.

The new constraint isn’t physical anymore. Wetware has become the limiting factor.

The Hierarchy of Change

Consider the speed at which different substrates can change. A computer processor running at 3.2 GHz executes 3.2 billion cycles per second, each cycle taking roughly 0.3 nanoseconds. Data transmission across networks operates on millisecond timescales, with good internet latency ranging from 30-100 ms; sending data anywhere on Earth takes seconds at most.³

Physical packages take days. Standard domestic shipping requires 2-5 business days. Express services can manage overnight delivery, but we’re still measuring in days, not seconds. The atoms are slower than the bits.⁴

But neural rewiring? That takes weeks to months. Research shows habit formation averages 66 days, with individual experiences ranging from 18 to 254 days depending on complexity and consistency. Creating new neural pathways requires repeated activation, gradual strengthening of synaptic connections, and shifting from deliberate prefrontal cortex control to more automatic processing. This biological rewiring cannot be rushed. It requires time for structural changes in brain tissue.⁵

The more physical and biological the substrate, the slower the change. Digital bits rearrange nearly instantaneously. Physical atoms must be transported through space. Living neurons require metabolic processes, protein synthesis, and structural remodeling that takes time.
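
To put rough numbers on that hierarchy, here’s a back-of-the-envelope comparison using the approximate figures above (the values are coarse approximations, not measurements):

```python
# Approximate time for one "change" on each substrate, in seconds.
timescales = {
    "CPU cycle (3.2 GHz)": 1 / 3.2e9,        # ~0.31 nanoseconds
    "network round trip":  0.05,              # ~50 milliseconds
    "overnight package":   24 * 3600.0,       # 1 day
    "habit formation":     66 * 24 * 3600.0,  # ~66 days on average
}

baseline = timescales["CPU cycle (3.2 GHz)"]
for name, seconds in timescales.items():
    print(f"{name}: {seconds / baseline:.0e}x a CPU cycle")
# Habit formation comes out around 10^16 CPU cycles: sixteen orders
# of magnitude between silicon speed and synaptic speed.
```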

The Recursive Acceleration Problem

Here’s where it gets really challenging: AI systems, virtual neurons, are adapting faster than human neurons can adapt to those improvements. This creates a recursive acceleration problem that previous technological revolutions didn’t face.

The evidence is stark. Fewer than 10% of U.S. companies actively use AI in production, and 68% of organizations move 30% or fewer of their AI experiments into full deployment. Yet 75% of knowledge workers globally use AI at work, with employees at over 90% of companies using personal AI tools, often without official approval.⁶ The technology exists. People want to use it. But institutions cannot adapt fast enough.

Even if AI development stopped today, it would take us 5-10 years to learn all the habits and new ways of working needed to take advantage of the capabilities we already have. Organizations require 18-24 months just to develop new upskilling programs, and by the time they’re deployed, organizational needs have often changed. The World Economic Forum projects that 59% of the global workforce will require reskilling by 2030: more than half the workforce within five years.⁷

Recent research from the St. Louis Fed reveals a troubling correlation: occupations with higher AI exposure experienced larger unemployment increases between 2022 and 2025, with a 0.47 correlation coefficient. This isn’t hypothetical anymore. The wetware bottleneck is producing structural unemployment right now.⁸
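
For illustration, here is the kind of occupation-level calculation behind a correlation figure like that: pair each occupation's AI-exposure score with its change in unemployment, then compute Pearson's r. The numbers below are invented purely to show the computation, not the Fed's data:

```python
import numpy as np

# Hypothetical occupation-level data (NOT the St. Louis Fed's dataset):
# an AI-exposure score and the 2022-2025 unemployment change for each.
ai_exposure         = np.array([0.2, 0.5, 0.7, 0.9, 0.4, 0.8])
unemployment_change = np.array([0.1, 0.6, 0.9, 1.4, 0.3, 1.1])  # pct. points

r = np.corrcoef(ai_exposure, unemployment_change)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # positive r: more exposure,
                                        # larger unemployment increase
```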

Cyborgs Walk Among Us

I think we need to recognize that we’re already post-human cyborgs (or at least humans partnered with computer sidekicks). I use my computer to calculate things, remember things, and now even summarize and organize unstructured data with LLMs. I’ve long been a proponent of “Computers should do what computers are good at so humans can do what humans are good at.”

The problem is that we have relatively low-bandwidth interfaces with our digital extensions. We type. We click. We read screens. These are slow, sequential processes compared to the speed at which our silicon partners operate. The mismatch between silicon speed (nanoseconds) and synaptic speed (weeks to months) represents a fundamental constraint on 21st-century progress.

A Qualitative Difference

The difference between automating the movement of atoms and accelerating the rewiring of neurons is qualitative rather than quantitative. We don’t have a real concept of the solution.

Is it educational innovation? Brain-computer interfaces like Neuralink? Becoming post-human cyborgs with higher bandwidth? Nootropic drugs or genetic engineering for higher IQ? Is it a societal change that shifts the balance of learning and work? Modern work already requires more schooling than in the past on average. Do we move to a world with universal basic income supporting an ever-learning workforce?

Previous technological revolutions had clearer adaptation paths. The Industrial Revolution was about moving atoms in new ways: we built factories, trained workers for specific tasks, and created new economic structures around manufacturing. The Internet was about moving bits in new ways (as was the printing press before it): we learned to browse websites, send emails, and eventually work remotely.

But this? This is about virtual neurons adapting faster than biological neurons. That’s a different category of challenge entirely.

The Cultural Obstacle

The social and biological consequences feel bigger than previous innovations, particularly in American culture. We have such a deep concept of tying worth to work, thanks to the Protestant work ethic. Work provides not just income but identity, status, purpose, and self-worth. Research shows unemployment causes severe mental health decline.⁹

We’ll have to figure out how to overcome this if we want to survive the mental health hit of a post-work society. Productivity will increasingly be about building and adapting autonomous systems instead of doing repetitive tasks. That could lead to massive unemployment. We don’t know what to do with that, but we’ll need ways for people to create, not just be consumers. Pure consumption doesn’t lead to lasting happiness and could be a major pitfall for our collective mental health.

A Hopeful Vision

Yet there is cause for hope. When people receive UBI, most continue to work; the main exceptions are students, parents of young children, and the retired or chronically ill. The possibility of lives made of more learning, caretaking, and recovery seems incredibly human and humane. That’s a world I want to live in.¹⁰

Meaningful work provides psychological benefits that extend far beyond income. But “work” doesn’t have to mean what it meant in the 20th century. It can mean learning. Caregiving. Creating. Recovering. Building community. All the things that make us human but that we’ve been too busy earning a living to fully embrace.¹¹

The Path Forward

We’re not going to get there without a combination of forces. We need policy (UBI in particular, or something like it) to provide the economic foundation. It would be nice to avoid massive unemployment, but I don’t think we can adapt fast enough for that. Market forces will likely force the issue through displacement. Grassroots movements will be crucial in changing attitudes toward work that have persisted for centuries.

The transition will be chaotic. Neural rewiring takes weeks to months. Organizational adaptation takes years. Cultural shifts around work identity could take generations. There will be a debate about whether there’s value in remaining purely human or whether we should embrace better brain-computer interfaces and biohacking. Does genetic engineering for higher IQ help us as a species? I don’t know.

I prefer to think that we have a solution through reconceptualizing work. But that doesn’t mean it will be easy.

Individual Agency in an Era of Structural Change

The good news for individuals is that focusing on your own adaptation will help you in either circumstance. Either you are part of moving us toward that beautiful vision of the future, or you are positioning yourself to be part of a small elite who still have marketable skills in a dystopian future.

Skills that remain valuable include creative problem-solving, complex communication, ethical reasoning, emotional intelligence, and adaptability itself. But more importantly, the ability to reconfigure your neural pathways, even if it takes weeks, is the meta-skill that enables everything else.¹²

I hope for the former scenario. And because everyone preparing and adapting would lead us toward that more humane future, I’m fundamentally optimistic. I’m committing to helping as many motivated people as I can successfully learn and transition.

Conclusion

We’ve moved from “bits vs. atoms” to “bits vs. brains.” The technology adoption curve now reveals a stark gap: the distance between digital capabilities and human ability to use those capabilities is growing and accelerating. Unlike logistics, we cannot simply automate human learning and organizational adaptation at scale.¹³

But recognizing the constraint is the first step toward addressing it. If wetware is the bottleneck, then investing in human adaptation, through education, through cultural change, and through new economic structures that support lifelong learning, becomes the most important work of our time.

The question isn’t whether we’ll face this transition. We’re already in it. The question is whether we’ll navigate it thoughtfully, building systems and cultures that support human flourishing, or whether we’ll let market forces and technological momentum carry us into a future we didn’t choose.

I believe we can choose. But we need to start now, with clear eyes about the challenge we face and the biological constraints we’re working with. The bits will keep accelerating. The atoms are largely solved. The neurons—our neurons—will adapt at their own pace.

Our job is to create the conditions where that pace is enough.


¹ https://www.linkedin.com/pulse/logistics-automation-breakthrough-intelligent-supply-chain-akabot-koa9c; Karadex data on 99.9% pick accuracy; MHI data showing up to 85% productivity increases from warehouse automation

² Artech Digital: 20% delivery time reduction, 40% on-time rate improvement; DHL India case study; Various studies showing 20-30% delivery time improvements

³ https://www.intel.com/content/www/us/en/gaming/resources/cpu-clock-speed.html; Network latency data

⁴ https://redstagfulfillment.com/fedex-ups-usps-delivery-times/

⁵ https://www.mendi.io/blogs/brain-health/how-long-does-it-take-to-rewire-your-brain-for-better-mental-health; UCL study on habit formation; Systematic review and meta-analysis on habit formation timing

⁶ https://www.glean.com/perspectives/benefits-and-challenges-ai-adoption; Microsoft Work Trend Index 2024; MIT study on shadow AI economy

⁷ https://www.ere.net/articles/rapid-reskilling-at-scale-why-the-future-of-work-depends-on-it; WEF Future of Jobs Report 2025

⁸ https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation

⁹ https://www.insidehighered.com/opinion/columns/higher-ed-gamma/2024/05/28/how-work-and-career-became-central-americans-identity; APA research on unemployment and mental health; PMC study on unemployment and mental health

¹⁰ https://globalaffairs.org/commentary-and-analysis/blogs/multiple-countries-have-tested-universal-basic-income-and-it-works; GiveDirectly UBI study results; German Basic Income experiment

¹¹ https://peakpsych.com.au/resources-for-individuals/the-health-benefits-of-meaningful-work/; Research on meaningful work and well-being

¹² https://www.paybump.com/resources/6-future-proof-job-skills-in-the-age-of-ai

¹³ https://whatfix.com/blog/technology-adoption-curve/


Jun 12 2025

Lessons for AI Adoption: What SaaS Taught Us About Enablement

The recent surge in GenAI usage reminds me of the SaaS boom of the early 2000s. SaaS introduced a host of new tools; AI is doing the same today. In both cases, end users started to bypass traditional gatekeepers.

Early SaaS adoption taught us a lesson. Departments bought tools on their own, often without IT’s knowledge. A team might sign up for Basecamp to manage projects because they could just put it on a department credit card, creating a sort of “shadow IT.” That ad-hoc approach sparked innovation but also bred waste, security gaps, and duplication of effort. Companies soon learned that if they wanted IT to be involved in choosing and deploying SaaS products, IT had to move faster to keep up with users’ new expectations of speed and experience.

[Illustration: workers in orange vests building a glowing bridge across a dark chasm, assisted by a crane, drone, and holographic displays, symbolizing bridging a knowledge gap.]

AI is following a similar path, with a twist. Many AI tools are “back office.” One person can use a model to draft a memo, sift data, or write code, and nobody else may notice. Unlike team-oriented SaaS apps, this solo use stays hidden.

Hidden use discourages sharing. A worker who doubles output with AI may keep quiet to look like a star. That secrecy blocks collective learning. Most folks struggle while a few reap the benefits.

Without a plan, staff go underground. They feed sensitive data into unvetted models, without the protections of enterprise accounts. Many companies hand out a generic chatbot but skip the training and don’t even consider function-specific tools like Cursor (for writing code) or Jasper (for marketing). Some employees will start to build internal clones of those tools because they don’t have access to the ones they really want to use.

We need comprehensive AI enablement. More than access, more than the shiniest household name model, and more than individual usage. A solid program should:

  • Educate: Show employees and leaders what AI can do and how to use it well.
  • Choose tools wisely: Select the right tools for the job-to-be-done. It’s not one-size-fits-all.
  • Share knowledge: Promote open talk about wins, learnings, and best practices.
  • Govern: Set rules that guard data, privacy, and ethics.

Without enablement, AI stays fragmented, results lag, and teams fall behind. As with SaaS, the winners will be the firms that embrace and empower their people. The field moves fast; only continuous learning backed by strong enablement will keep you ahead.


Apr 03 2025

Rethinking Ownership: AI Training and Copyright Battles

Outside Meta’s London office today, authors are protesting what they call theft: the use of their books to train AI without permission. The scene would not have surprised John Perry Barlow, who more than 30 years ago wrote that digital technology would make traditional copyright law obsolete.

“Intellectual property law cannot be patched, retrofitted, or expanded to contain digitized expression,” warned Barlow in his 1994 essay “The Economy of Ideas.” As bestselling authors wave placards demanding compensation and Meta claims its AI training is “consistent with existing law,” we’re watching his prediction play out once again.

The Man Who Saw It Coming

Before most people had email addresses, Barlow – a Grateful Dead lyricist turned digital visionary – understood that the internet would fundamentally change how we think about ownership. He saw that once creative works became digital patterns rather than physical objects, our traditional ways of protecting and monetizing them would fall apart.

He was right. Today, Meta faces lawsuits for using LibGen, a “shadow library” of over 7.5 million books, to train its AI models. Authors like Ta-Nehisi Coates and Sarah Silverman are suing. Novelist AJ West says it feels like being “mugged.” But Meta argues that training AI on patterns within books is fundamentally different from copying those books.

What Barlow Got Right

Barlow’s key insights read like a prophecy:

  1. Digital copying would become essentially free and unstoppable
  2. Traditional copyright law would fail to adapt to new technology
  3. Tension would grow between information’s desire to flow freely and creators’ need for compensation

He compared trying to protect digital information to “trying to keep water in a handful of sand” – a metaphor that perfectly captures the frustration of authors watching their works absorbed into AI training sets.

[Illustration: cupped hands holding glowing golden particles, with streams of binary code flowing from them against a dark blue background.]

The Core Problem Remains Unsolved

The Meta controversy highlights the central dilemma Barlow identified: In a digital world, how do we fairly compensate creators while acknowledging that information naturally wants to spread?

When novelist Kate Mosse joins protesters demanding payment for AI training use, she’s fighting the same battle Barlow described – trying to maintain traditional property rights in an age where creative works have become “patterns of ones and zeros” flowing through the digital world.

A Way Forward?

Barlow didn’t just predict problems – he suggested solutions. He envisioned new economic models where value would come from:

  • Real-time performance and experience
  • Being first to market with ideas
  • Service and support around creative works
  • Direct relationships between creators and audiences

Some of these models have emerged (musicians now earn more from concerts than from recordings), but we haven’t found similar alternatives for authors and other creators whose works train AI systems.

The Future We Need to Build

The current standoff between Meta and authors shows we’re still caught between old and new worlds. Neither traditional copyright enforcement nor unrestricted AI training serves everyone’s interests.

Barlow might suggest that the solution lies not in choosing sides, but in developing new models that:

  • Recognize AI training as a legitimate use of creative works
  • Provide fair compensation to creators
  • Build sustainable creative ecosystems for the digital age

Three decades ago, Barlow wrote that the digital revolution would force us to completely rethink how we value and protect creative work. Today’s AI copyright battles are just the latest development to prove he was right. The question is: Have we come up with any better solutions in the intervening 30 years?


