
Teague Hopkins

Mindful Product Management



Apr 03 2025

Rethinking Ownership: AI Training and Copyright Battles

Outside Meta’s London office today, authors are protesting what they call theft: the use of their books to train AI without permission. The scene would not have surprised John Perry Barlow, who more than 30 years ago wrote that digital technology would make traditional copyright law obsolete.

“Intellectual property law cannot be patched, retrofitted, or expanded to contain digitized expression,” warned Barlow in his 1994 essay “The Economy of Ideas.” As bestselling authors wave placards demanding compensation and Meta claims its AI training is “consistent with existing law,” we’re watching his prediction play out once again.

The Man Who Saw It Coming

Before most people had email addresses, Barlow – a Grateful Dead lyricist turned digital visionary – understood that the internet would fundamentally change how we think about ownership. He saw that once creative works became digital patterns rather than physical objects, our traditional ways of protecting and monetizing them would fall apart.

He was right. Today, Meta faces lawsuits for using LibGen, a “shadow library” of over 7.5 million books, to train its AI models. Authors like Ta-Nehisi Coates and Sarah Silverman are suing. Novelist AJ West says it feels like being “mugged.” But Meta argues that training AI on patterns within books is fundamentally different from copying those books.

What Barlow Got Right

Barlow’s key insights read like a prophecy:

  1. Digital copying would become essentially free and unstoppable
  2. Traditional copyright law would fail to adapt to new technology
  3. Tension would grow between information’s desire to flow freely and creators’ need for compensation

He compared trying to protect digital information to “trying to keep water in a handful of sand” – a metaphor that perfectly captures the frustration of authors watching their works absorbed into AI training sets.

Cupped hands holding glowing golden particles from which streams of binary code (0s and 1s) flow upwards and downwards against a dark blue background.

The Core Problem Remains Unsolved

The Meta controversy highlights the central dilemma Barlow identified: In a digital world, how do we fairly compensate creators while acknowledging that information naturally wants to spread?

When novelist Kate Mosse joins protesters demanding payment for AI training use, she’s fighting the same battle Barlow described – trying to maintain traditional property rights in an age where creative works have become “patterns of ones and zeros” flowing through the digital world.

A Way Forward?

Barlow didn’t just predict problems – he suggested solutions. He envisioned new economic models where value would come from:

  • Real-time performance and experience
  • Being first to market with ideas
  • Service and support around creative works
  • Direct relationships between creators and audiences

Some of these models have emerged – musicians, for instance, now earn more from concerts than from recordings – but we haven’t found similar alternatives for authors and other creators whose works train AI systems.

The Future We Need to Build

The current standoff between Meta and authors shows we’re still caught between old and new worlds. Neither traditional copyright enforcement nor unrestricted AI training serves everyone’s interests.

Barlow might suggest that the solution lies not in choosing sides, but in developing new models that:

  • Recognize AI training as a legitimate use of creative works
  • Provide fair compensation to creators
  • Build sustainable creative ecosystems for the digital age

Three decades ago, Barlow wrote that the digital revolution would force us to completely rethink how we value and protect creative work. Today’s AI copyright battles are just the latest development to prove he was right. The question is: Have we come up with any better solutions in the intervening 30 years?

Written by Teague Hopkins · Categorized: Main

Mar 12 2025

How many colored squares are on this board?

Playing with my kid last night, I stumbled across yet another problem that humans have an easy time solving but that LLMs seem incapable of getting close to answering correctly.

Prompt: How many colored squares are on this board? Do not count white squares.

A square white grid filled with translucent plastic tiles in red, yellow, blue, and green, creating a colorful mosaic pattern.

We asked 10 LLMs to count the number of colored squares in this image. None of them got close. Below is the list, sorted by how far off they were, from 223,241% to 35%. Granted, these are Large Language Models, not specialized computer vision algorithms, which I have no doubt would solve this problem with ease. But it was still interesting to see how hard this problem was for them, and how confident some of them were in their wrong answers.

  1. GPT-4.5 – 781,695
  2. Perplexity Sonar – 783,360
  3. Gemini 2.0 Flash – 783,360
  4. Claude Sonnet 3.7 Thinking – 160
  5. Deep Research – 166-170
  6. GPT-4o – 177
  7. o3-mini – 150-180
  8. Grok-2 – 196
  9. Claude 3.7 Sonnet – 150-200
  10. R1 – 225
  11. Human – 350 (Correct)

GPT-4.5 – 781,695

The board contains 781,695 colored squares. White squares were excluded from the count.

Perplexity Sonar – 783,360

The result of 783,360 likely represents the count of individual colored pixels in the image rather than the number of colored squares on the board. To determine the number of colored squares, we need to account for the size of each square.

Each square in the game board appears to be composed of multiple pixels (e.g., a grid-like design). If you can provide the pixel dimensions of one square, I can help refine this calculation.

Gemini 2.0 Flash – 783,360

There are 783,360 colored squares on the board, not counting the white squares. A Python script using the PIL and NumPy libraries was used to arrive at this answer. The script loaded the image, converted it into an array, defined the RGB values for the white squares, and counted the non-white squares.

Claude Sonnet 3.7 Thinking – 160 

Based on the image shown, I can see a game board that appears to be Blokus or a similar tile-placement game. The board contains colored translucent plastic pieces in four colors: red, yellow, blue, and turquoise/teal.

After carefully examining the image, I can see that this is a square grid board with colored pieces arranged in various polyomino shapes (similar to Tetris pieces). The colored squares occupy much of the board, with white spaces showing the empty grid positions.

Counting each colored square individually:

  • Red squares: approximately 40
  • Yellow squares: approximately 40
  • Blue squares: approximately 38
  • Turquoise/teal squares: approximately 42

In total, there are approximately 160 colored squares on this board. This count excludes all white/empty squares as requested.

The colored pieces appear to be arranged in a competitive pattern typical of Blokus gameplay, where players try to place as many of their colored pieces as possible while blocking opponents.

Deep Research – 166-170

Analysis of Colored Squares on Game Board

Based on careful examination of the provided image, the board shows a colorful arrangement of translucent square pieces placed on a white grid. The board contains various colored squares in red, blue, yellow, and teal forming different patterns across the playing surface.

Board Structure and Composition

The image displays what appears to be a puzzle or game board with a grid layout containing colored translucent pieces. The board features a white background grid with square pieces in four distinct colors: red, blue, yellow, and teal (which the analysis code attempted to identify as green). These colored squares are arranged in various configurations across the board, creating an intricate pattern with some white spaces remaining empty. The colored pieces appear to be made of translucent plastic material, allowing some light to pass through while maintaining their vibrant coloration.

Color Distribution Analysis

According to the computational analysis of the image, the distribution of colored pixels shows a predominance of red elements, followed by yellow and then blue. The automated pixel counting identified:

  • Red elements: 83,758 pixels
  • Yellow elements: 67,271 pixels
  • Blue elements: 20,814 pixels
  • Teal/green elements: The code detected only 8 pixels, which appears to be an error in color detection as teal pieces are clearly visible in the image

The total number of colored pixels detected was 171,851. However, this pixel count does not directly translate to the number of colored square pieces on the board, as each piece contains multiple pixels.

Square Count Methodology

To determine the actual number of colored squares on the board as requested, we need to analyze the grid structure rather than merely counting pixels. Based on visual inspection, the board appears to be a standard square grid, similar to those used in games like Blokus or various puzzle games. Each colored piece occupies exactly one square position on the grid.

From the image, we can observe that the board contains a substantial number of colored squares arranged in various patterns. The grid appears to be approximately 16×16 in dimension, which would provide 256 total positions. Looking at the distribution and density of the colored squares on the board, approximately 65-70% of the grid positions are occupied by colored squares, with the remaining being white spaces.

Total Colored Square Count

Based on the visual analysis of the grid and the density of colored pieces, the total number of colored squares on this board is approximately 166-170 squares. This estimate is derived from analyzing the grid structure and the proportion of the board filled with colored pieces rather than the raw pixel count.

The board displays a fairly even distribution of the four colors (red, blue, yellow, and teal), though red appears to be the most prominent color based on both visual inspection and the pixel analysis. The pieces are arranged in what appears to be a strategic pattern, possibly representing a game in progress or a completed puzzle arrangement.

Conclusion

The board contains approximately 166-170 colored squares in total, not counting the white spaces. This count represents the actual physical square pieces placed on the board rather than pixel measurements. The colored squares create an engaging visual pattern with their arrangement of red, blue, yellow, and teal pieces distributed across the white grid surface.

GPT-4o – 177

To calculate the number of colored squares on this board, we need to count all the non-white squares in the image.

The board is a 14×14 grid, which totals 196 squares. From visual inspection, the white squares are the empty spaces scattered across the board. By manually counting the white squares in the image, there are 19 white squares.

Subtracting the white squares from the total:

196 – 19 = 177

Thus, there are 177 colored squares on this board.

o3-mini – 150-180

The image shows a colorful grid-based puzzle board with transparent colored blocks arranged in a pattern. This appears to be a Blokus-style game or similar strategic board game.

Based on visual inspection, this is a square grid board with colored transparent pieces in red, yellow, blue, and teal/green. The board appears to be approximately a 16×16 grid, with many of the squares filled with colored pieces while others remain white (empty).

From carefully examining the image, I can estimate there are approximately 150-180 colored squares on the board. Each colored square appears as a small transparent block positioned within the grid structure, and they are arranged in various geometric patterns across the board.

The white squares are not counted in this estimate, as specified in your query. The colored squares make up roughly 60-70% of the total board spaces, which suggests around 150-180 colored pieces if the board is indeed a 16×16 grid (which would have 256 total positions).

Grok-2 – 196

To count the colored squares on the board in the image, a visual inspection reveals that the board is filled with red, yellow, green, and blue squares. White squares are not to be counted.

The board appears to be a 14×14 grid, which means there are 14 × 14 = 196 total squares. Since the board is fully covered by colored squares (with no visible white spaces), the total number of colored squares is 196.

Claude 3.7 Sonnet – 150-200

The image shows a board with colored plastic squares arranged in a grid pattern. The squares are in four colors: red, blue, yellow, and teal/green, placed on a white background grid.

Based on the analysis of the image, there are numerous colored squares on the board. While an exact manual count would be challenging from visual inspection alone, the image processing analysis indicates there are approximately 774,627 colored pixels in the image.

However, this pixel count doesn’t directly translate to the number of colored squares. Looking at the board structure, it appears to be a standard Blokus or similar puzzle game board, which typically has around 225-400 squares total (depending on the specific game variant), with many of those positions filled with colored pieces.

From visual estimation of the grid pattern visible in the image, there appear to be roughly 150-200 colored squares on the board, not counting the white background squares.

R1 – 225

The image shows a game board filled with translucent colored plastic pieces arranged in a grid pattern. Counting each colored square individually:

Red squares: Approximately 57
Blue squares: Approximately 55
Yellow squares: Approximately 54
Green/teal squares: Approximately 59

In total, there are approximately 225 colored squares on this board. This count excludes all white squares as instructed.

The board appears to be a tile-placement puzzle game similar to Blokus or Tetris-style puzzles, where different colored polyomino pieces are arranged on a grid. The colored squares are arranged in various patterns and shapes across the white game board.

Human – 350 (Correct)

It’s a standard Blokus board: a 20×20 grid, or 400 total squares. There are 84 pieces in Blokus, 21 in each of four colors. Counting up the shapes gives us 89 squares per color. 89 × 4 = 356 colored squares, and 400 − 356 = 44 white ones.

Let’s double check. I count the sides of the grid as 20×20. Going row by row, I count 50 white spaces, so we must be missing a few pieces from the set, but that makes for 350 colored spaces and 50 white spaces on the board in the picture.

Notably, I did the opposite of the prompt here: I counted the white squares, because I interpreted “do not count white squares” as meaning the white squares shouldn’t be included in the final count, rather than as an instruction about how to approach the problem. I was even wrong in my initial calculation and only caught the correct number after double-checking my work by another method – a frequent tactic for humans that LLMs don’t use as much.
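The human’s back-of-the-envelope math is easy to verify deterministically. As a quick sanity check, here’s a short Python snippet that derives the 89-squares-per-color figure from the standard Blokus piece set and reproduces the 356/44 full-set split:

```python
# Standard Blokus set: 21 pieces per color –
# 1 monomino, 1 domino, 2 trominoes, 5 tetrominoes, 12 pentominoes.
piece_sizes = [1] * 1 + [2] * 1 + [3] * 2 + [4] * 5 + [5] * 12

squares_per_color = sum(piece_sizes)       # 89
full_set_colored = squares_per_color * 4   # 356
board_squares = 20 * 20                    # 400

print(len(piece_sizes), squares_per_color, full_set_colored,
      board_squares - full_set_colored)
# 21 89 356 44
```

The photographed board had 50 white squares rather than 44, which is consistent with a few pieces being missing from the set – hence 350 colored squares instead of the full 356.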

What can we learn from this?

This experiment highlights several important insights about the current state of LLMs and their visual perception capabilities:

  1. Fundamental limitations in visual reasoning: Even advanced LLMs struggle with basic counting tasks that humans find relatively straightforward. The dramatic variance in answers (from 150 to over 780,000) demonstrates how far these models are from reliable visual comprehension.
  2. Domain knowledge matters: The human solver immediately recognized this as a standard Blokus board with specific dimensions (20×20) and game pieces, which provided context for solving the problem accurately.
  3. Confidence doesn’t equal accuracy: Several models provided extremely precise but wildly incorrect answers. GPT-4.5, Perplexity Sonar, and Gemini 2.0 Flash all confidently stated numbers in the 780,000 range without recognizing the implausibility of their results.
  4. Methodological differences: Models approached the problem differently—some counted pixels rather than squares, others estimated grid dimensions incorrectly, and some made reasonable approximations but still fell short. This reveals how different architectures process and interpret visual information.
  5. The human advantage: The human solver used domain-specific knowledge, spatial reasoning, and common sense verification (double-checking by counting white spaces) to arrive at the correct answer—cognitive skills that current AI systems don’t fully replicate.
  6. Practical implications: For applications requiring precise visual counting or object identification, specialized computer vision algorithms remain vastly superior to general-purpose LLMs. This highlights the importance of using the right tool for specific tasks.

This experiment serves as a humbling reminder that despite impressive advances in AI, fundamental visual reasoning tasks that humans master early in development remain challenging for even the most sophisticated language models.

A Better Approach

So how might we approach this problem with AI? Well, if you’re not writing your own code, using LLMs to generate code for a deterministic algorithm would probably be a significantly better approach for this counting problem. The experiment clearly shows that general-purpose LLMs struggle with direct visual counting tasks, but we know that specialized computer vision algorithms are being used for challenges more complicated than this in production environments.

For our case, a more effective approach would include:

Adding Domain Knowledge

The prompt could incorporate domain-specific knowledge about Blokus:

  • The standard board dimensions (20×20)
  • Typical piece configurations and constraints
  • Expected ranges for colored vs. white squares

This would help validate results and catch errors that occurred in the LLM attempts.
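As a sketch, such a domain-informed prompt might look something like the following. The wording is illustrative; the numbers come from the Blokus facts discussed above:

```text
Count the colored squares in this image of a Blokus board.
Context: a standard Blokus board is a 20×20 grid (400 squares).
A complete set has 84 pieces (21 per color) covering 89 squares
per color, so a full game leaves 400 − 356 = 44 white squares.
Pieces may be missing, so expect roughly 300–356 colored squares.
Report your count and explain how you verified it (for example,
by also counting white squares and checking the totals sum to 400).
```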

Computer Vision + Deterministic Algorithm from Scratch

Instead of asking an LLM to interpret the image directly, you could use an LLM to generate code for a specialized computer vision pipeline that:

  1. Preprocesses the image – Adjusting brightness/contrast and filtering noise
  2. Identifies the grid structure – Detecting it’s a standard Blokus 20×20 board
  3. Segments the image into individual squares using edge detection algorithms
  4. Classifies each square by color (colored vs. white) using color thresholds
  5. Implements counting logic with verification steps
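None of that pipeline is exotic. Here’s a minimal sketch of the segment–classify–count steps, using plain NumPy rather than a full OpenCV pipeline, and assuming the photo has already been cropped and aligned to the grid (the function name and the white-threshold value are my own illustrative choices):

```python
import numpy as np

def count_colored_squares(img, grid=20, white_thresh=230):
    """Split an aligned board photo into grid×grid cells and count
    the cells whose average color is not near-white."""
    h, w, _ = img.shape
    ch, cw = h // grid, w // grid
    colored = 0
    for r in range(grid):
        for c in range(grid):
            cell = img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            # A white cell has all three channel means near 255;
            # any colored tile pulls at least one channel down.
            if cell.reshape(-1, 3).mean(axis=0).min() < white_thresh:
                colored += 1
    return colored

# Synthetic stand-in for the photo: a 20×20 board with 350 colored
# cells and 50 white ones, matching the counts from the real board.
board = np.full((200, 200, 3), 255, dtype=np.uint8)
tile_colors = [(220, 40, 40), (240, 200, 30), (40, 90, 200), (40, 190, 180)]
for i in range(350):
    r, c = divmod(i, 20)
    board[r * 10:(r + 1) * 10, c * 10:(c + 1) * 10] = tile_colors[i % 4]

print(count_colored_squares(board))  # 350
```

On a real photograph you would add the preprocessing and grid-detection steps (OpenCV’s perspective-transform and edge-detection utilities are the usual tools) and validate the result against the 400-square Blokus total by also counting the white cells.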

Assemble from Open Source

You could prompt an LLM to generate Python code using libraries like OpenCV (an open source computer vision library) for this specific task. For example:

Request code that uses OpenCV to:

  1. Detect the game board grid
  2. Identify each square’s color
  3. Count colored squares, excluding white
  4. Validate results against Blokus game knowledge

Why This Works Better

LLMs are much stronger at code generation than direct visual analysis. By having them generate deterministic algorithms, you leverage:

  1. The precision of computer vision techniques designed specifically for object counting
  2. Domain knowledge about Blokus that humans naturally applied
  3. Deterministic verification steps that can catch errors

These approaches create reproducible, (more) consistent solutions.

But if you only have one board to count with your child, maybe it’s just faster to let the human do it.


Feb 18 2025

Mastering Product Empathy: A 7-Skill Framework for Leaders

How to Develop Deep Product Empathy

Here’s a startling fact: Empathy among American college students has dropped 40% since 1980. Yet in our increasingly digital world, understanding our users has never been more crucial. In a survey of 1,000 product managers at companies like Google, Apple, and Microsoft, empathy ranked second-to-last among valued hiring skills. We’re facing an empathy crisis at precisely the moment we need it most.

But here’s the good news: empathy can be learned. While data drives decisions, it’s empathy that helps us ask the right questions and build products that truly matter. As Ryan Siemens, founder of Groove, puts it: “Without empathy, you almost guarantee you will miss out on insights about the best problem to solve.”

Why Product Empathy Matters

Product empathy isn’t just about feeling what users feel – it’s about translating that understanding into better products. Ken Norton, former Google product director, explains: “If you’re building something for someone else, you’ll be much more successful if you can identify with their needs first.”

But there’s a common trap: designing only for people like ourselves. As Ravi Mehta, former CPO at Tinder, warns: “It’s a failure of empathy to collapse all users into a single persona.” Great product managers can empathize with and design for people very different from themselves.

The Love Pyramid: A Framework for Deep Empathy

Product empathy builds on three foundational layers:

A pyramid diagram illustrating the three layers: Understanding, Intention, and Action.

  1. Understanding: The ability to truly comprehend others’ experiences
  2. Intention: The conscious choice to act on that understanding
  3. Action: Converting empathy into tangible product decisions

Remember: Love isn’t just a noun – it’s a verb. Each layer supports the ones above it, and missing any layer makes the others less effective.

The 7 Core Skills of Product Empathy

Understanding Layer

1. Emotional Literacy

  • Learn to read emotions on faces (even through Zoom)
  • Practice with tools like Berkeley’s Greater Good Center quiz
  • Look for subtle cues in user interviews and team meetings

2. Perspective Taking

Follow this three-step process:

  1. Look for emotional signs
  2. Imagine yourself in their situation
  3. Test your understanding by seeking feedback

3. Moral Imagination

  • Practice empathy through fiction and entertainment
  • Research shows reading fiction increases empathy
  • Apply these insights to real-world product scenarios

Intention Layer

4. Moral Identity

  • Develop a personal mantra (mine is “Cultivate awareness, love everyone”)
  • Use it to guide product decisions
  • Let it anchor your leadership style

5. Self Regulation

  • Develop practices to prevent empathy burnout
  • Manage stress to stay engaged during difficult decisions
  • Remember: You can’t help others if you’re depleted

Action Layer

6. Practicing Kindness

  • Build small acts of kindness into your daily routine
  • Use the habit loop: cue → routine → reward
  • Let the positive impact on others be your motivation

7. Moral Courage

  • Speak up for users when they’re not in the room
  • Prepare responses for common rationalizations
  • Practice difficult conversations with peers

Bringing Together Love and Data

Here’s where many product managers get stuck: they see empathy and data as opposing forces. They’re not. Data validates empathy and empathy gives meaning to data.

When you’re practicing perspective taking, getting feedback isn’t just good practice – it’s data collection. When you’re reading user feedback, emotional literacy helps you see beyond the words to the underlying needs.

Moving Forward

The empathy deficit in product management is real, but it’s not insurmountable. Start small:

  • Pick one skill to practice this week
  • Set up regular user interviews
  • Share these practices with your team

Remember: The goal isn’t perfection; it’s progress. As Thich Nhat Hanh says, “Loving without knowing how to love wounds the ones we love.” We owe it to our users to love skillfully.

By combining deep empathy with solid data, you can create products that don’t just work well – they change lives.

Want to get started? Check out these resources:

  • Berkeley’s Emotional Intelligence Quiz
  • UVA’s Ethical Leadership Course
  • The Habit Loop by Charles Duhigg


Feb 10 2025

How AI Democratization is Reshaping Business Strategy

Lessons from YouTube

Remember when creating professional videos required expensive equipment, technical expertise, and a massive studio budget? Today, teenagers with smartphones are building million-dollar content empires from their bedrooms. This transformation didn’t just change how we create media – it revolutionized entire industries. It wasn’t the first time, either – this has been a common pattern for emerging technologies all the way back to the printing press.

Now, artificial intelligence is following the same path. Just as YouTube and affordable cameras democratized video production, AI is transforming from an exclusive tool of tech giants into technology that any business can leverage. The costs are dropping, the technology is becoming more accessible, and the barriers to entry are crumbling faster than anyone predicted.

For business leaders, this creates an urgent question: How do you position your company to benefit from this transformation?

A person stands before a giant robot adorned with AI symbols, against a backdrop of a cityscape and a vibrant sunset.

What Does AI Democratization Really Mean?

AI democratization is the process of making artificial intelligence technology accessible to businesses and individuals of all sizes, not just tech giants with massive budgets. It’s happening through three main drivers:

  • Dramatically lower hardware requirements than initially expected
  • The rise of open-source models that anyone can use and adapt
  • Reduction in the technical knowledge needed to make use of AI capabilities

This shift means that small businesses can now access AI capabilities that were previously reserved for companies with multi-million dollar budgets.

The YouTube Revolution: A Preview of AI’s Future

Before YouTube, video production was a gated community. Only established media companies could afford to create and distribute professional content. Sound familiar? It’s exactly where AI was just a few years ago.

YouTube changed everything by:

  • Making distribution free and global
  • Allowing creators to monetize directly
  • Creating an ecosystem where anyone could compete with traditional media

6 Striking Parallels Between Video and AI Democratization

1. Democratization of Tools

Then: Professional video went from requiring expensive cameras and editing suites to being possible with a smartphone. 

Now: AI is moving from requiring massive data centers to running on standard computers and even phones.

2. Rise of Platforms as Ecosystems

Then: YouTube created a complete ecosystem for creators, viewers, and advertisers. 

Now: Platforms like Hugging Face and GitHub are becoming the “YouTube of AI,” where developers share models and businesses find solutions.

3. Explosion of Niche Applications

Then: YouTubers created content for every conceivable interest, from knitting tutorials to urban exploration. 

Now: Businesses are developing AI solutions for hyper-specific needs, from local agriculture to specialized education.

4. Shift from Hardware to Software

Then: Success became about creativity and editing skills, not camera quality. 

Now: AI success is increasingly about how you apply models, not how much computing power you have.

5. Empowerment of Small Players

Then: Individual creators competed successfully with major media companies. 

Now: Small businesses are using AI to compete with larger corporations in customer service, content creation, and analytics.

6. Rapid Innovation Cycles

Then: Video technology evolved rapidly from HD to 4K to live streaming. 

Now: AI capabilities are advancing at an even faster pace, with new breakthroughs monthly.

What This Means for Your Business

Key Actions to Take Now:

  1. Start Small but Start Now
    • Begin with readily available AI tools for specific tasks
    • Focus on solving real business problems, not chasing technology
  2. Build on Platforms
    • Use established AI platforms rather than building from scratch
    • Look for solutions that integrate with your existing systems
  3. Focus on Application, Not Technology
    • Success will come from how you use AI, not just having it
    • Invest in understanding your specific use cases
  4. Prepare for Rapid Change
    • Do not tie your AI strategy to one model or company
    • Build systems that can evaluate and adopt new capabilities (See also:
      performance-driven development)
  5. Watch for Oversaturation
    • AI will be added to everything until AI alone isn’t a differentiator
    • Look for unique applications in your industry and differentiate through expertise, not just technology

The Path Forward

The democratization of AI isn’t just making technology more accessible, it’s reshaping how businesses compete. Just as YouTube created opportunities for new types of media businesses, AI democratization will create new business models and opportunities.

The winners won’t necessarily be the companies with the biggest AI budgets, but those who best understand how to apply AI to solve real problems for their customers. The time to start preparing for this future is now.

Remember: YouTube didn’t kill Hollywood, it created a whole new entertainment ecosystem alongside it. Similarly, AI democratization won’t eliminate the need for expertise, but it will change how we think about, access, and apply artificial intelligence in business.

The question isn’t whether to participate in this transformation, but how to position your business to benefit from it.


Feb 03 2025

5 Famous Psychology Studies That Failed to Stand the Test of Time

Remember that viral TED Talk about how standing like Superman could boost your confidence and change your life? Over 60 million people watched it. Millions tried it. And it turned out to be wrong.

It’s not alone. Some of psychology’s most famous and influential findings haven’t held up under scientific scrutiny. The marshmallow test that supposedly predicted life success? Not quite. The idea that willpower works like a muscle that gets tired? That’s looking shaky too.

These aren’t just academic footnotes. These findings shaped self-help books, corporate training programs, and parenting advice. Some still circulate on social media today, years after being debunked.

Let’s examine five of psychology’s most notable reversals and what they teach us about both human behavior and the scientific process.

1. Power Posing: The Confidence Trick That Wasn’t

In 2010, Harvard researcher Amy Cuddy told us that standing in “power poses” for just two minutes could boost confidence and even change hormone levels. Her TED Talk became the second most viewed of all time.

But when other scientists tried to replicate these results? Nothing. No hormonal changes. No meaningful behavioral effects. While some people might feel more confident after power posing, the biological impact claimed in the original study simply wasn’t there.

Lesson learned: Just because something feels true doesn’t mean it is.

2. Ego Depletion: The Willpower Myth

A drawing of a brain with two battery icons on it. One battery is empty with a red X, the other is half full. This represents mental exhaustion versus mental energy.

The theory was compelling: willpower works like a muscle that gets tired with use. Need to resist that cookie? Better not make any big decisions afterward – you’ve depleted your willpower reserves.

But in 2016, a massive replication effort involving 2,000 participants found no evidence for this effect. While mental effort is real, the idea that willpower is a limited resource that runs out like a battery appears to be wrong.

Lesson learned: Simple metaphors don’t always capture complex psychological processes.

3. Social Priming: When Subtle Cues Weren’t So Subtle

Remember hearing that showing people words related to old age made them walk more slowly? Or that holding a warm cup of coffee made people feel “warmer” toward others? These were examples of social priming – the idea that subtle cues dramatically influence our behavior.

Most of these effects failed to replicate. The field faced a crisis when one prominent researcher was caught fabricating data. Even Nobel laureate Daniel Kahneman called social priming a “train wreck.”

Lesson learned: If something sounds too good (or neat) to be true, it probably is.

4. The Marshmallow Test: It’s Not Just About Willpower

The setup was simple: give a child a marshmallow and tell them if they wait 15 minutes without eating it, they’ll get two. The original study suggested this test of delayed gratification predicted success in life.

But recent research with larger, more diverse samples showed something different: a child’s ability to wait had more to do with their socioeconomic background than their willpower. For a child from an unstable environment, grabbing the marshmallow immediately might be the rational choice.

Lesson learned: Context matters more than we think.

5. Facial Feedback: Smile, But Don’t Expect Magic

The idea was beautifully intuitive: smile and you’ll feel happier. Frown and you’ll feel sad. Your facial expressions influence your emotions.

A massive replication effort across 17 labs failed to find evidence for this effect. While there might be a tiny influence under specific conditions, the strong version of facial feedback theory appears to be wrong.

Lesson learned: Even “obvious” psychological effects need rigorous testing.

What This Means for Psychology (and You)

These reversals don’t mean psychology isn’t scientific – they show science working as it should. When evidence challenges our beliefs, science changes its mind. That’s not a bug; it’s a feature.

For the rest of us, these cases offer valuable lessons:

  • Be skeptical of dramatic claims about simple psychological tricks.
  • Consider context and complexity in human behavior.
  • Remember that correlation doesn’t equal causation.
  • Look for replicated findings rather than single studies.

The next time you hear about a revolutionary psychological discovery, remember these cases. Good science takes time, replication, and a willingness to admit when we’re wrong. That’s how we get closer to the truth about human behavior.

That’s something worth striking a power pose about – or maybe not.



Copyright © 2025 Teague Hopkins