
Teague Hopkins

Mindful Product Management


Blog

Mar 04 2014

How to Set your Minimum Success Criteria

When you’re running a lean experiment, one of the key decision points is setting your minimum success criteria: the breakpoint at which you consider the experiment to have validated or invalidated your hypothesis.

Make sure you explicitly set the criteria before you run the experiment, and make a record of it. It’s too easy to fudge the numbers later and rob yourself of any valuable insight.

There are two methods I recommend for determining the minimum success criteria for your experiments.

The ‘business school’ method of setting minimum success criteria is to build a pro forma spreadsheet projecting the numbers you need to hit for the business to be financially viable, and then reverse engineer the conversion rate required to make those numbers.
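That reverse engineering is simple arithmetic. Here's a quick sketch in Python; all the numbers are made up for illustration, not figures from any real pro forma:

```python
# Hypothetical 'business school' back-of-the-envelope calculation.
# All inputs below are illustrative assumptions, not real projections.

def required_conversion_rate(monthly_revenue_target, price_per_customer,
                             monthly_visitors):
    """Reverse engineer the visitor-to-customer conversion rate
    needed to hit a monthly revenue target."""
    customers_needed = monthly_revenue_target / price_per_customer
    return customers_needed / monthly_visitors

# Example: $10,000/month target, $50 product, 5,000 visitors/month
rate = required_conversion_rate(10_000, 50, 5_000)
print(f"Minimum success criterion: {rate:.1%} conversion")  # 4.0%
```

If your landing page can't plausibly convert at that rate, the spreadsheet is telling you something before you ever run the experiment.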

The approach that I find tends to work better, particularly for companies that are in the early stages, is to ask your team. What conversion rate would you have to see for you and your team to still be excited about the opportunity? This is the half-life of enthusiasm approach (H/T Frank DiMeo for the terminology).

half-life of enthusiasm
the time taken for your team’s enthusiasm to drop to half of its initial level

In early-stage companies (we’re talking pre-product/market fit), funding is tight, but usually not the limiting factor. Enthusiasm and the will to continue are usually in much shorter supply. So, optimize for your scarcest resource. If it’s truly funding, make the numbers work. If it’s enthusiasm, make sure the problem and the opportunity still excite the team.

Written by Teague Hopkins · Categorized: Main · Tagged: Breakpoint, Decision theory, Lean Startup

Feb 12 2014

Lean + Agile DC: A Summary in Tweets

Written by Teague Hopkins · Categorized: Main · Tagged: Agile, Lean, Lean Startup, Web 2.0

Jan 27 2014

Getting Started with Lean Startup

From Lean Startup Circle at Capital One Labs, December 17, 2013.

Written by Teague Hopkins · Categorized: Main

Jan 14 2014

Stop Celebrating Failure

One of the big challenges with doing innovation in any setting is that most people are afraid of failure, and when you’re afraid to fail, you don’t take the risks that are necessary to keep creating and innovating.

Photo Credit: AlmazUK cc

Most professionals have had the idea that “failure is bad”, “failure is not acceptable”, or “failure is final” drilled into them through 16 years of schooling by the time they enter the workforce. We take tests in school and only get one shot to get it right, which is not usually how it works in the real world.

Recognition of this disconnect has led to a sort of “counter-cultural movement” among entrepreneurs of celebrating failure. While I understand how we got here, I think it’s an over-correction. We’re trying to balance one extreme out with another.

The problem with celebrating failure is that, unless you’re learning something, it’s still just failure.

We need to stop celebrating failure indiscriminately, and instead celebrate the learning that can come out of failure (and sometimes out of other things). Failure might be a necessary cost, but it’s the learning that helps us improve our creations and make the next ones better.

Written by Teague Hopkins · Categorized: Main · Tagged: Cognition, ego risk, Failure, Risk, Tests

Jan 06 2014

One Simple Trick for User Centered Design

How often has your company run a focus group or usability test and generated a big fat report that just sits on a shelf somewhere, full of great ideas that never get implemented?

You can do all the research you need, but if you don’t use it in your decision-making process, you’d be better off not having done it at all.

In order for that data in the report to get used, it must be highly visible and personally relatable. We all know what happens when the team has to seek out the results and can’t see how they relate to their work.

Enter the Information Radiator

information radiator
a large, highly visible display used by software development teams to track progress
Photo Credit: hugovk cc

One of the best approaches I’ve ever seen to achieving salience like this is a variant on the information radiators used for things like bugs fixed, bugs reported, or server uptime.

After conducting a series of recorded usability sessions with end users, one particularly clever usability expert I know convinced the team to let him put data from the sessions on the information radiators in the office (in this case, large monitors). Rather than reduce the users to a set of charts, he compiled the recordings of each session, edited them down to the biggest pain points, and played this highlight reel of ‘users having difficulty’ in a loop on the big screens around the office.

Every time people came into the office, they saw the endless loop of users trying and failing to use the website. Having those results staring at them every day was a great way to motivate the team to fix the confusing spots, empathize with the user, and raise the salience of usability problems to a level normally reserved for technical errors.

Written by Teague Hopkins · Categorized: Main · Tagged: Evaluation, Evaluation methods, Human–computer interaction, Salience, Science, Software testing, Technology, Tests, Usability, User


Copyright © 2025 Teague Hopkins