
Thinking in Bets

How to treat every product initiative as a calculated experiment, embrace uncertainty as a feature, and systematically minimise the cost of being wrong.

TL;DR

  • Product management is not about making perfect decisions. It's about making high-quality, calculated bets.
  • Frame every initiative as a hypothesis. If you can't state what you'll measure, you're guessing.
  • AI has collapsed the cost of bets. You can now test more hypotheses, faster, at lower cost. Use that.

You will be wrong. Regularly. The skill isn't avoiding mistakes. It's minimising the cost of being wrong and maximising the learning from every outcome.

This framing is the engine of learning: it transforms uncertainty from a threat into a strategic advantage.

The four-step betting framework

1. Frame every initiative as a hypothesis

Before you build anything, articulate a clear hypothesis that links a proposed solution to a specific, measurable outcome:

"We believe that [this action] for [this user] will result in [this outcome]. We will know this is true when [this metric changes]."

This forces critical thinking about assumptions and what success actually looks like. If you can't fill in those blanks, you're not ready to build.
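One way to keep teams honest about those blanks is to capture each bet as a structured record. Below is a minimal sketch in Python; the field names, the `kill_threshold` idea (picked up again in the post-launch review), and every example value are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One bet: a proposed action linked to a measurable outcome."""
    action: str            # what we will build or change
    user: str              # who it is for
    outcome: str           # the behaviour change we expect
    metric: str            # what we will measure
    threshold: float       # the value that confirms the bet
    kill_threshold: float  # the value at which we abandon it

    def statement(self) -> str:
        return (f"We believe that {self.action} for {self.user} "
                f"will result in {self.outcome}. We will know this is "
                f"true when {self.metric} reaches {self.threshold:.0%}.")

# Example bet; every value here is made up for illustration.
bet = Hypothesis(
    action="one-click reorder",
    user="returning buyers",
    outcome="faster repeat purchases",
    metric="repeat-purchase conversion",
    threshold=0.15,
    kill_threshold=0.08,
)
print(bet.statement())
```

If any field stays blank, the bet isn't ready to place.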

AI introduces new categories of bets worth framing explicitly:

  • Model selection bets: "We believe Claude Opus 4.6 will outperform GPT-5.4 on this extraction task at 40% lower inference cost."
  • Prompt strategy bets: "We believe a chain-of-thought prompt with examples will reduce hallucination rates from 12% to under 3%."
  • Architecture bets: "We believe a multi-agent pipeline will produce higher-quality outputs than a single model call, and the latency trade-off is acceptable for this use case."

Each of these carries inference cost implications that need modelling before you commit resources. The business viability chapter covers how to model COGS and price these bets.

These are real product decisions with measurable outcomes. Treat them the same way you'd treat any feature hypothesis.
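Settling a model-selection bet looks much like settling a feature bet: run the candidates over the same dataset and compare the metric and the cost side by side. A sketch under stated assumptions; the stub models stand in for whatever client wraps each provider's API, and the cost-per-call figures are placeholders, not real pricing:

```python
from typing import Callable

def evaluate(call_model: Callable[[str], str],
             dataset: list[tuple[str, str]],
             cost_per_call: float) -> dict:
    """Score one candidate model on an extraction dataset."""
    correct = sum(1 for text, expected in dataset
                  if call_model(text).strip() == expected)
    return {"accuracy": correct / len(dataset),
            "cost": cost_per_call * len(dataset)}

# Stub candidates so the sketch runs; swap in real API wrappers.
dataset = [("Invoice #841, total $120", "$120"),
           ("Invoice #902, total $75", "$75")]
model_a = lambda text: text.split("total ")[-1]   # pretend provider A
model_b = lambda text: "$0"                       # pretend provider B

for name, model, cost in [("A", model_a, 0.010), ("B", model_b, 0.006)]:
    print(name, evaluate(model, dataset, cost))
# The bet is confirmed if the cheaper model holds accuracy within
# whatever tolerance the hypothesis specified up front.
```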

2. Test assumptions with minimal effort

Use a hierarchy of evidence to validate hypotheses, starting with the riskiest assumptions first. This maps directly to the discovery process, where every idea moves through stages of increasing confidence. Test each assumption with the least effort that can produce a clear signal:

  • AI-coded prototypes (vibe-code a working version in hours, not weeks)
  • Eval suites (run structured evaluations against test datasets before shipping AI features, covered in depth in evaluation frameworks; see the sketch after this list)
  • Landing page tests (gauge interest before building)
  • Concierge MVPs (manually perform the service before automating)
  • AI-simulated user behaviour (generate synthetic test scenarios to stress-test assumptions)
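To make the eval-suite item concrete, here is a minimal sketch of a pre-ship gate: structured cases, a pass rule, and a bar the release must clear. The `feature_stub`, the cases, and the 90% bar are all illustrative assumptions, not a recommended standard:

```python
def passes(output: str, must_include: list[str]) -> bool:
    """One case passes if the output mentions every required term."""
    return all(term.lower() in output.lower() for term in must_include)

def run_eval(feature, cases: list[dict], min_pass_rate: float = 0.9) -> bool:
    passed = sum(1 for c in cases
                 if passes(feature(c["input"]), c["must_include"]))
    rate = passed / len(cases)
    print(f"eval pass rate: {rate:.0%} (bar: {min_pass_rate:.0%})")
    return rate >= min_pass_rate

cases = [
    {"input": "Q3 revenue rose 8% on subscription growth.",
     "must_include": ["revenue", "8%"]},
    {"input": "Churn fell after the onboarding redesign.",
     "must_include": ["churn", "onboarding"]},
]

def feature_stub(text: str) -> str:
    return text  # stand-in for the real AI feature under test

if not run_eval(feature_stub, cases):
    raise SystemExit("eval gate failed: do not ship")
```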

AI tools have compressed the time and cost of every testing method on this list. A prototype that took two sprints now takes an afternoon. An eval suite that required a data science team can be built by a PM with the right tooling. This doesn't change the principle (test before you commit). It changes the economics radically. The excuse "we didn't have time to validate" no longer holds.

3. Instrument everything to measure impact

If a feature ships without a defined mechanism for measuring its impact, it is not "done." Instrument products from the outset to capture the data needed to prove or disprove hypotheses. Use the data to understand what changed, then go back to customers to understand why.
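In practice, "instrumented from the outset" means the event that proves or disproves the hypothesis is emitted from day one, tagged with the bet it belongs to. A sketch; the `track` helper, event name, and IDs are illustrative, not a specific analytics SDK:

```python
import json
import time

def track(event: str, properties: dict) -> None:
    # Stand-in for an analytics client; prints the payload that
    # would be sent so the sketch is runnable as-is.
    print(json.dumps({"event": event, "ts": time.time(), **properties}))

# Fire the event at the moment the hypothesised behaviour happens,
# tagged so post-launch review can segment results by bet.
track("reorder_completed", {
    "hypothesis_id": "bet-2026-014",      # illustrative ID
    "user_segment": "returning_buyer",
    "time_to_purchase_s": 12.4,
})
```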

4. Conduct post-launch reviews

Once a feature has been in the market long enough to generate meaningful data, review it. This is not a celebration. It's a critical analysis of your initial hypothesis. Review the data, compare results to predicted outcomes, and document key learnings.

The best product teams treat failures as valuable data. A hypothesis that's proven wrong is a success if it generates learning that improves the next bet.
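The review itself can be reduced to a decision rule against the thresholds set when the bet was placed (see the `Hypothesis` sketch in step 1). The numbers below are made up for illustration:

```python
def review(measured: float, threshold: float, kill_threshold: float) -> str:
    """Compare the measured metric to the bet's predicted outcomes."""
    if measured >= threshold:
        return "confirmed: double down"
    if measured <= kill_threshold:
        return "disproved: kill the bet, document the learning"
    return "inconclusive: keep measuring or redesign the test"

# Using the example bet from step 1: confirm at 15%, kill at 8%.
print(review(measured=0.06, threshold=0.15, kill_threshold=0.08))
# -> disproved: kill the bet, document the learning
```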

The collapsed cost of bets

When inference is cheap and prototyping takes hours, bet size shrinks and cycle time accelerates. This changes the maths of experimentation.

Previously, testing a hypothesis might cost two engineers for six weeks. That's an expensive bet, so you'd better be fairly confident before placing it. Now, the same test might cost one person and an afternoon of AI-assisted building. You can afford to be less certain going in, because the downside of being wrong is small.
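A rough back-of-envelope comparison makes the shift concrete. Every figure below is an assumption for illustration (in arbitrary currency units), not market data:

```python
# Rough cost of the same hypothesis test, before and after AI tooling.
# All rates and durations are illustrative assumptions.
ENG_WEEK = 4_000                      # loaded cost of one engineer-week

old_bet = 2 * 6 * ENG_WEEK            # two engineers for six weeks
new_bet = 0.5 * (ENG_WEEK / 5) + 50   # half a day plus ~50 of inference

print(f"old: {old_bet:,.0f}  new: {new_bet:,.0f}  "
      f"ratio: {old_bet / new_bet:.0f}x")
# ~107x cheaper at these assumptions: roughly a hundred small bets
# for the price of one big one, so cycle time, not cost, becomes
# the binding constraint.
```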

The practical effect: run more experiments per quarter. Kill losing bets faster. Double down on winners sooner. Your planning and prioritisation cadence needs to match this pace. Teams that still agonise over whether to build a prototype are being outpaced by teams that build three prototypes and let the data decide.

The diagnostic question

Ask any PM on your team: "What is your current hypothesis, and what would cause you to abandon it?"

If they can answer precisely, with a metric and a threshold, they're thinking in bets. If the answer is vague, or if they've never considered the failure condition, they're building on faith. Faith ships features. Bets ship learning. In a world where AI has made building cheap, the team that learns fastest wins.

v2.1 · Updated Apr 2026