Product Lifecycle & Process · 6 min read · v2.0 · Updated Mar 2026

Planning and Prioritisation

Outcome-driven roadmapping, AI-specific prioritisation criteria, and planning in compressed build cycles.

TL;DR

  • A roadmap is a strategic communication tool, not a Gantt chart of deliverables.
  • When AI coding tools compress build timelines from months to weeks, planning cadence must accelerate to match.
  • Prioritisation for AI features demands new criteria: eval coverage, inference cost, agentic reliability, and data flywheel potential.

Planning and prioritisation is where customer insights and validated problems become a clear, actionable plan. It rests on three pillars: strategic roadmapping, prioritisation frameworks, and goal setting.

1. Strategic roadmapping

A product roadmap tells a story about where the product is going and why. It connects day-to-day work to a long-term vision. It is not a feature list with dates.

Key components

The vision. A clear, compelling narrative about the future you're trying to create for customers.

The outcomes. Measurable results you aim to achieve (e.g., "Improve search success rate by 15%"), not a list of features. This empowers the team to find the best solution.

The time horizons. Balance short-term execution with long-term strategy:

  • Now – well-defined work you're actively delivering
  • Next – validated opportunities planned for implementation, with some flexibility
  • Later – big, long-term strategic bets, framed as high-level themes

The Opportunity Solution Tree (OST)

A visual framework connecting desired outcomes to customer opportunities (problems or needs), potential solutions, and the experiments you run to validate those solutions. It makes the connection between strategy and execution visible and traceable.

Best practices

  • Start with the "why" – every item connects to vision and OKRs
  • Focus on problems, not solutions – give teams autonomy to find the best approach
  • Collaborate extensively – work with cross-functional partners on feasibility and alignment
  • Communicate relentlessly – share frequently to build trust
  • Tailor for your audience – leadership sees strategic bets; the team sees sequencing and problems; customers see commitments and certainty

Roadmapping in compressed build cycles

AI coding tools have changed the economics of building software. Work that took a team two months can ship in two weeks. This has real implications for planning.

The "Now" horizon expands. When your team can deliver more per cycle, the set of things you can commit to doing right now grows. Roadmap items migrate from "Next" to "Now" faster than traditional planning cadences anticipate.

Sprint planning becomes more tactical. With shorter build times, sprint planning shifts from "what can we fit in two weeks" to "what's the highest-value sequence of small bets this week." Daily reprioritisation becomes practical, not chaotic.

Roadmapping becomes more strategic. Paradoxically, as execution speed increases, the strategic layer of the roadmap matters more, not less. When you can build almost anything quickly, the discipline of choosing the right thing to build becomes your competitive advantage. The bottleneck shifts from engineering capacity to product judgement.

Planning cadence accelerates. Quarterly roadmap reviews feel too slow when the team can ship an entire feature between reviews. Move to monthly strategic reviews with weekly tactical check-ins. Keep the roadmap a living document, not a quarterly artefact.

2. Prioritisation frameworks

Prioritisation is the discipline of choosing what to build and, more importantly, what not to build. Several established frameworks exist. Use the one that fits the decision.

  • RICE (Reach x Impact x Confidence / Effort) – best for objectively scoring and ranking competing initiatives
  • Kano Model (Basic needs, Performance features, Delighters) – best for understanding customer satisfaction and balancing different types of work
  • ICE (Impact, Confidence, Ease) – a simpler, faster alternative to RICE for smaller bets
  • RUF (Reliability, Usability, Features) – best for balancing investment beyond new features, visualised as a pyramid with reliability at the base
  • The $10 Game – each participant allocates a virtual $10 across ideas, then explains their reasoning. Best for collaborative prioritisation sessions that build team alignment.

None of these frameworks are proprietary. Pick the one that matches the decision type and move on.
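The RICE formula above is simple enough to run as a spreadsheet or a few lines of code. A minimal sketch of scoring and ranking a backlog; the initiative names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: int         # users affected per quarter (assumed unit)
    impact: float      # e.g. 0.25 = minimal .. 3 = massive
    confidence: float  # 0..1
    effort: float      # person-months

    @property
    def rice(self) -> float:
        # RICE = Reach x Impact x Confidence / Effort
        return self.reach * self.impact * self.confidence / self.effort

backlog = [
    Initiative("Search relevance fix", reach=8000, impact=2.0, confidence=0.8, effort=2),
    Initiative("Onboarding revamp", reach=3000, impact=3.0, confidence=0.5, effort=4),
]

# Highest RICE score first
for item in sorted(backlog, key=lambda i: i.rice, reverse=True):
    print(f"{item.name}: RICE = {item.rice:.0f}")
```

ICE works the same way with Effort replaced by an Ease score in the numerator; the point of either is a consistent, explainable ranking, not false precision.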

AI-specific prioritisation criteria

Standard frameworks don't capture the unique trade-offs of AI features. For any AI initiative, ask these five questions alongside your chosen framework:

What is the eval coverage? If you can't measure whether the feature works, you can't prioritise it. An AI feature without an eval set is an unvalidated bet. Features with strong eval coverage are lower risk and should be prioritised accordingly.

What is the inference cost at scale? A feature that costs $0.02 per query at demo scale might cost $200,000 per month at production volume. Model the cost curve before committing. Cheaper models with acceptable quality often beat expensive models with marginally better quality.
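The cost curve is linear in volume, which is exactly why demo-scale numbers mislead. A back-of-envelope sketch, with illustrative volumes (roughly 10M queries/month for the production case):

```python
def monthly_cost(cost_per_query: float, queries_per_day: int, days: int = 30) -> float:
    """Naive linear inference cost model: per-query price x volume."""
    return cost_per_query * queries_per_day * days

# Demo scale: a few hundred queries a day barely registers.
demo = monthly_cost(0.02, 300)          # ~$180/month
# Production scale: the same per-query price becomes a budget line.
prod = monthly_cost(0.02, 333_000)      # ~10M queries/month -> ~$200k
print(f"demo: ${demo:,.0f}/mo, production: ${prod:,.0f}/mo")
```

Running the same model with a cheaper model's per-query price makes the quality/cost trade-off in the paragraph above concrete.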

Does this need agentic reliability or is single-shot sufficient? Single-shot features (summarise this document, classify this ticket) are simpler, cheaper, and more reliable. Multi-step agentic features (research this topic across five sources, then synthesise a report) compound errors at each step. Know which category your feature falls into and price the reliability work accordingly.
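The compounding claim has a simple model behind it: if each step succeeds independently with probability p, an n-step chain succeeds with probability p^n. A sketch with assumed per-step reliability:

```python
def chain_success(p_step: float, n_steps: int) -> float:
    """Probability an n-step agentic chain completes with no step failing,
    assuming independent per-step success probability p_step."""
    return p_step ** n_steps

# A step reliability that feels fine in isolation (95%) degrades fast:
print(f"single-shot: {chain_success(0.95, 1):.2f}")   # 0.95
print(f"10-step agent: {chain_success(0.95, 10):.2f}")
```

A 95%-reliable step yields only about a 60% end-to-end success rate over ten steps, which is why agentic features need retries, checkpoints, or human review priced into the estimate.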

Build vs buy at this capability level? Foundation model providers ship new capabilities quarterly. A feature you'd spend three months building might ship as a native API capability next quarter. Assess the build/buy/wait trade-off explicitly.

Does this create a data flywheel? Prioritise features where usage generates data that improves the product. A feature that gets smarter with every interaction compounds in value. A feature that's static from day one doesn't.

3. Goal setting with OKRs

OKRs (Objectives and Key Results) align day-to-day work with strategy. They shift focus from delivering features to achieving tangible outcomes.

Define aspirational objectives. Ambitious, qualitative, time-bound goals that inspire the team (e.g., "Create a delightful onboarding experience").

Attach measurable key results. Two to three specific, measurable metrics per objective that define success (e.g., "Increase user activation rate from 20% to 35%").

Align across the organisation. Cascade company-level OKRs to portfolio and team OKRs so everyone pulls in the same direction.

Track and grade. Track progress weekly. Grade each key result from 0 to 1 at quarter's end. A score of 0.6 to 0.8 indicates success and appropriately ambitious goals; consistently scoring 1.0 means you're not stretching.
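One common way to grade an objective is the average of its key results' 0-to-1 scores; a minimal sketch, with hypothetical end-of-quarter scores:

```python
def grade_objective(key_result_scores: list[float]) -> float:
    """Objective grade as the mean of its key results' 0-1 scores."""
    return sum(key_result_scores) / len(key_result_scores)

scores = [0.7, 0.8, 0.5]  # three key results, graded at quarter's end
g = grade_objective(scores)
# 0.6-0.8 is the healthy band described above.
print(f"{g:.2f}", "appropriately ambitious" if 0.6 <= g <= 0.8 else "recalibrate targets")
```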

Common OKR mistakes

  • Output-based key results – "Ship feature X" is an output, not an outcome. Use "Increase metric Y by Z%."
  • Too many OKRs – if everything is a priority, nothing is. Three objectives per team maximum.
  • Set and forget – OKRs that aren't reviewed weekly become wall decoration.
  • Punishing misses – if missing a stretch goal has consequences, teams will sandbag their targets.