Every product decision is a trade-off. You will always have more ideas than capacity. These frameworks make trade-offs explicit and give teams a shared language for saying no.

No single framework is universally best — choose based on your context, data maturity, and stakeholder needs.

RICE

Developed by Intercom. Scores items on four factors:

Score = (Reach × Impact × Confidence) / Effort

Factor     | Definition                               | Scale
Reach      | How many users affected per quarter      | Absolute number
Impact     | Effect on each user (minimal → massive)  | 0.25, 0.5, 1, 2, 3
Confidence | How sure you are of estimates            | 50%, 80%, 100%
Effort     | Person-months of work                    | Absolute number

Best for: data-driven teams with analytics in place. Forces you to quantify assumptions.

Watch out for: false precision. A RICE score of 47.3 is not meaningfully different from 45.1.
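
As a sketch, the RICE ranking can be computed directly from the formula. The item names and numbers below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    reach: float       # users affected per quarter
    impact: float      # 0.25, 0.5, 1, 2, or 3
    confidence: float  # 0.5, 0.8, or 1.0
    effort: float      # person-months

    @property
    def rice(self) -> float:
        # Score = (Reach × Impact × Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog; all figures are made up.
backlog = [
    Item("SSO login", reach=2000, impact=2, confidence=0.8, effort=4),
    Item("Dark mode", reach=5000, impact=0.5, confidence=1.0, effort=2),
]
for item in sorted(backlog, key=lambda i: i.rice, reverse=True):
    print(f"{item.name}: {item.rice:.0f}")
```

Given the false-precision caveat, treat nearby scores as ties rather than a strict ordering.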

MoSCoW

Categorises items by necessity:

  • Must have — the product fails without this (non-negotiable for release)
  • Should have — important but not critical (painful to omit)
  • Could have — nice to have (include if capacity allows)
  • Won’t have (this time) — explicitly out of scope

Best for: scope negotiation with stakeholders. “We can’t do everything — which of these are Must?”

Watch out for: everything becomes a Must. Be strict: Musts should consume no more than ~60% of capacity, leaving room for Shoulds and Coulds.
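
That ~60% guideline can be checked mechanically. A minimal sketch — the backlog items, effort figures, and capacity are all invented:

```python
# Check that Must-have effort stays within a fraction of team capacity.
def musts_within_budget(items, capacity, threshold=0.6):
    """items: (category, effort) pairs; effort and capacity in the same unit."""
    must_effort = sum(effort for category, effort in items if category == "must")
    return must_effort <= threshold * capacity

# Hypothetical release backlog, effort in person-days.
release = [("must", 10), ("must", 8), ("should", 6), ("could", 4)]
print(musts_within_budget(release, capacity=40))  # 18 ≤ 24 → True
```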

Kano Model

Classifies features by their effect on customer satisfaction:

Category    | Absent       | Present
Basic needs | Dissatisfied | Neutral (expected)
Performance | Dissatisfied | Proportionally satisfied
Delighters  | Neutral      | Disproportionately satisfied

Basic needs (login works, pages load) — must be flawless. No credit for doing them well, but huge penalty for doing them badly.

Performance (speed, storage, features) — more is better, linearly. Competitive differentiators.

Delighters (unexpected features) — create loyalty and word-of-mouth. Today’s delighter becomes tomorrow’s basic need.

Best for: understanding where investment yields the most satisfaction. Don’t over-invest in basics; don’t ignore them either.
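
In practice, Kano categories come from paired survey questions: how would the user feel if the feature were present (functional) and if it were absent (dysfunctional)? The sketch below collapses the standard Kano evaluation table into a few rules and deliberately ignores the reverse/questionable answer combinations:

```python
# Simplified Kano classifier. Answers are one of:
# "like", "expect", "neutral", "tolerate", "dislike".
def kano_category(functional: str, dysfunctional: str) -> str:
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # want it present, hate its absence
    if functional == "like":
        return "delighter"     # love it, but would not miss it
    if dysfunctional == "dislike":
        return "basic"         # expected; only its absence registers
    return "indifferent"

print(kano_category("expect", "dislike"))  # basic
print(kano_category("like", "neutral"))   # delighter
```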

Value vs Effort Matrix

A simple 2×2:

           | Low Effort            | High Effort
High Value | Quick wins (do first) | Big bets (plan carefully)
Low Value  | Fill-ins (do if idle) | Time sinks (avoid)

Best for: roadmap planning sessions and quick visual prioritisation. Good for workshops with mixed audiences.
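
The 2×2 amounts to two thresholds. A sketch, assuming items are scored on 1–10 scales with an illustrative midpoint of 5:

```python
def quadrant(value: float, effort: float, midpoint: float = 5) -> str:
    """Classify an item scored on 1-10 value and effort scales."""
    if value >= midpoint:
        return "quick win" if effort < midpoint else "big bet"
    return "fill-in" if effort < midpoint else "time sink"

print(quadrant(value=8, effort=2))  # quick win
print(quadrant(value=3, effort=9))  # time sink
```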

ICE Scoring

Score = Impact × Confidence × Ease (each rated 1–10)

A lightweight alternative to RICE. Less rigorous but faster to apply. Good for early-stage teams or when you need a rough ranking quickly.

Cost of Delay

Quantifies the economic cost of not doing something now.

Cost of Delay = value lost per unit of time

Three profiles:

  • Standard — steady value, constant cost of delay
  • Urgent — value drops sharply over time (regulatory deadlines, security patches)
  • Diminishing — value decays slowly (market opportunity, seasonal features)

Combine with duration to get WSJF (Weighted Shortest Job First): CoD / Duration. Do the highest WSJF items first.

Best for: urgency-based decisions and economic arguments for prioritisation.
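
The WSJF calculation itself is a one-liner. The jobs and figures below are invented for illustration:

```python
def wsjf(cost_of_delay: float, duration: float) -> float:
    # Weighted Shortest Job First: do the highest ratio first.
    return cost_of_delay / duration

# (cost of delay per week, duration in weeks) -- hypothetical figures.
jobs = {
    "security patch": (50, 1),
    "reporting revamp": (80, 8),
    "new onboarding": (60, 3),
}
ranked = sorted(jobs, key=lambda name: wsjf(*jobs[name]), reverse=True)
print(ranked)
```

Note the security patch ranks first despite the revamp's larger total value: short, urgent jobs win under WSJF.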

Opportunity Scoring

Plot features on two axes:

  • Importance — how important is this outcome to the customer?
  • Satisfaction — how well does the current solution satisfy them?

High importance + low satisfaction = biggest opportunity. Based on Ulwick’s Outcome-Driven Innovation.

Best for: customer-research-driven prioritisation where you have survey or interview data.

Estimation

Method                   | How it works                            | Best for
Story Points (Fibonacci) | Relative sizing: 1, 2, 3, 5, 8, 13, 21  | Teams tracking velocity over sprints
T-shirt Sizing           | XS, S, M, L, XL                         | Quick roadmap-level effort classification
Time-based               | Hours or days                           | Small tasks with well-understood scope
NoEstimates              | Slice everything small; count items     | Mature teams with consistent item size

Story points measure relative complexity, not time. A 5-point story is roughly 2–3× a 2-point story, but nobody should convert points to hours.

Use Planning Poker for team estimation: everyone reveals their estimate simultaneously (to avoid anchoring on the first number said aloud), then the outliers explain their reasoning.

Choosing a Framework

Context                         | Recommended
Data-rich, quantitative culture | RICE or Cost of Delay
Stakeholder scope negotiation   | MoSCoW
Customer satisfaction focus     | Kano or Opportunity Scoring
Quick visual alignment          | Value vs Effort Matrix
Rapid rough ranking             | ICE
Urgency-driven decisions        | Cost of Delay / WSJF

Anti-Patterns

Analysis paralysis — spending more time scoring than building. The framework should take minutes, not days.

Scoring without data — RICE with made-up numbers is just opinion with extra steps. Be honest about confidence levels.

Using one framework for everything — different decisions need different lenses. A strategic bet needs different treatment than a bug backlog.

HiPPO — Highest Paid Person’s Opinion overrides the framework. If leadership can veto any score, the framework is theatre.

Ignoring Cost of Delay — two items with the same value but different urgency should not be treated equally.