Every product decision is a trade-off. You will always have more ideas than capacity. These frameworks make trade-offs explicit and give teams a shared language for saying no.
No single framework is universally best — choose based on your context, data maturity, and stakeholder needs.
## RICE

Developed at Intercom. Scores each item on four factors:
Score = (Reach × Impact × Confidence) / Effort
| Factor | Definition | Scale |
|---|---|---|
| Reach | How many users affected per quarter | Absolute number |
| Impact | Effect on each user (minimal → massive) | 0.25, 0.5, 1, 2, 3 |
| Confidence | How sure you are of estimates | 50%, 80%, 100% |
| Effort | Person-months of work | Absolute number |
Best for: data-driven teams with analytics in place. Forces you to quantify assumptions.
Watch out for: false precision. A RICE score of 47.3 is not meaningfully different from 45.1.
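As a sketch, the formula is trivial to compute once the four factors are agreed. The item names and numbers below are invented for illustration:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Score = (Reach * Impact * Confidence) / Effort.

    reach: users affected per quarter; impact: 0.25-3 scale;
    confidence: 0.5, 0.8, or 1.0; effort: person-months.
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Invented example items, ranked highest score first.
backlog = [
    ("SSO login", rice_score(reach=2000, impact=2, confidence=0.8, effort=4)),
    ("Dark mode", rice_score(reach=5000, impact=0.5, confidence=1.0, effort=2)),
]
ranking = sorted(backlog, key=lambda item: item[1], reverse=True)
```

Rounding scores to whole numbers before comparing them is one way to sidestep the false-precision trap.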
## MoSCoW
Categorises items by necessity:
- Must have — the product fails without this (non-negotiable for release)
- Should have — important but not critical (painful to omit)
- Could have — nice to have (include if capacity allows)
- Won’t have (this time) — explicitly out of scope
Best for: scope negotiation with stakeholders. “We can’t do everything — which of these are Must?”
Watch out for: everything becomes a Must. Be strict: Musts should be ~60% of capacity to leave room for Shoulds.
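The capacity rule of thumb can be checked mechanically. A minimal sketch; the backlog items and effort numbers below are made up:

```python
def must_load(items: dict[str, tuple[str, float]], capacity: float) -> float:
    """Return the fraction of capacity consumed by Must-have items."""
    must_effort = sum(effort for category, effort in items.values() if category == "Must")
    return must_effort / capacity

backlog = {
    "Checkout flow": ("Must", 8),
    "Audit log": ("Must", 4),
    "CSV export": ("Should", 3),
    "Themes": ("Could", 2),
}
load = must_load(backlog, capacity=20)  # 12 / 20 = 0.6, right at the ~60% limit
```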
## Kano Model
Classifies features by their effect on customer satisfaction:
| Category | Absent | Present |
|---|---|---|
| Basic needs | Dissatisfied | Neutral (expected) |
| Performance | Dissatisfied | Proportionally satisfied |
| Delighters | Neutral | Disproportionately satisfied |
- Basic needs (login works, pages load) — must be flawless. No credit for doing them well, but a huge penalty for doing them badly.
- Performance (speed, storage, features) — more is better, linearly. Competitive differentiators.
- Delighters (unexpected features) — create loyalty and word-of-mouth. Today’s delighter becomes tomorrow’s basic need.
Best for: understanding where investment yields the most satisfaction. Don’t over-invest in basics; don’t ignore them either.
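A toy classifier that mirrors the table above. The reaction labels are assumptions for illustration, not standard Kano survey terms:

```python
def kano_category(when_absent: str, when_present: str) -> str:
    """Map (reaction when absent, reaction when present) to a Kano category."""
    table = {
        ("dissatisfied", "neutral"): "Basic need",
        ("dissatisfied", "satisfied"): "Performance",
        ("neutral", "delighted"): "Delighter",
    }
    return table.get((when_absent, when_present), "Indifferent")
```

A full Kano study derives these reactions from paired functional/dysfunctional survey questions; this sketch only encodes the summary table.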
## Value vs Effort Matrix
A simple 2×2:
| | Low Effort | High Effort |
|---|---|---|
| High Value | Quick wins (do first) | Big bets (plan carefully) |
| Low Value | Fill-ins (do if idle) | Time sinks (avoid) |
Best for: roadmap planning sessions and quick visual prioritisation. Good for workshops with mixed audiences.
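Bucketing items into the four quadrants is a one-liner once thresholds are agreed. A sketch; the 1–10 scale and midpoint cutoff are arbitrary assumptions:

```python
def quadrant(value: int, effort: int, cutoff: int = 5) -> str:
    """Place an item (rated 1-10 on each axis) into one of the four quadrants."""
    if value >= cutoff:
        return "Quick win" if effort < cutoff else "Big bet"
    return "Fill-in" if effort < cutoff else "Time sink"
```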
## ICE Scoring
Score = Impact × Confidence × Ease (each rated 1–10)
A lightweight alternative to RICE. Less rigorous but faster to apply. Good for early-stage teams or when you need a rough ranking quickly.
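A minimal sketch of the formula, with range checks on the 1–10 ratings:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Score = Impact * Confidence * Ease, each rated 1-10."""
    for name, value in [("impact", impact), ("confidence", confidence), ("ease", ease)]:
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10")
    return impact * confidence * ease
```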
## Cost of Delay
Quantifies the economic cost of not doing something now.
Cost of Delay = value lost per unit of time
Three profiles:
- Standard — steady value, constant cost of delay
- Urgent — value drops sharply over time (regulatory deadlines, security patches)
- Diminishing — value decays slowly (market opportunity, seasonal features)
Combine with duration to get WSJF (Weighted Shortest Job First): CoD / Duration. Do the highest WSJF items first.
Best for: urgency-based decisions and economic arguments for prioritisation.
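The WSJF ordering in sketch form; item names and numbers are invented for illustration:

```python
def wsjf(cost_of_delay: float, duration: float) -> float:
    """Weighted Shortest Job First: cost of delay per unit of time."""
    return cost_of_delay / duration

# (name, cost of delay per week, duration in weeks) -- invented numbers.
items = [("Reporting revamp", 120, 8), ("Security patch", 90, 1)]
queue = sorted(items, key=lambda it: wsjf(it[1], it[2]), reverse=True)
# 90/1 = 90 beats 120/8 = 15: the patch jumps the queue despite lower total value.
```

Note how the short, urgent item wins even though its total value is smaller; that is the point of dividing by duration.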
## Opportunity Scoring
Plot features on two axes:
- Importance — how important is this outcome to the customer?
- Satisfaction — how well does the current solution satisfy them?
High importance + low satisfaction = biggest opportunity. Based on Ulwick’s Outcome-Driven Innovation.
Best for: customer-research-driven prioritisation where you have survey or interview data.
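One common formulation of the opportunity score is importance plus the unmet gap, floored at zero. A sketch, assuming 1–10 survey scales:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = importance + max(importance - satisfaction, 0).

    High importance with low satisfaction scores highest; over-served
    outcomes (satisfaction above importance) fall back to importance alone.
    """
    return importance + max(importance - satisfaction, 0)
```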
## Estimation
| Method | How it works | Best for |
|---|---|---|
| Story Points (Fibonacci) | Relative sizing: 1, 2, 3, 5, 8, 13, 21 | Teams tracking velocity over sprints |
| T-shirt Sizing | XS, S, M, L, XL | Quick roadmap-level effort classification |
| Time-based | Hours or days | Small tasks with well-understood scope |
| NoEstimates | Slice everything small; count items | Mature teams with consistent item size |
Story points measure relative complexity, not time. A 5-point story is roughly 2–3× a 2-point story, but nobody should convert points to hours.
Use Planning Poker for team estimation: reveal simultaneously, discuss outliers.
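The outlier step can be turned into a simple nudge after estimates are revealed. The "more than one Fibonacci step apart" threshold below is an assumption, not a standard rule:

```python
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def needs_discussion(estimates: list[int]) -> bool:
    """True when revealed estimates sit more than one Fibonacci step apart."""
    steps = [FIBONACCI.index(e) for e in estimates]
    return max(steps) - min(steps) > 1
```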
## Choosing a Framework
| Context | Recommended |
|---|---|
| Data-rich, quantitative culture | RICE or Cost of Delay |
| Stakeholder scope negotiation | MoSCoW |
| Customer satisfaction focus | Kano or Opportunity Scoring |
| Quick visual alignment | Value vs Effort Matrix |
| Rapid rough ranking | ICE |
| Urgency-driven decisions | Cost of Delay / WSJF |
## Anti-Patterns

- Analysis paralysis — spending more time scoring than building. The framework should take minutes, not days.
- Scoring without data — RICE with made-up numbers is just opinion with extra steps. Be honest about confidence levels.
- Using one framework for everything — different decisions need different lenses. A strategic bet needs different treatment from a bug backlog.
- HiPPO — the Highest Paid Person’s Opinion overrides the framework. If leadership can veto any score, the framework is theatre.
- Ignoring Cost of Delay — two items with the same value but different urgency should not be treated equally.
## Links
- Intercom on RICE
- ProductPlan Prioritisation Glossary
- Scaled Agile — WSJF
- OKRs — aligning priorities with objectives
- Product Discovery — ensuring you’re prioritising the right problems