RICE scoring framework: how to prioritize your product backlog
Kelly Lewandowski
Last updated 19/02/2026 · 8 min read
How RICE scoring works
| Component | What it measures | How to score it |
|---|---|---|
| Reach | How many people this affects in a set time period | Real numbers (e.g., 2,000 users/quarter) |
| Impact | How much each person is affected | 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal |
| Confidence | How sure you are about your estimates | 100% = data-backed, 80% = some evidence, 50% = mostly guesswork |
| Effort | Total team time required | Person-months (include design, dev, QA, docs) |
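These four components combine into a single number: RICE = (Reach × Impact × Confidence) / Effort, with Confidence expressed as a fraction. Here is a minimal sketch of that calculation in Python; the function name and wording are illustrative, not taken from any particular tool:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach:      people affected per time period (e.g., users per quarter)
    impact:     3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal
    confidence: a fraction, e.g., 0.8 for 80%
    effort:     total person-months (design, dev, QA, docs)
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort
```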
A quick example
- Reach: 2,000 customers per quarter
- Impact: 2 (high, directly affects conversion)
- Confidence: 80% (you have A/B test data from a competitor)
- Effort: 3 person-months
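Plugging those numbers into the formula gives (2,000 × 2 × 0.8) / 3 ≈ 1,067. The absolute value means little on its own; it only matters relative to the scores of the other items on your backlog. Using the sketch above:

```python
print(round(rice_score(reach=2_000, impact=2, confidence=0.8, effort=3)))  # 1067
```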
Scoring each component well
Reach: use real numbers, not percentages
Impact: force a distribution
Confidence: start low and earn your way up
Effort: count everything, not just engineering
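Once every backlog item is scored with the same discipline, prioritization is just a sort by RICE score. A small sketch, reusing `rice_score` from above; the feature names and numbers are invented for the example:

```python
# Hypothetical backlog: (name, reach per quarter, impact, confidence, person-months)
backlog = [
    ("Checkout redesign", 2_000, 2, 0.8, 3),
    ("SSO support", 400, 3, 0.5, 4),
    ("Dark mode", 5_000, 0.5, 1.0, 1),
]

# Highest RICE score first
ranked = sorted(backlog, key=lambda item: rice_score(*item[1:]), reverse=True)

for name, *components in ranked:
    print(f"{name}: {rice_score(*components):.0f}")
```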
RICE vs MoSCoW vs WSJF
| | RICE | MoSCoW | WSJF |
|---|---|---|---|
| Type | Quantitative score | Categorical buckets | Economic score |
| Best for | Feature-level ranking | MVP scope definition | Epic/initiative-level decisions |
| Data needed | User metrics + estimates | Stakeholder judgment | Economic data + cross-team input |
| Complexity | Medium | Low | High |
| Output | Numerical priority score | Must / Should / Could / Won't | Cost-of-delay priority |
Common mistakes that skew your scores
*Image: team members write their scores on cards individually before revealing them together, which helps prevent anchoring bias.*