


RICE Score = Reach x Impact x Confidence x Ease.


One of the hardest problems in growth is figuring out what to work on next. Most teams have a long backlog of ideas.


How do you prioritize? RICE is one framework teams use.

Ultimately, it boils down to opportunity cost. If you run one experiment, it means you won’t be able to run another, so picking the best next experiment matters a lot.

Quick definitions

How it’s used

In a growth meeting, you’ll discuss as a team how to score each backlog experiment. Each category gets a score from 1 to 10, where 1 is bad and 10 is good.

Is the final score totally scientific? No. But in the process of fleshing everything out, you’ll often realize that one experiment is an order of magnitude better than another.
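To make the scoring concrete, here’s a minimal Python sketch. The two experiment names come from the examples later in this piece; the individual 1-10 scores are made up purely for illustration.

```python
# Minimal sketch of RICE scoring. All scores below are illustrative.
# RICE Score = Reach x Impact x Confidence x Ease, each scored 1-10.

def rice_score(reach, impact, confidence, ease):
    """Multiply the four 1-10 scores into one comparable number."""
    return reach * impact * confidence * ease

# Hypothetical backlog with made-up scores.
backlog = {
    "Send a reminder email to the waitlist": rice_score(3, 4, 7, 9),
    "Run Google Ads targeting a competing tool": rice_score(7, 6, 5, 6),
}

# Highest score first: that's the experiment to run next.
for name, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:>5}  {name}")
```

The absolute numbers matter less than the relative gaps; sorting by the product just gives you a quick shortlist to debate.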

(By the way, it’s very easy for backlogs to get long. You can have a lot of ideas. We find it saves time to just use your gut to pick your 3 to 5 favorite ideas if the backlog is long, then score those. You’ll build your intuition the more you score ideas.)

Here’s more detail on RICE.


Reach

This is the most quantifiable part of the score. It also takes the most work.

You’ll want to make some assumptions and boil the experiment down to how much it will move your North Star Metric.

For example: 5,000 waitlist members * 5% email click-through rate * 1% site conversion = 2.5 Monthly Active Users.

Here’s some more detail.

Pretend your North Star Metric is “Monthly Active Users.”

Let’s say you’re scoring the experiment “Send a reminder email to everyone on the waitlist”.

To figure out reach, do some back-of-the-napkin math and assumptions.

  1. Look up the number of people who are on the waitlist. In our example, let’s say this is about 5000 people.
  2. Let’s assume 5% of people who read our emails click the link.
  3. Let’s assume 1% of people who land on the site convert into an active user.
  4. Do the math: 5000 * 5% * 1% = 2.5 Monthly Active Users
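The steps above boil down to a few multiplications. A quick sketch, using the assumed numbers from the text:

```python
# Back-of-the-napkin reach estimate for the waitlist email experiment.
# All inputs are the assumptions from the steps above.
waitlist_size = 5000     # people on the waitlist
email_ctr = 0.05         # assume 5% of readers click the link
site_conversion = 0.01   # assume 1% of visitors become active users

reach = round(waitlist_size * email_ctr * site_conversion, 2)
print(reach, "Monthly Active Users")  # 2.5 Monthly Active Users
```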

Compare that to another experiment. Say, “Run Google Ads targeting a competing tool.”

  1. Look up how expensive the bids are for the competing tool’s keywords and take a rough average. In our example, let’s say the average is about $5, to make the math easier.


  2. Take your company’s budget. Most seed-stage startups budget $5,000 per ad channel as an initial test.

  3. Divide your budget by the average cost per click. $5,000 divided by $5 = 1,000.

  4. That means we’ll get 1,000 clicks to our site.

  5. Make sure enough people search per month to justify getting 1,000 clicks. You generally need at least 20x more searches than clicks; here, that’s 20 x 1,000 = 20,000 searches per month. In our example, 246,000 is way more than 20,000, so we’re fine.

  6. Assume 1% of landing page visits turn into an active user.

  7. Do the math: 1,000 * 1% = 10 Monthly Active Users
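Those steps, sketched the same way (again with the assumed numbers from above):

```python
# Back-of-the-napkin reach estimate for the Google Ads experiment.
avg_cpc = 5                 # assumed average cost per click, in dollars
budget = 5000               # assumed initial test budget for the channel
monthly_searches = 246_000  # assumed monthly searches for the targeted terms
site_conversion = 0.01      # assume 1% of landing page visits convert

clicks = budget / avg_cpc                 # 5000 / 5 = 1000 clicks
assert monthly_searches >= 20 * clicks    # sanity check: need 20x search volume
reach = round(clicks * site_conversion, 1)
print(reach, "Monthly Active Users")  # 10.0 Monthly Active Users
```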

The Google Ads experiment clearly has a higher reach than the waitlist email, so you’d score it higher, say a 7 vs. a 3.

You can usually get a feel for how to scale this to 1-10 after you do it for a few experiments.


Impact

We want experiments that can scale, are repeatable, and feed a growth loop.

What can scale? Cold outreach tests that eventually turn into a full sales team. Digital ads with a large audience. Landing page improvements on pages that could reach millions of people.

What’s not as scalable? Tiny markets. Tweeting from the founder’s account. Reaching out to the 3 influencers in your space.

What's repeatable? Sending a weekly newsletter. Running Google Ads that target new people searching every month. Creating a template for your sales team to close deals.

What's not as repeatable? Launching to Product Hunt. Getting press or going viral (in most cases). Anything one-off.

Rough scores


Confidence

How likely is this experiment to succeed? Inform this with your knowledge from past experiments, the industry, user learnings, and gut feel.

Map your percentage confidence to a number.
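There’s no standard mapping, but one simple convention (an assumption on our part, not part of the framework itself) is to scale your percentage confidence down to the 1-10 range:

```python
# Hypothetical mapping from percentage confidence to a 1-10 score.
def confidence_score(percent):
    """80% confident -> 8; clamped so scores stay within 1-10."""
    return max(1, min(10, round(percent / 10)))

print(confidence_score(80))  # 8
print(confidence_score(5))   # even long shots get the minimum score, 1
```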

Teams often overlook marketer-channel fit. If your background is in sales, you’ll probably be good at cold-calling. If you used to be a journalist, content and PR may be your superpower. Put simply: you’ll do a better job with channels that fit your background and personality.

One last thing teams undervalue: excitement. Some ideas are just more fun to work on, and you tend to do a better job on them. Give those a 2-3 point bump.

Rough scores


Ease

“Ease” is how many resources an experiment takes and how quickly you can launch it.

For example, building a referral program into the product takes design, engineering, and QA resources, and could take weeks to ship.

Compare that to posting in a Google Group, which one person can do in an afternoon.

Rough scores

Again, you’re looking to compare experiments at a rough order-of-magnitude level.