An Attempt to Decrease Subjectivity in Backlog Prioritization

Do not Believe Your Product Manager

The coolest part of working in IT is creating MVPs: a mess of ambiguity where every right way to go is one of thousands that are equally right. Test, throw it away, and forget it like a bad dream while making the next prototype. But no fun lasts forever; after that, any product team needs to see the whole picture of where to go next in the most transparent way possible. Prioritization has begun.

There is a big “but”: we are all people, and people are subjective as hell. We are bad at calculating numbers, making rational decisions, controlling our emotions, and so on. Combine all of that, and we are bad at prioritizing. The real problem is that many famous prioritization frameworks indulge us in staying that way.

Take a look at RICE, for example: a great framework with a pretty sleek name, desirable in its simplicity. In good hands, it is a powerful tool for articulating priorities, no doubt. And at the same time, RICE is walking a tightrope.

The only objective metric in it is the letter R (it stands for Reach). And even here, I would like to ask: reach whom? New customers, old users, my deepest emotions?

I (Impact) is an ambiguous one. Who is going to define how impactful a new feature is? Is it going to be a team poll? Do I need to imagine a user encountering the feature? Is it about business, UX, or the delightfulness of our customers? Mostly, a product manager will not ask these questions and will hallucinate this rank, making it subjective.

Then there is C (Confidence). And, c’mon, any rank here is going to be a lie. Almost everybody is either quite confident in themselves or lacks confidence entirely, so the rank reflects the person more than the evidence. Otherwise, you would have to define what confidence means for your team and measure it somehow.

E (Effort) is acceptable. We estimate it poorly in IT anyway (too many variables), and the bigger the feature, the worse our estimates get. However, that is a topic for another place.
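For concreteness, here is a minimal sketch of how a RICE score is commonly computed: Reach times Impact times Confidence, divided by Effort. Only Reach can realistically come from analytics; the rest are exactly the hand-assigned numbers discussed above. The values below are hypothetical.

# A minimal sketch of the commonly cited RICE score:
# (Reach * Impact * Confidence) / Effort.
# All numbers below are hypothetical illustrations.

def rice_score(reach, impact, confidence, effort):
    """reach: users per quarter; impact: roughly a 0.25-3 scale;
    confidence: 0.0-1.0; effort: person-months."""
    return reach * impact * confidence / effort

# Only 'reach' can be pulled from analytics; the rest is judgment.
print(rice_score(reach=4000, impact=2, confidence=0.8, effort=3))  # ~2133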

Other frameworks share the same issue: the temptation to misuse them is too strong. With all that “confidence,” ambiguity, and vague interpretations of the letters, you face the problem of poorly articulating priorities to your team. If a developer goes and stares for a while at your backlog sorted by the RICE framework, they will not understand anything. Why is it so impactful? Why are you so confident in it? Reach where? Transparency is the key to these questions, and not many frameworks can deliver this trait.

Unfold Frameworks

But how can you increase the transparency of prioritization frameworks and get rid of nearly all the subjectivity? Adding more letters to the names is not an option, of course. I’m not even sure that you need to remember all these abbreviations. They are simple, yes, but not scalable or fundamental; frameworks are derivative by their nature and have an underlying ground. In a nutshell, the idea is to unfold these frameworks as clearly as possible for your team and yourself, to show the foundation of your prioritization process.

Start with your team. If you use any prioritization model, you have already gotten plenty of internal questions about model-dictated decisions. Next time a developer or a product designer comes to you to have the priorities explained, be sure to jot down their questions, worries, and feelings, and how you address them (your own thoughts along the way are valuable notes too). And yes, it’s like an old-fashioned interview with you as the interviewee.

Then interrogate yourself in the same way. What questions loom when you think about priorities? What does Reach or Impact mean to you? What do you need to know to set priorities?

After doing this with a few colleagues and with yourself, you will come up with a bunch of questions, from which you need to draw out the most important ones (no more than 5–7).

The hardest part is done. Now you have to create a table with these questions as columns and backlog features as rows, as in the screenshot below. If needed, convert the questions into short labels (Metric-driver or Delighter in my case).

An example of the prioritization table

Value Your Features

The table is ready, and now it’s time to really prioritize. By questioning every feature under each column you ended up with, you assign a weight to every side of the feature. How do you choose the weights? Easy. Just use one of two options (a small sketch of the resulting table follows the list):

  • N-point scale (I prefer N = 5)
  • YES or NO where YES = 1 and NO = 0
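Here is a minimal sketch of what the raw, unweighted table could look like in code. The feature names and the last three question columns are hypothetical; Metric-driver and Delighter are the labels from the example above. The 5-point answers stay as 1–5, and Yes/No becomes 1/0.

# Hypothetical raw prioritization table: rows are features,
# columns are the questions distilled from the interviews.
# 5-point-scale answers stay as 1..5; Yes/No becomes 1/0.
raw_scores = {
    #                  Metric-driver  Delighter  Retention  Reach  Quick-win(Y/N)
    "Dark mode":        [2,            5,         3,         4,     1],
    "Bulk export":      [4,            2,         4,         3,     0],
    "Onboarding tour":  [5,            3,         5,         5,     0],
}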

In the next step, these weights can become heavier or lighter through some math magic, depending on how important you think each question is. The reason is that we all work in different environments and, for example, metrics for your company could be more valuable than, let’s say, the overall delightfulness of users. Or vice versa, whatever.

Job Size is essential for everybody. And the right way to measure it is to ask your team with a questionnaire, where the answers lie on the same N-point scale.
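One way to turn the questionnaire answers into a single Job Size per feature is to average the team’s estimates and round to the nearest point on the scale. The aggregation method and the votes below are my assumptions for illustration; the article only says to ask the team.

# Hypothetical team answers to "How big is this job?" on a 1..5 scale.
# Averaging and rounding is just one possible way to aggregate them.
team_answers = {
    "Dark mode":       [2, 3, 2, 2],
    "Bulk export":     [4, 5, 4, 4],
    "Onboarding tour": [3, 3, 4, 3],
}

job_size = {feature: round(sum(votes) / len(votes))
            for feature, votes in team_answers.items()}
print(job_size)  # {'Dark mode': 2, 'Bulk export': 4, 'Onboarding tour': 3}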

No more waiting, go and value features to move on!

Do Some Math

Once you know what is really important for putting the backlog in order, let’s do some math and develop the final score formula. Actually, I do not think this step is crucial: having the table, you can already choose what to do next depending on the values you’ve set. The final score is just the easiest way to order features, nothing more.

For many of us, math is scary, but do not worry; it’s pretty harmless in our prioritization model. The hardest thing here is handling the clumsiness of Excel/Google Sheets formula input.

The more questions you have, the bigger and scarier the final formula is going to be. In my case, seven columns led me to this:

ROUNDUP(
  (
    ((POWER(0.5*(1+SQRT(5)),B3+1) - POWER(0.5*(1-SQRT(5)),B3+1)) / SQRT(5)) +
    ((POWER(0.5*(1+SQRT(5)),C3+1) - POWER(0.5*(1-SQRT(5)),C3+1)) / SQRT(5)) +
    ((POWER(0.5*(1+SQRT(5)),D3+1) - POWER(0.5*(1-SQRT(5)),D3+1)) / SQRT(5)) +
    ((POWER(0.5*(1+SQRT(5)),E3+1) - POWER(0.5*(1-SQRT(5)),E3+1)) / SQRT(5)) +
    ((POWER(0.5*(1+SQRT(5)),5+1) - POWER(0.5*(1-SQRT(5)),5+1)) / SQRT(5)) +
    F3
  ) / (POWER(1.25, H3:H37)),
  1
)

In a nutshell, it’s the sum of adjusted weights divided by the Job Size. So let’s break it down.

Every column has its own formula, and most of the time they are the same. I mostly used the Fibonacci formula that you can see on the chart here:

First values of the Fibonacci sequence

I used it for most of the columns because when I was putting a “5” somewhere, I was thinking something like, “It’s a goddamn huge thing.” And the difference between “3” and “5” is much more significant than just 5 − 3 = 2; in my case, it’s 8 − 3 = 5. At the same time, the difference between “1” and “3” is not so big in my mental model.

In the case of the column with “Yes/No” values, the same formula was used, with the nuance that a “Yes” maps to the fifth value in the Fibonacci sequence, which is 8.
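Here is a short sketch of that adjustment, assuming the same mapping the spreadsheet formula uses: a weight w becomes the (w + 1)-th Fibonacci number via Binet’s closed form, so 1, 3, 5 turn into 1, 3, 8. A “Yes” is treated like a 5 (hence 8); treating “No” as 0 is my assumption.

import math

SQRT5 = math.sqrt(5)

def fib(n):
    """Binet's closed form, the same one the spreadsheet uses:
    F(n) = (phi**n - psi**n) / sqrt(5)."""
    phi = 0.5 * (1 + SQRT5)
    psi = 0.5 * (1 - SQRT5)
    return round((phi ** n - psi ** n) / SQRT5)

def adjust(weight):
    """Stretch a 0..5 weight onto the Fibonacci scale: w -> F(w + 1)."""
    return fib(weight + 1)

print([adjust(w) for w in range(6)])  # [1, 1, 2, 3, 5, 8]
# A "Yes" in the Yes/No column is treated like a 5, i.e. it becomes 8.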

Job Size is a tricky one. I had to adjust it so it would not pull high-impact features to the bottom of the prioritization table, AND so that a feature being effortless to build would not change the impact too much. But, again, it’s more about how you feel it. In my case, I decided to use a power function that looks like this graphically:

Job size dependencies
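Putting the pieces together, here is a simplified sketch of the final score as I read the spreadsheet formula above: stretch every question weight onto the Fibonacci scale, sum the adjusted weights, divide by 1.25 raised to the Job Size (which appears to be what the chart above plots), and round up to one decimal. The actual sheet treats a couple of columns slightly differently, and all feature names and weights here are hypothetical.

import math

SQRT5 = math.sqrt(5)

def fib(n):
    # Binet's closed form, as in the spreadsheet formula.
    return round(((0.5 * (1 + SQRT5)) ** n - (0.5 * (1 - SQRT5)) ** n) / SQRT5)

def final_score(question_weights, job_size):
    """Sum of Fibonacci-adjusted weights divided by 1.25**job_size,
    rounded up to one decimal (the ROUNDUP(..., 1) in the sheet)."""
    adjusted = sum(fib(w + 1) for w in question_weights)
    return math.ceil(adjusted / 1.25 ** job_size * 10) / 10

# Hypothetical features: (question weights on a 0..5 scale, job size on 1..5).
backlog = {
    "Dark mode":       ([2, 5, 3, 4, 5], 2),
    "Bulk export":     ([4, 2, 4, 3, 0], 4),
    "Onboarding tour": ([5, 3, 5, 5, 0], 3),
}

# Print the backlog ordered by the final score, highest first.
for feature, (weights, size) in sorted(
        backlog.items(), key=lambda kv: -final_score(*kv[1])):
    print(feature, final_score(weights, size))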

Of course, you can use my approach or come up with your own formula. It doesn’t matter. Eventually, you’ll start to notice how each column (question) affects the whole score. If, for example, you find out that delightfulness is less critical than business metrics, change the formula so that one adds more weight than the other with the same values (a small sketch of such a tweak follows).
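One simple way to do that is to multiply each adjusted weight by a per-question importance factor before summing. The factors and values below are purely hypothetical.

# Hypothetical per-question importance factors: business metrics count
# 1.5x, delightfulness only 0.75x, everything else stays neutral.
importance = {"Metric-driver": 1.5, "Delighter": 0.75,
              "Retention": 1.0, "Reach": 1.0, "Quick win": 1.0}

def weighted_sum(adjusted_by_question):
    # adjusted_by_question: {question_name: Fibonacci-adjusted weight}
    return sum(importance[q] * w for q, w in adjusted_by_question.items())

print(weighted_sum({"Metric-driver": 8, "Delighter": 8,
                    "Retention": 3, "Reach": 5, "Quick win": 1}))  # 27.0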

The link to my table with the formula above: https://docs.google.com/spreadsheets/d/1e8jL1vw6YYP_lpUnw7cqsD8J8HuVa3EXhrBvp_c87gw/edit?usp=sharing

Rundown

The idea of the question-based model is not to be a one-size-fits-all solution but rather to put you in a situation where you have to ask questions and be more transparent with your team and yourself. At the same time, it’s a compromise between a smooth framework like RICE and the mess inside your head.