Practising product management involves continuously making decisions that impact users and companies serving those users. Relying on good mental models to make these decisions can help PMs make better decisions on average, avoid common pitfalls and build better products. To learn about useful mental models when building, growing and leading products and teams, I’m talking to leaders working on different products at a wide range of companies. This blog includes edited notes and reflections on those conversations.
I recently caught up with Thomas Arend, a Senior Director of Product at Johnson & Johnson. In his 25 years in product he has led product teams at Facebook, Airbnb, Google, Twitter, SAP, IBM and Mozilla. Here are the edited notes from our conversation. We covered a few topics and, for the sake of clarity, these notes are split into two articles. Enjoy!
Lean validation — Using limited data to make better product decisions
Depending on your scope as a PM, you might weigh in on both low-level, feature-related decisions and higher-level, long-term product and company strategy. Ideally, you could rely on reliable data to shape insights and guide your decision making. In practice, this will often not be the case. To improve the outcome of your product decisions in situations where reliable data is limited, adopting a lean validation mindset can help you be right more often and establish a framework for aligning teams on which options to pursue and why.
Lean validation boils down to finding a quick and cheap way of gathering some initial data that improves your understanding of the options you’re presented with, leaving you better informed than you were in the first place. To do this, you’ll need to know what to measure, collect data and estimate your impact on that measure, and then use your findings to inform your decisions.
Knowing what to measure
As an example, imagine you’re responsible for increasing the “engagement” of the imaginary news-aggregator app Newster. Let’s assume you’ve come up with a range of product solutions for how to drive more engaged users, and now have to decide what you and your team should spend time developing. Before you start building anything, start by answering three questions:
- Who is Newster for, and why would they care about using Newster?
- What does success look like for this product change, and what does extreme success look like?
- What metrics would I use to measure that success?
By answering the questions above, you’ll have defined the purpose of the product you want to build, the outcome that product should drive and how you can measure that success. Importantly, the metrics you use for measuring success should link up with what you’ve understood the user needs to be.
In the case of Newster, our ultimate goal might be to drive user engagement. Through answering the first question — who Newster is for — we’ve determined that Newster is for busy professionals who are often on the go, and who want to stay on top of the news cycle without spending too much of their time browsing through endless news content. Newster would successfully provide value to these users — the answer to the second question — if they were able to satisfy their desire to stay on top of the news cycle without wasting too much time consuming endless news content.
Knowing the reasons your users might want to use Newster, determining a metric for success becomes easier. At first glance, defining success for driving engaged users might seem obvious. Surely, the longer users stay active each session, viewing and interacting with news content, the more engaged they are? In the case of Newster, session length would likely be a poor success metric. As we’ve determined, Newster users are short on time and want to regularly stay up to date with the news cycle without having to invest too much time. The metric we use to measure success should reflect what success looks like for our users. In this case, our success metric should instead increase in proportion to how well Newster helps users satisfy their desire of staying up to date with the news. One such metric is the average number of distinct days on which users have used Newster over the past week.
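To make the metric concrete, here is a minimal sketch of how it could be computed from raw app-open events. The event log, user names and function name are all hypothetical, standing in for whatever your analytics pipeline actually produces:

```python
from datetime import date

# Hypothetical Newster event log: (user_id, date the app was opened).
# Duplicate opens on the same day should only count once.
events = [
    ("alice", date(2020, 6, 1)), ("alice", date(2020, 6, 1)),
    ("alice", date(2020, 6, 3)), ("alice", date(2020, 6, 5)),
    ("bob",   date(2020, 6, 2)), ("bob",   date(2020, 6, 2)),
]

def avg_distinct_active_days(events):
    """Average number of distinct days each user opened the app."""
    days_per_user = {}
    for user, day in events:
        days_per_user.setdefault(user, set()).add(day)
    return sum(len(days) for days in days_per_user.values()) / len(days_per_user)

# alice was active on 3 distinct days, bob on 1 -> average of 2.0
print(avg_distinct_active_days(events))  # -> 2.0
```

Note that a set of dates per user deduplicates repeat sessions within a day, which is exactly what makes this a frequency metric rather than a time-spent metric.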
Even this metric isn’t perfect, and it doesn’t need to be. To avoid analysis-paralysis, try to pick a metric that makes sense and reasonably reflects the real purpose for why users should turn to your product, and then move on to thinking about how you can measure your impact on that metric.
Lean validation of your assumptions
Having defined what metric you could measure to indicate how well your solution delivers value to your users (and drives engagement as a consequence), try to identify the fastest and cheapest way of testing your assumptions around how your product will impact this metric. Ask yourself what you or your team could do in the next week to verify if your solution meets your users’ needs. The key here is “the next week”. Validating a solution before you build a working version of it will allow you to save time and test your assumptions more quickly.
In the case of Newster, let’s imagine one of our solutions for driving our metric is sending users of the Newster app one more push notification on average every week. With enough active users, push notifications already set up and easy access to your data, this solution could be validated relatively quickly by running an A/B test. But what if setting up push notifications would require a more significant time investment from your team? Try to find quicker ways to validate your assumptions that don’t require a large upfront time investment, such as running feedback studies to find out how users interact (or expect to interact) with push notifications from the Newster app.
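If you do get to run the A/B test, checking whether the difference between the two groups is real comes down to a standard significance test. The sketch below uses a two-proportion z-test on made-up numbers (the group sizes and return counts are illustrative assumptions, not real Newster data):

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test comparing the conversion rates of two groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: of 1000 control users, 380 returned the next week;
# of 1000 users who received the extra push notification, 430 returned.
z, p = two_proportion_z_test(380, 1000, 430, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the p-value falls below 0.05, so the lift would count as statistically significant at the conventional threshold; with smaller samples, the same 5-point lift easily would not.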
When talking to your users, try to understand their behaviours and motivations for using Newster and how your solution would influence those. Ask them to rate on a scale from 1 to 10 how much they would like to have your solution in place. How much would they pay for it? If they wouldn’t pay for it, try to understand why not. This approach won’t give you complete information, and you should treat your findings as indications rather than facts. You’ll likely have to do further testing and validation as you continue to refine and build your solution. But by testing your assumptions early on, you raise the chance of finding out about incorrect assumptions before wasting time on a flawed solution.
Be aware of the limitations of lean validation
Lean validation can be a great way of quickly becoming more informed about a decision you have to make when the best option isn’t clear. Validating an idea in a lean and scrappy way won’t give you perfect data. The smaller your data set is, the more exposed you’ll be to statistical biases such as selection bias, and the less likely you are to detect real differences as statistically significant.
Be aware of confirmation bias, the tendency to interpret data in a way that confirms your pre-existing beliefs and assumptions. Lean validation should be thought of as a scrappy way to get closer to the truth, whatever that truth is.
Decide on what to measure by understanding who your product is for, what goals they should use your product to accomplish and what metric indicates how well your product helps them accomplish those goals.
Make a habit out of frequently trying to validate whether your assumptions hold, and do so as quickly and cheaply as you can.
Finally, use the data you collect to inform — but not determine — your decisions. As with any tool, lean validation is not always applicable, and should only be one of the tools in your decision-making toolbox. No matter how far you go in fine-tuning your tools and mental models for decision making, chances are you’ll be wrong more than you are right. But by keeping that in mind, and finding ways of validating your beliefs and assumptions, you might be able to be slightly less wrong.
Want to receive future articles in your inbox? Consider subscribing at https://www.axelthomson.com!
Originally published at https://www.axelthomson.com on June 21, 2020.