- Hey, guys, welcome back to the course. All right, so we have a big, bad list of assumptions that we've made about our proposed feature or product. And in that big, bad list, we've identified the order and priority in which we should figure out, hey, is this actually true? Now your goal at this point is to steadily and methodically start testing these assumptions one by one, ruling them out as potential issues that could blow up down the road. But we can't do that until we first solve a new problem.
An assumption list is just a raw list of things we roughly think need to be true for our product to succeed. These assumptions are not precise, and they're not particularly actionable. If I said, "Go build an MVP that tests the assumption that people are not satisfied parking in a garage," you would probably say, "I don't know what you're talking about." We need something we can more easily work with. We need to take our assumptions, flesh them out a little bit more, and roll them together into their much more specific and easier-to-deal-with sibling: the hypothesis.
So what is a hypothesis? It's a single, written, testable statement of what you believe to be true with regard to the assumption you've identified. Now I know that's a mouthful, so let's break it down and explain it a little bit more. Let's say that we had identified "people are not satisfied with parking in a garage" as a particularly hairy assumption we were making. Now remember, this is from the Xerox example that we covered previously. Check that one out if you're lost.
We would probably think of running some sort of test to see if that assumption was true. It could be as simple as doing street surveys of people leaving garages on a Friday night. Or maybe it's an ad we run that says, "Parking in garages sucks," and we see who responds to that ad. Or maybe we plan to pitch an alternative and see how many people vote with their feet and stop using the parking garages. The point here, though, is that it doesn't matter yet how we get our data or how we run our test.
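Just to make "running a test" concrete, here's a minimal sketch, in Python, of how you might score the ad version of that test. The counts and the 2% bar are hypothetical placeholders, not numbers from the course; the point is only that a test produces data you compare against a bar you chose in advance.

```python
# A rough sketch of scoring the hypothetical ad test. Every number here
# is made up; the real bar comes from the minimum criteria for success
# you set before running anything.

def ad_test_passes(impressions: int, responses: int, min_response_rate: float) -> bool:
    """Return True if the ad's response rate meets the pre-set bar."""
    if impressions == 0:
        return False  # no data, no conclusion
    return responses / impressions >= min_response_rate

# Hypothetical example: 42 responses out of 2,000 impressions against a 2% bar.
print(ad_test_passes(impressions=2000, responses=42, min_response_rate=0.02))  # True
```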
The simple fact is that we cannot construct a test around "people are not satisfied with parking." We need to get more specific. We need to identify who, exactly, we think is unsatisfied, how unsatisfied they are, and potentially things like why we think they're unsatisfied. All of this information is crucial for you to understand. Now there's a reason why MVPs are more correctly called MVP experiments. And, yes, if you say, "Oh my god, Evan, look at my MVP," I'm going to correct you, because, come on.
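If it helps to see the shape of that who, how unsatisfied, and why structure, here's a minimal sketch in Python of the same assumption fleshed out into a hypothesis. The segment, the reason, and the 20% threshold below are all hypothetical placeholders; what matters is the structure.

```python
# A minimal sketch of "getting more specific." The segment, the reason,
# and the 20% figure are hypothetical placeholders, not details from
# the course.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    who: str      # exactly who we think is unsatisfied
    belief: str   # what we believe to be true about them, and why
    measure: str  # how we'll know whether we were right

assumption = "People are not satisfied with parking in a garage."  # too vague to test

hypothesis = Hypothesis(
    who="commuters who park in downtown garages on weekdays",
    belief="are unsatisfied enough with price and wait times to try an alternative",
    measure="at least 20% of those we survey say they would switch",
)

# Rolled together into a single, written, testable statement:
print(f"We believe {hypothesis.who} {hypothesis.belief}. "
      f"We'll know we're right if {hypothesis.measure}.")
```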
Now because we don't want to waste any of our resources pursuing our new product or feature, we need to treat this entire process like a science experiment. And the best way to focus our efforts is on one singular hypothesis, something we can try to prove true. A hypothesis brings clarity not only to you, but to the rest of your team. If you skip this stage, you run the risk of, down the road, forgetting what exactly you're trying to do in the first place. So in the next couple of lectures, we're going to cover how to put these together.
As a matter of strategy, it's going to be up to you whether you want to build a hypothesis for every single assumption you have, or build one hypothesis that, if proven true, could rule out several assumptions, killing multiple assumption birds with one MVP stone. We'll cover both, so stay tuned.