The Hypothesis Canvas is a tool for validating business assumptions. It involves writing down an assumption and converting it into a testable hypothesis using the scientific method. Assumptions are then ranked using the RICE framework to prioritize which ones to tackle first.
<aside> ℹ️ Before you start!
We suggest you hide (or fold if you have the physical copy) the right side of the canvas. The “Learn” section will not be needed until after we launch and test the experiment.
Here’s a reference to how the StartHack Validation Loop works.
</aside>
Start with the end in mind and phrase your assumption as “We believe…”
In the Idea stage, our assumption should be along the lines of:
I think [Idea name] has great potential because [customer name] has [problem definition] and the [current solution] is not the best one out there.
It’s important to follow the scientific method when validating assumptions. The first step is to check the boxes for which part of the business model this assumption belongs to:
It’s OK if your assumption touches more than one part of your business model. However, we highly recommend limiting this to as few parts as possible: if the experiment fails, it’s usually hard to know which factor was responsible, which makes the learning far less useful. This is also why we have two approaches to the StartHack Framework (as stated in the intro): the WISE way and the BULLET way.
Since we’re just testing our idea at this stage, we generally don’t have to worry about this much.
The next step is to use this guideline to convert our assumption into a testable hypothesis:
If all the boxes are checked, we’ve converted the assumption into a testable hypothesis that is free of emotional bias and subjective gut feeling.
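The checklist itself lives on the canvas, but the underlying idea can be sketched in code: a hypothesis is testable only when it pairs a concrete belief with a measurable metric, an explicit success bar, and a time box. Everything below (the field names, the 5% threshold, the example belief) is our own illustrative assumption, not the canvas’s exact checklist:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HypothesisCard:
    belief: str       # "We believe ..." statement
    metric: str       # the measurable signal we will observe
    threshold: float  # success bar for that metric
    deadline: date    # time box for the experiment

    def is_testable(self) -> bool:
        # Testable = a concrete belief, a measurable metric, and an
        # explicit success bar; no subjective gut feeling involved.
        return bool(self.belief.strip()) and bool(self.metric.strip()) and self.threshold > 0

# Illustrative example (the wording and numbers are made up):
h = HypothesisCard(
    belief="We believe freelancers will sign up to track invoices",
    metric="landing-page signup conversion",
    threshold=0.05,             # 5% conversion counts as validated
    deadline=date(2025, 3, 31),
)
print(h.is_testable())  # True
```

A card missing any of these fields (an empty metric, no success bar) would fail the check, which mirrors the point of the guideline: vague assumptions can’t be validated or invalidated.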
As you work through your business, you will accumulate many assumption cards. The key is to prioritize the most important ones and tackle them one at a time.
We’ll follow a simple prioritization framework called RICE:
If you don’t want to use a score, or find it hard to come up with one, we’ve included a quick mapping card:
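RICE commonly stands for Reach, Impact, Confidence, and Effort, with the score computed as (Reach × Impact × Confidence) ÷ Effort. A minimal sketch of ranking assumption cards this way; the scales (people per quarter, 0.25–3 impact, person-weeks) and the example cards are our own assumptions, not values from the canvas:

```python
from dataclasses import dataclass

@dataclass
class AssumptionCard:
    name: str
    reach: float       # e.g. customers affected per quarter (assumed scale)
    impact: float      # e.g. 0.25 (minimal) to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # e.g. person-weeks; must be > 0

    def rice_score(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical assumption cards, for illustration only.
cards = [
    AssumptionCard("Freelancers struggle with invoicing", 500, 3, 0.8, 2),
    AssumptionCard("They would pay for a fix", 500, 2, 0.5, 4),
]

# Tackle the highest-scoring assumption first, one at a time.
for card in sorted(cards, key=AssumptionCard.rice_score, reverse=True):
    print(f"{card.name}: {card.rice_score():.0f}")
```

Dividing by effort is what keeps the ranking honest: a high-impact assumption that takes months to test can still lose to a smaller one you can test this week.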
<aside> ℹ️ After you’re done defining your assumption, you should proceed to your Experiment Card (found below) to design, build and launch your experiment.
Only once you’re done with that should you proceed to the next step and reflect on your findings. That’s why we asked you to fold the right section of this canvas first, so you don’t get distracted.
Why put the hypothesis and learning sections in one canvas? In our experience testing this with many startups, the hypothesis and the lessons learned are the most important parts of your experimentation and need to live in one place to ensure alignment. They’re the destinations. The experiments, on the other hand, found inside the Experiment Canvas, are simply projects to get you there.
</aside>
This is the section where you mention the experiment you used to reach whatever findings you have. Sharing the description of the test, the SIR score, who the lead was, how long it took, and the metrics used to analyze the results will give you enough context about the experiment used to validate or invalidate this hypothesis.
In the “Learn” section, start by listing the actual results, followed by “we learned…”, and then write down what you learned from this experiment. Also conclude whether what you learned validated, invalidated, or left you unsure about the hypothesis by circling the check, cross, or question mark accordingly.
You need to be honest with yourself here, and the best way is to reflect on the OMTM (one metric that matters) along with the definitions of success and failure that you already listed during the experiment inside the Experiment Canvas and summarized in the step above.
Based on the results and your learnings, write down your decision on what’s needed next. In most cases, you’re testing a hypothesis to decide whether to proceed and persevere, or to pivot and change course.
In the StartHack Framework, we suggest potential routes for each stage in the Analytical Progress To PMF canvas as estimates of where we recommend you pivot in case your hypothesis is invalidated. Each stage also has a roadmap in the first chapters explaining those potential routes.
Not all experiments go as planned, and sometimes the type of experiment and how it was launched can affect the results. That’s why it’s always best to reflect after each experiment and list the whys, the hows, and what’s needed in upcoming experiments to optimize for better results.