<aside> <img src="/icons/info-alternate_gray.svg" alt="/icons/info-alternate_gray.svg" width="40px" /> This is one of the 130+ canvases and exercises we developed within the StartHack Framework, a practical set of 6 workbooks to take your startup from an idea to P/M fit in 6 months.

How does it work?

  1. Start with the Hypothesis Canvas to convert your assumptions into testable hypotheses.
  2. Use the Experiment Card to draft and launch your experiment. </aside>

Hypothesis Canvas


ℹ️About

The Hypothesis Canvas is a tool for validating business assumptions. It involves writing down an assumption and converting it into a testable hypothesis using the scientific method. Assumptions are then ranked using the RICE framework to prioritize which ones to tackle first.

📋How to

Hypothesis Canvas.jpg

<aside> ℹ️ Before you start!

We suggest you hide (or fold if you have the physical copy) the right side of the canvas. The “Learn” section will not be needed until after we launch and test the experiment.

Here’s a reference to how StartHack Validation Loop works. Check ‣ for more info.


</aside>

Step 1: Write down your assumption


Start with the end in mind and phrase your assumption as “We believe…”

At the Idea stage, our assumption should be along the lines of:

We believe [Idea name] has great potential because [customer name] has [problem definition] and the [current solution] is not the best one out there.

Step 2: Convert your assumption to a testable hypothesis


It’s important to follow the scientific method when trying to validate things. The first step is to check the boxes for which part of the business model this assumption belongs to:

It’s ok if your assumption touches more than one part of your business model. However, we highly recommend limiting this to as few parts as possible: if the experiment fails, it’s usually hard to know which factor was responsible, which makes the learning part much less useful. This is also why we have two approaches to the StartHack Framework (as stated in the intro): the WISE way and the BULLET way.

Since we’re just testing out our idea, in general, we don’t have to worry about this a lot.

The next thing is to use this guideline to convert our assumption to a testable hypothesis:

  1. Does it relate to the model?
  2. Can it be proven wrong?
  3. Can it be tested? (we have an idea of how to test it; we’ll use another canvas for this… bear with us)
  4. Is it actionable?
  5. Does it have a clear expected result?
  6. Is it an if/then statement?

If all are checked, then we’ve converted an assumption to a testable hypothesis that is disconnected from human emotional bias or subjective gut feelings.

Step 3: Rank your assumption


As you work on your business, you will likely end up with many assumption cards. The key is to prioritize the most important ones and tackle them one at a time.

We’ll follow a simple prioritization framework called RICE:

  1. Risk: is this the riskiest assumption in your model that if not validated will kill the business?
  2. Impact: how impactful to your startup would this assumption be if validated successfully?
  3. Confidence: how confident are you that this would work? Support this with data or previous experience.
  4. Ease of testing: how easy and quick is it to test and get results?

If you don’t want to use a score or find it hard to assign one, we’ve included a quick mapping card:

  1. Test now > when you have high R and high ICE
  2. Test ASAP > when you have high R but low ICE
  3. Test later > when you have low R but high ICE
  4. Skip > when you have low R and low ICE
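To make the mapping card concrete, here is a minimal sketch in Python. Note that the cut-off of 3 on the 1–5 scale for splitting “low” from “high” is our own assumption for illustration; the canvas itself leaves the scoring judgment to you.

```python
def rice_action(risk, impact, confidence, ease, threshold=3):
    """Map 1-5 RICE scores to a suggested action from the quick mapping card.

    `threshold` splits "low" from "high" and is an illustrative assumption.
    """
    high_r = risk >= threshold
    # Average Impact, Confidence, and Ease into a single "ICE" signal.
    high_ice = (impact + confidence + ease) / 3 >= threshold
    if high_r and high_ice:
        return "Test now"
    if high_r:
        return "Test ASAP"
    if high_ice:
        return "Test later"
    return "Skip"

print(rice_action(risk=5, impact=4, confidence=4, ease=3))  # Test now
print(rice_action(risk=2, impact=1, confidence=2, ease=2))  # Skip
```

The scores here are hypothetical; in practice you would fill them in from your own assumption cards.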

<aside> ℹ️ After you’re done defining your assumption, you should proceed to your Experiment Card (found below) to design, build and launch your experiment.

Only once you’re done with that, you should proceed to the next step to reflect on your findings. That’s why we asked you to first fold the right section of this canvas so you don’t get distracted.

Why have the hypothesis and learning sections in one canvas? In our experience testing this with many startups, the hypothesis and the lessons learned are the most important outputs of your experimentation and need to be in one place to ensure alignment. They’re the destinations. The experiments, on the other hand, found inside the Experiment Card, are simply projects to get you there.

</aside>

Step 4: Summarize your test


This is the section where you mention the experiment you used to reach your findings. Sharing the description of the test, the SIR score, who the lead was, how long it took, and the metrics used to analyze the results will give you enough context about the experiment used to validate or invalidate this hypothesis.

Step 5: Summarize your learnings


In the “Learn” section, start by listing the actual results, followed by “We learned…”, and then write down what you learned from this experiment. Also conclude whether what you learned led you to “validated”, “invalidated” or “not sure” by circling the check, cross or question mark accordingly.

You need to be sincere with yourself here, and the best way is to reflect on the OMTM (One Metric That Matters) along with the definitions of success and failure that you already listed during the experiment inside the Experiment Card and summarized in the step above.

Step 6: Decide on your next steps


Based on the results and your learnings, write down your decision on what’s needed next. In most cases, you’re testing a hypothesis to decide whether to proceed and persevere or to pivot and change course.

In the StartHack Framework, we suggest potential routes for each stage in the Analytical Progress To PMF canvas as estimates of where we recommend you pivot in case your hypothesis gets invalidated. Each stage also has a roadmap in the first chapters explaining those potential routes.

Step 7: Optimize & recovery action


Not all experiments go as planned, and sometimes the type of experiment and how it was launched can skew the results. That’s why it’s always preferred to reflect after each experiment and list the whys, the hows, and what is needed in upcoming experiments to optimize for better results.

Experiment Card


ℹ️About

The Experiment Card provides a detailed guide on how to design, build, and measure experiments to validate business hypotheses or assumptions.

It explains the steps involved in picking potential tests, defining a key performance indicator (KPI) or "One Metric That Matters" (OMTM), sketching the experiment, defining and finding the target audience, and listing and ranking the best channels to find the audience.

The canvas also provides steps on how to translate the plan into actions and how to analyze and reflect on the results after running the experiment. It emphasizes the importance of documenting and reflecting on findings and updating your progress board.

📋How to

Experiment Card 1/2 > Design & Build your experiment


Step 1: Pick 3 potential tests to validate your assumption


“Many roads lead to Rome”; I’m sure you’re familiar with this saying. Similarly, for every business hypothesis or assumption, there are multiple ways to test and validate it.

But since many entrepreneurs are not familiar with all the possible ways, we launched “Validation Cards” in Bites. Bites is a set of tips, tricks and inspirations to help you make the best possible decisions.

<aside> 👉 Access 50+ Bites here: www.febe.io/bites/home. Customers who bought the StartHack Framework are automatically granted a full one-year FREE subscription.

Simply use the discount code sent to your email when you signed up for StartHack at checkout, or reach out to our support: [email protected]

</aside>

We want you to pick the 3 tests you think are best for validating your assumption. We then want to rank them to find the best one. We can then execute one (and keep the rest as back-up plans) or run multiple tests and compare results. The best way to rank experiments is to use the SIR score:

  1. Speed: how fast can we build and execute this experiment?
  2. Investment: how much money do we want to spend to build and execute this experiment?
  3. Reliability: how reliable would the data be in helping us learn and decide? Is it quantitative (QA) or qualitative (QL) data?

Score each from 1 to 5 and then use this formula: S × R / I
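As a quick illustration, the SIR formula can be computed like this. The test names and scores below are hypothetical examples, not part of the canvas:

```python
def sir_score(speed, investment, reliability):
    """SIR score = Speed * Reliability / Investment, each rated 1-5.

    Higher is better: fast, cheap, reliable experiments rank first.
    """
    return speed * reliability / investment

# Hypothetical candidate tests with illustrative 1-5 ratings.
tests = {
    "landing page": sir_score(speed=5, investment=2, reliability=3),
    "customer interviews": sir_score(speed=3, investment=1, reliability=4),
    "concierge MVP": sir_score(speed=2, investment=4, reliability=5),
}
best = max(tests, key=tests.get)
print(best, tests[best])  # customer interviews 12.0
```

Note how a higher Investment score lowers the rank: an expensive experiment needs to be very fast or very reliable to come out on top.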

Step 2: Define your OMTM


Now that you have your top test ranked, it’s time to define what success looks like. In short: how do we know we successfully validated our idea?

We do that by defining the OMTM (One Metric That Matters). It’s essentially another name for a KPI, but we call it the OMTM to emphasize keeping it to ONE.

How do you find the right KPI for your experiment? Well, we’ve already helped with this through our “Metrics Cards” in Bites. These are well-defined metrics with benchmarks to help you make fast decisions. Also, in the “Experiment Cards”, you’ll see suggested metrics to track.

<aside> 👉 Access 50+ Bites here: www.febe.io/bites/home. Customers who bought the StartHack Framework are automatically granted a full one-year FREE subscription.

</aside>

After picking the name of the KPI, let’s say [example], it’s time to provide a measure description by writing: “To verify that, we’ll measure [OMTM name]”.

Then we’ll define what success must look like and what failure must look like, and write that down as an accountable, documented way to call a hypothesis “validated” or “invalidated”.

Experiment Card 2/2 > Test & Measure your experiment