originally published February 1, 2020
Here’s a simple tool that you can use to test whether the results of your A/B Tests are statistically significant. Happy growth hacking!
Plug in your two variations' sample sizes (n1 and n2) and estimated success rates (p1 and p2), then scroll down to Interpreting The Results to understand the results of your test.
The Hypothesis Specification explains how to formulate your A/B Test experiment.
https://codepen.io/stedmanblake/pen/ZEjapYB
Sample distributions. Red: Variation A. Green: Variation B. The Null Hypothesis is that the true population average of the distribution that generated the red sample is greater than or equal to that of the distribution that generated the green sample. The Alternative Hypothesis is that green’s underlying distribution has the higher average.
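In symbols, writing pA and pB for the true underlying success rates of Variation A (red) and Variation B (green), where these labels are introduced here just for illustration, the one-sided hypotheses in the caption can be stated as:

```latex
H_0:\ p_B \le p_A
\qquad \text{vs.} \qquad
H_1:\ p_B > p_A
```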
In statistics vernacular, we’re doing a test of “difference in proportions”, or a “two-proportion z-test”.
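If you'd like to see the arithmetic behind the calculator, here is a minimal sketch of a pooled two-proportion z-test in Python. The function name and example numbers are made up for illustration; the inputs mirror the calculator's n1, n2, p1, and p2.

```python
import math

def two_proportion_z_test(n1, p1, n2, p2):
    """Return (z statistic, one-sided p-value) for H1: p2 > p1.

    n1, n2: sample sizes; p1, p2: observed success rates.
    """
    # Pool the two samples to estimate the common success rate under H0.
    p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)
    # Standard error of the difference in proportions under H0.
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # One-sided p-value: probability of a z at least this large under H0,
    # using the standard normal CDF expressed via the error function.
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Example: 1,000 visitors per variation, 10% vs. 12% conversion.
z, p = two_proportion_z_test(n1=1000, p1=0.10, n2=1000, p2=0.12)
print(f"z = {z:.3f}, one-sided p-value = {p:.4f}")
```

The one-sided p-value answers: assuming the two variations truly convert at the same rate, how likely is a gap at least this large in B's favor?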
The data that we’re considering is analogous to a repeated coin toss. You flip the coin, and it either comes up heads or tails. Then you do it again, and again, …
The distribution that this sort of data follows is called a “Binomial Distribution”. It’s characterized by two parameters: the sample size (denoted n, the number of coin flips) and the probability of success on any given “coin flip” (denoted p).
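As a quick illustration (again just a sketch, with made-up numbers): each visitor is one “coin flip” that succeeds with probability p, and the total number of successes out of n visitors follows a Binomial(n, p) distribution.

```python
import random

def simulate_conversions(n, p, seed=42):
    """Simulate n independent visits, each converting with probability p."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() < p)

# Example: 1,000 visitors with a true 10% conversion rate.
conversions = simulate_conversions(n=1000, p=0.10)
print(f"{conversions} conversions out of 1000 visits "
      f"(observed rate {conversions / 1000:.3f})")
```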
Many business applications with a discrete outcome follow a Binomial Distribution: a visitor either clicks an ad or doesn’t, converts on a landing page or doesn’t, completes a funnel or abandons it.
As such, we often gather this data in the course of growing our businesses: optimizing our ads, websites, and funnels for conversion.