This video is an internal guide for Product Managers at Air on running A/B tests in Google Optimize. Follow the instructions in the video. Before you start your first experiment, please read the article below. Special thank you to Kyle for helping us set up our very first A/B test at Air!

Why Run an A/B test?

As Product Managers at Air, we believe curiosity and experimentation are key to perfecting our craft. A/B testing allows us to make careful changes to our user experience (UX) while collecting data on the results. This enables us to formulate hypotheses and learn why some aspects of our UX impact user behavior. Just as importantly, it can prove our assumptions and opinions about the best experience for a given goal wrong.

More than just answering a one-off question or settling a disagreement, A/B testing can be applied repeatedly to continually improve a given experience against a goal.

By testing one change at a time, we can pinpoint which changes affected our users' behavior and which ones did not. Over time, we can combine the effect of multiple winning changes from experiments to demonstrate a new experience's measurable improvement over an old one.
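To judge whether a winning change is a real improvement rather than noise, you need a significance check on the conversion rates of the two variants. Google Optimize reports this for you, but as an illustration, here is a minimal sketch of a two-proportion z-test in plain Python. The function name and the sample numbers are hypothetical, chosen only for the example.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is variant B's conversion rate significantly
    different from control A's? Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 200/4000 conversions on A vs. 260/4000 on B
z, p = two_proportion_z_test(200, 4000, 260, 4000)
```

If the p-value falls below your chosen threshold (commonly 0.05), you can treat the difference as unlikely to be random chance and fold the change into your baseline for the next experiment.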

What is an A/B test?

An A/B test is a randomized experiment using two or more variants of the same web page (A and B). Variant A is the Original (control), and Variant B (experiment) contains at least one modified element of the original. Find more idea starters in the Google Optimize resource hub here.
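The "randomized" part matters: each visitor is assigned to a variant at random, but the same visitor should keep seeing the same variant on repeat visits. Optimize handles this assignment for you; as a sketch of the underlying idea, here is one common approach, deterministic hash-based bucketing (the function and identifiers are illustrative, not Optimize's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant by hashing the
    user id together with the experiment name, so repeat visits by the
    same user always land on the same page."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always gets the same bucket for a given experiment
assert assign_variant("user-42", "cta-color") == assign_variant("user-42", "cta-color")
```

Keying the hash on the experiment name as well as the user id means a user's bucket in one experiment doesn't influence their bucket in another.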

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/bde1b0d7-e7aa-4092-8e71-28be1dee7f5e/ab-testing.png

Before You Start Testing

Before you can start testing, you need something small to test on. Test a change to a Call To Action (CTA), change the color of a button, or remove an extraneous form field. Once you're comfortable creating variants and experiments, you can expand the scope of your testing.

Create a Hypothesis

Before creating the first experiment, you need to identify a problem, then create a hypothesis (backed up by data, of course) about what you can change to improve it.

What's the problem that you want to solve? Have conversions dropped off? Have your demographics shifted? Have engagement numbers dropped?

Once you identify the problem, assemble a cross-functional pod within your organization and solicit feedback about the cause of the problem. Use input from this pod to form your hypothesis, an educated guess that you'll validate or invalidate with experimentation.

Example Hypothesis