What I Thought A/B Testing Was!

“A/B testing is Pre/Post Analysis”

I thought it was like pre/post analysis: roll out an update, show it to 100% of users, and then compare metrics before and after.

This absolutely does not work because there is no causal control. If there was a bug, downtime, or a different cohort arriving from an SEO or social campaign, the metric moves for reasons that have nothing to do with the product update - there are SOOOOO many of these.
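To make the confounding concrete, here's a toy simulation (all numbers are invented): the "update" changes nothing, but a traffic-mix shift in week 2 makes a pre/post comparison look like the update hurt conversions.

```python
import random

random.seed(42)

def conversions(n, rate):
    """Simulate n visitors, each converting independently at the given rate."""
    return sum(random.random() < rate for _ in range(n))

# Week 1 (before the update): 10,000 visitors, mostly high-intent, ~6% conversion.
before = conversions(10_000, 0.06) / 10_000

# Week 2 (after the update): the update itself does nothing, but a social
# campaign floods the site with low-intent visitors who convert at ~2%.
after = (conversions(5_000, 0.06) + conversions(5_000, 0.02)) / 10_000

print(f"before: {before:.3f}, after: {after:.3f}")
# A pre/post analysis would blame the update for the drop;
# the traffic mix is the real cause.
```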

Another misconception I had:

“A/B Testing Slows You Down”

It actually speeds things up.

When you deploy, it's safe since there is a fallback (the control is still live) - and you can run multiple tests concurrently.

And when you find something that works, you build a culture of confidence instead of endless debate (with others or with yourself).


Putting it practically

“What if I wanted to A/B test the landing page copy?”

Variant A (Control): “Sign up for free”

Variant B (Treatment): “Get started - it’s free!”

and you want to measure CTR (click-through rate) on the button → signup.

# Flask Example

import random
from flask import Flask, request, make_response, render_template

app = Flask(__name__)

@app.route("/")
def landing_page():
    # Sticky assignment: reuse the variant from the cookie if it's valid
    variant = request.cookies.get("cta_variant")
    if variant not in ("a", "b"):
        variant = random.choice(["a", "b"])

    if variant == "a":
        cta_copy = "Sign up for free"          # control
    else:
        cta_copy = "Get started - it's free!"  # treatment

    resp = make_response(render_template("landing.html", cta_copy=cta_copy))
    # Persist the assignment so the same user keeps seeing the same variant
    resp.set_cookie("cta_variant", variant, max_age=30 * 24 * 3600)
    return resp

# landing.html

<button id="cta-button">{{cta_copy}}</button>
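To actually compute CTR you also need to count impressions and clicks per variant. Here's a minimal in-memory sketch; a real setup would log events to a database or analytics pipeline, and `record_impression` / `record_click` are hypothetical helpers you'd call from the landing route and from a click-through endpoint that reads the `cta_variant` cookie.

```python
from collections import defaultdict

# In-memory counters (assumption: a real system would persist these)
impressions = defaultdict(int)
clicks = defaultdict(int)

def record_impression(variant: str) -> None:
    """Call when the landing page renders for this variant."""
    impressions[variant] += 1

def record_click(variant: str) -> None:
    """Call when the CTA button for this variant is clicked."""
    clicks[variant] += 1

def ctr(variant: str) -> float:
    """Click-through rate = clicks / impressions (0.0 if never shown)."""
    shown = impressions[variant]
    return clicks[variant] / shown if shown else 0.0

# Example: 200 impressions and 11 clicks for variant "b"
for _ in range(200):
    record_impression("b")
for _ in range(11):
    record_click("b")

print(f"CTR(b) = {ctr('b'):.3f}")  # 11 / 200 = 0.055
```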

Sub-questions I had about experiment implementation:

Well, how do you know how many people you need to show a feature to?
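I later learned this is a power calculation. Here's a rough sketch using the standard two-proportion z-test sample-size formula (the baseline 5% CTR and hoped-for 6% lift are made-up numbers, and the defaults of 5% significance / 80% power are just the usual conventions):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per group to detect a shift from rate p1 to p2
    with a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2                      # pooled rate
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: baseline CTR of 5%, hoping to detect a lift to 6%
print(sample_size_per_group(0.05, 0.06))  # roughly 8,000+ visitors per group
```

The takeaway that surprised me: detecting a small lift on a small baseline rate takes far more traffic than intuition suggests.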