I thought it was like pre/post analysis where you rolled out an update and showed it to 100% of users and then started comparing.
this absolutely does not work because you lose “causal control” — for example if there was a bug, downtime, or different cohorts arriving from SEO/social campaigns, there are SOOOOO many things that could move the metric that have nothing to do with the product update
For example:
It actually speeds things up.
When you deploy it’s safe since there is a fallback - and you can run multiple tests concurrently.
And when you find something that works - you build a culture of confidence, not endless debate (with others or yourself).
What if I wanted to A/B test the landing page copy?
Variant A (Control): “Sign up for free”
Variant B (Treatment): “Get started - it’s free!”
and you want to measure CTR (click-through rate) on the button → signup
# Flask Example
import random
from flask import Flask, request, make_response, render_template

app = Flask(__name__)

@app.route("/")
def landing_page():
    # Sticky assignment: reuse the variant from the cookie so a returning
    # visitor always sees the same copy
    variant = request.cookies.get("cta_variant")
    if variant not in ["a", "b"]:
        variant = random.choice(["a", "b"])
    if variant == "a":
        cta_copy = "Sign up for free"
    else:
        cta_copy = "Get started - it's free!"
    resp = make_response(render_template("landing.html", cta_copy=cta_copy))
    resp.set_cookie("cta_variant", variant)  # persist the assignment across visits
    return resp
# landing.html
<button id="cta-button">{{cta_copy}}</button>
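Once views and clicks are logged per variant, a common way to decide whether B actually beats A (rather than just endlessly eyeballing CTRs) is a two-proportion z-test on the click-through rates. A minimal sketch — the counts below are made up purely for illustration, and in practice you’d pull them from whatever analytics store logs the button clicks:

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Pooled two-proportion z-test for the difference in CTR (B minus A)."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 2.0% CTR on variant A vs 2.6% on variant B
z = two_proportion_z(200, 10_000, 260, 10_000)
# |z| > 1.96 → significant at the 5% level (two-sided)
```

If |z| clears ~1.96 you can call the winner with ~95% confidence; if not, keep the test running and collect more traffic before debating the copy again.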