A/B Testing

A/B testing is running two versions of the same page, email, or ad at the same time, sending half your audience to each, and keeping whichever one makes more of them do the thing you actually want.

That’s it. Simple in theory.
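The "send half your audience to each" part is usually done by hashing a visitor id so the same person always lands in the same variant. A minimal sketch of that 50/50 split, assuming hashed user ids (the function name and experiment label here are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "button-test") -> str:
    """Deterministically bucket a visitor into A or B (50/50 split).

    Hashing the user id together with an experiment name means the
    same visitor always sees the same variant, with no stored state,
    and different experiments split independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment is deterministic, a returning visitor never flips between variants mid-test, which would contaminate the results.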

In practice, most teams butcher it within the first week - so badly that the “winner” they crown is basically a coin flip dressed up as a decision.

The three things that kill most A/B tests

1. Testing too many things at once

You changed the headline, the button colour, the image, and the subhead. Version B won. Cool - what actually moved the needle? You don’t know. You can’t know. The whole point of A/B testing is to isolate one variable. If you’re changing four, you’re running marketing theatre.

2. Calling it after three days

Statistical significance doesn’t care that your Monday-morning review is coming up. You need enough conversions per variant (usually a few hundred minimum) before the difference between A and B is meaningfully bigger than random noise. On a typical affiliate site or small SaaS page, that can be 2-4 weeks. Not three days.

3. Running tests on pages without enough traffic

If your page gets 400 visits a month, you will never A/B test your way to a real answer. The math won’t work in any reasonable timeframe. Go fix copy, positioning, or your distribution first. Come back to A/B testing once you have traffic to split.
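You can put rough numbers on "the math won't work". A back-of-envelope sketch using the standard two-proportion sample-size formula (95% confidence, 80% power; the function name is ours, not from any library):

```python
import math

def visitors_per_variant(p_base: float, p_target: float,
                         z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough visitors needed per variant to detect a lift from
    p_base to p_target, using the classic two-proportion formula
    at ~95% confidence (z_alpha) and ~80% power (z_beta)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = (p_target - p_base) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a lift from 2.1% to 3.4% needs roughly 2,500 visitors
# per variant. At 400 visits a month (200 per variant), that's
# over a year of waiting for one answer.
needed = visitors_per_variant(0.021, 0.034)
```

Smaller lifts blow the requirement up fast: detecting 2.0% vs 2.1% needs hundreds of thousands of visitors per variant, which is why low-traffic pages can only ever detect huge swings.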

A real example

A solo affiliate runs a camping-gear review site pulling 8,000 monthly visitors. Their “Compare Prices” button sits at a 2.1% click-through rate.

They swap in a red “See Today’s Price” button for half the traffic. They wait three weeks - not three days.

The new button lands at 3.4%. Across roughly 4,000 monthly button impressions, that’s an extra ~52 people clicking through to the merchant per month. On affiliate margins, that’s real money - and it compounds forever because the test found an actual improvement, not noise.

One test. One change. Patience. That’s the whole discipline.
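You can sanity-check a result like this with a two-proportion z-test. A sketch with hypothetical counts consistent with the example above (roughly 3,000 visitors per variant over the three weeks; the exact counts are assumptions, not figures from the case):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-score: how many standard errors apart the two
    conversion rates are. |z| > 1.96 is roughly 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 63/3000 clicks (2.1%) vs 102/3000 (3.4%).
z = two_proportion_z(63, 3000, 102, 3000)  # z ≈ 3.1, well past 1.96
```

A z-score around 3 means the gap is very unlikely to be noise. If the same rates had come from only a few hundred visitors per variant, z would fall under 1.96 and the "winner" really would be a coin flip.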

We built Penfriend with A/B testing of content in mind. The same page, generated with different opening hooks or different content-structure choices, can produce measurably different conversion or dwell-time outcomes. Treating the AI draft as one variant rather than the final answer is how we think Penfriend pays for itself.

Here's how we can help you

Want a glossary just like this?

Get in touch for our DFY glossary service.