Conversion Rate

Conversion Rate is the percentage of visitors who complete the action you wanted them to take - buying, signing up, downloading, booking a call, whatever the page exists to drive. It's the single metric most teams obsess over, and the one most teams misunderstand.

The misunderstanding: a higher conversion rate isn’t always better. A landing page that converts at 12% but pulls in low-intent traffic that churns in two weeks is worse than a page that converts at 4% and brings in customers who stick around for 18 months. Conversion rate divorced from conversion quality is a vanity metric.

How the math actually works

Conversions ÷ visitors × 100. If two visitors out of every 100 sign up, the conversion rate is 2%. Standard ranges by context (very rough):

Ecommerce homepage: 1-3%. Retail averages tend to land around 2.5%.
SaaS free trial signup landing page: 3-7%. Higher for category leaders, lower for narrow niches.
Lead generation form for a high-ticket B2B service: 1-5%. The variance comes from offer-fit and trust signals more than design.
Email opt-in via content upgrade: 10-30%. Much higher because the audience is already on a topic-specific page.
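The formula is simple enough to sketch in a few lines. A minimal example, with illustrative numbers only:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions divided by visitors, expressed as a percentage."""
    if visitors <= 0:
        raise ValueError("visitors must be greater than zero")
    return conversions / visitors * 100

# Two signups from 100 visitors:
print(conversion_rate(2, 100))  # 2.0
```

The denominator matters as much as the numerator: counting unique visitors versus total sessions, or all traffic versus only the traffic that reached the page, will shift the figure before any optimisation happens.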

“Average” is meaningless without specifying the funnel stage and intent. A 2% conversion rate from cold traffic is a great result. The same 2% from warm trial users who already entered card details is a disaster.

Where conversion rate work goes wrong

Three patterns:

Optimising the wrong page. Teams spend weeks A/B testing button colours on a homepage that gets 4% of total traffic, while the pricing page (which 60% of converters visit) hasn’t been touched in two years. Find the page where the actual decision happens first.

Treating CRO as design tweaks instead of fundamental fixes. If your conversion rate is 0.4% on a SaaS signup page, no headline tweak will get you to 4%. The issue is offer-market fit, pricing model, or trust - not the colour of the CTA button.

Ignoring downstream quality. The team optimising the trial-signup page hits its KPI of “trial signups.” The product team then sees 70% of those trials never log in twice. The “wins” were low-intent visitors who would previously have been filtered out by a different page - a filter the optimisation quietly broke.

An example

A SaaS founder running a self-serve product was sitting at 2.1% conversion on the homepage signup. They spent six weeks A/B testing variants of the hero section - different headlines, different CTA copy, different above-the-fold layouts. Best variant lifted to 2.4%. Marginal.

The unblocking insight came from looking at where converters actually came from. 70% of paid signups had visited the comparison-vs-competitor page before signing up. That page had been written 18 months ago, hadn’t been touched since, and was where the actual decision was being made. They rewrote that page (clearer differentiation, three concrete customer stories, an honest “where the competitor is better” section) and homepage conversions dropped slightly while overall paid signups went up 38%.

The conversion decision was being made on a different page than the one being measured. Common pattern.
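The arithmetic of the example above is worth spelling out, because "homepage rate down, total signups up" sounds contradictory until you model it. A toy sketch with hypothetical traffic numbers (not the founder's actual data), showing how a 38% overall lift coexists with a dip in the measured page's rate:

```python
# Hypothetical traffic model: the homepage rate can dip while
# overall paid signups rise, if a downstream page improves.
visitors = 10_000

# Before: homepage converts 2.1% of all visitors.
before_signups = visitors * 0.021              # 210 signups

# After the comparison-page rewrite: homepage dips to 2.0%,
# but the comparison page now converts visitors who used to bounce.
after_homepage = visitors * 0.020              # 200 signups
after_comparison = visitors * 0.009            # 90 extra signups
after_signups = after_homepage + after_comparison  # 290 signups

lift = (after_signups - before_signups) / before_signups
print(f"{lift:.0%}")  # 38%
```

The measured page looks slightly worse; the funnel as a whole is substantially better. Optimising a single page's rate in isolation would have rejected this change.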

We built Penfriend to produce content whose downstream conversion rate is measurable and improvable. Content that ranks but doesn’t convert is a failed campaign; Penfriend’s briefs tie generation to conversion intent, not just to keyword-matching.

Related terms

Here's how we can help you

Want a glossary just like this?

Get in touch for our DFY glossary service.