By Gregor Spielmann, Adasight

CRO and Analytics: How to Measure What's Worth Optimizing

Conversion rate optimization (CRO) is only as good as the analytics feeding it. Without the right analytical foundation, CRO programs optimize for the wrong thing (improving page metrics without improving business outcomes), run underpowered tests that produce unreliable results, or mistake correlation for causation in funnel analysis. This guide covers how to build the analytics foundation that makes CRO programs actually work.

🧮 Use the free tool: A/B Test Sample Size Calculator — no signup required

Open tool →

Funnel analysis: finding the highest-leverage drop-off points

The first step in any CRO program is a clean funnel analysis: a sequential view of conversion from top of funnel to the desired outcome, broken down by the key dimensions that matter (traffic source, device type, user segment, time period). The goal is to find the funnel step with the highest absolute drop-off volume — not the highest drop-off rate. Fixing a step where 40% of users drop but only 100 users pass through is far less valuable than fixing a step where 10% drop but 10,000 users pass through. The product of users × drop-off rate is the optimization priority score.
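As a rough sketch of that prioritization (the step names and user counts below are illustrative, not real data), ranking steps by users lost rather than by drop-off rate looks something like this:

```python
# Sketch: rank funnel steps by absolute drop-off volume, not drop-off rate.
# Step names and user counts are made up for illustration.
funnel = [
    ("landing_page", 50_000),
    ("product_page", 30_000),
    ("add_to_cart", 10_000),
    ("checkout", 9_000),
    ("purchase", 7_000),
]

priorities = []
for (step, users), (_, next_users) in zip(funnel, funnel[1:]):
    dropped = users - next_users   # absolute drop-off volume at this step
    drop_rate = dropped / users    # relative drop-off rate at this step
    priorities.append((step, users, drop_rate, dropped))

# Sort by users lost (users x drop-off rate), highest first.
for step, users, rate, dropped in sorted(priorities, key=lambda r: r[3], reverse=True):
    print(f"{step:<15} entered={users:>7,} drop-off={rate:6.1%} users lost={dropped:>7,}")
```

In this toy funnel the product page loses 20,000 users at a 40% drop-off rate, which outranks the checkout step even though the checkout step's rate looks alarming in isolation.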

Session replay and qualitative data: why does the drop-off happen?

Funnel analysis tells you where the problem is; session replay and user research tell you why. Tools like Hotjar, FullStory, and Heap's session replay feature let you watch actual user sessions at the drop-off point. The patterns you're looking for: rage clicks (users clicking repeatedly on a non-interactive element), form abandonment patterns, scroll depth vs. conversion correlation, and confusion indicators (users hovering over elements for a long time before leaving). This qualitative data generates the hypotheses that drive A/B tests — it's the bridge between analytics and experimentation.
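Exported event formats differ by tool, but as a minimal sketch (the event shape, element names, and thresholds here are assumptions, not any specific tool's schema), a rage-click check over a raw click log might look like this:

```python
# Sketch: flag possible rage clicks from a raw click-event log.
# The (session_id, element, timestamp) format and the thresholds are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    ("s1", "#submit-btn", datetime(2024, 5, 1, 10, 0, 0)),
    ("s1", "#submit-btn", datetime(2024, 5, 1, 10, 0, 1)),
    ("s1", "#submit-btn", datetime(2024, 5, 1, 10, 0, 2)),
    ("s1", "#submit-btn", datetime(2024, 5, 1, 10, 0, 3)),
    ("s2", "#hero-image", datetime(2024, 5, 1, 11, 0, 0)),
]

WINDOW = timedelta(seconds=5)  # look for bursts within 5 seconds
MIN_CLICKS = 3                 # "repeated clicking" threshold, tune to taste

clicks = defaultdict(list)
for session, element, ts in events:
    clicks[(session, element)].append(ts)

for (session, element), times in clicks.items():
    times.sort()
    for i, start in enumerate(times):
        burst = [t for t in times[i:] if t - start <= WINDOW]
        if len(burst) >= MIN_CLICKS:
            print(f"possible rage click: session={session} element={element} clicks={len(burst)}")
            break
```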

Segmentation: why your overall conversion rate is misleading

An overall conversion rate of 3% can hide enormous variance by segment. The same checkout might convert at 6% for desktop users and 1.5% for mobile, at 8% for returning visitors and 1% for first-time visitors, at 10% for email traffic and 2% for paid search traffic. Before running any CRO test, segment your funnel by device type, traffic source, user type (new vs. returning), and geography. The segment with the lowest conversion rate relative to its expected behavior is almost always the highest-leverage optimization opportunity — and the segment where tests will show the largest absolute improvement.
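A minimal sketch of that segmentation step, assuming a session-level export with a 0/1 converted flag (the column names and rows are illustrative):

```python
# Sketch: conversion rate by segment with pandas.
# Column names (device, source, user_type, converted) are assumptions about your export.
import pandas as pd

sessions = pd.DataFrame({
    "device":    ["desktop", "mobile", "mobile", "desktop", "mobile"],
    "source":    ["email", "paid_search", "email", "organic", "paid_search"],
    "user_type": ["returning", "new", "new", "returning", "new"],
    "converted": [1, 0, 1, 0, 0],
})

for dimension in ["device", "source", "user_type"]:
    rates = (
        sessions.groupby(dimension)["converted"]
        .agg(sessions="count", conversions="sum", rate="mean")
        .sort_values("rate")
    )
    print(f"\nConversion by {dimension}:\n{rates}")
```

The segments at the top of each sorted table (lowest rate, meaningful volume) are the candidates to compare against their expected behavior.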

The CRO test hierarchy: what to test first

Not all tests are equal. The hierarchy for most e-commerce and SaaS products:

1. Clear value proposition and headline on the primary landing or product page. The biggest impact on conversion comes from clarity, not design details.
2. Friction reduction in the checkout or signup flow: removing unnecessary fields, reducing steps, adding social proof near the conversion action.
3. Trust and credibility signals: reviews, security badges, social proof numbers.
4. Pricing presentation: plan comparison, anchoring, trial framing.
5. Design and layout details.

Most CRO programs start at (4) and (5) and miss the larger opportunity at (1) and (2).

Need expert help applying this?

Adasight works with scaling D2C and SaaS companies to build the analytics foundations and experimentation programs that make this work in practice.

Talk to Adasight →

Frequently asked questions

What is a good conversion rate for a landing page?

This depends entirely on traffic source and offer. Median landing page conversion rates by source: paid search 4–6%, organic search 2–4%, email 8–12%, direct 6–10%. These are medians — excellent landing pages in optimal niches convert at 20%+. The more relevant question is how your conversion rate compares to your historical baseline and to your internal segments, not to industry averages.

How do you know if a CRO test result is reliable?

A reliable test result has: reached its pre-calculated sample size, run for at least 7 days (to account for weekday/weekend variation), showed consistency across daily results (not a spike on one day that drove the average), and passed a simple sanity check (the variant's performance by day doesn't show a trend that suggests an implementation bug or data collection issue). Results that 'just barely' hit significance on the last day of a test should be treated with extra skepticism.
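For the pre-calculated sample size, here is a minimal sketch using the standard normal-approximation formula for a two-proportion test (the 3% baseline and 10% relative lift are illustrative inputs, not recommendations):

```python
# Sketch: visitors needed per variant, standard normal-approximation formula.
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Visitors per variant to detect a relative lift of `mde_relative` over `baseline`."""
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * pooled_var) / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. 3% baseline conversion, aiming to detect a 10% relative lift
print(sample_size_per_variant(0.03, 0.10))  # ~53,000 visitors per variant with these inputs
```

Running this before the test starts, and committing to the resulting sample size, is what makes the "reached its pre-calculated sample size" criterion meaningful.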

Should CRO teams use Bayesian or frequentist statistics?

Both are valid, but they answer different questions. Frequentist testing (the standard approach) answers: 'Is this effect real or is it noise?' Bayesian testing answers: 'What is the probability that variant A is better than variant B?' For CRO teams that need to make clear ship/don't-ship decisions, frequentist testing with pre-registered sample sizes is simpler and harder to misinterpret. For teams with lower traffic who benefit from early stopping and continuous analysis, Bayesian methods are more appropriate.
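To make the contrast concrete, here is a small sketch running both framings on the same made-up test data: a two-proportion z-test for the frequentist question and a Beta-Binomial posterior for the Bayesian one (the counts are invented for illustration):

```python
# Sketch: frequentist vs Bayesian readout on the same illustrative data.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([310, 355])       # variant A, variant B
visitors    = np.array([10_000, 10_000])

# Frequentist: is the difference distinguishable from noise?
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"two-sided p-value: {p_value:.3f}")

# Bayesian: what is the probability that B beats A?
rng = np.random.default_rng(42)
posterior_a = rng.beta(1 + conversions[0], 1 + visitors[0] - conversions[0], 100_000)
posterior_b = rng.beta(1 + conversions[1], 1 + visitors[1] - conversions[1], 100_000)
print(f"P(B > A): {(posterior_b > posterior_a).mean():.1%}")
```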
