Minimum Detectable Effect: How to Set It Right for A/B Tests
The minimum detectable effect (MDE) is the smallest improvement you want to be able to detect with your A/B test. It's one of the four inputs to sample size calculation — alongside the baseline conversion rate, the significance level, and statistical power — and the one most teams get wrong. Setting the MDE too low (trying to detect tiny improvements) leads to enormous sample size requirements and tests that run for months. Setting it too high (only detecting large improvements) risks missing real but moderate effects. This guide explains how to set the MDE correctly.
🧮 Use the free tool: A/B Test Sample Size Calculator — no signup required
Open tool →

What is MDE and why it matters for sample size
MDE is the threshold below which you're willing to call an effect 'not practically significant' even if it exists. If you set MDE to 5% relative improvement, your test is powered to detect improvements of 5% or larger — but improvements of 3% would go undetected (a false negative). Sample size is inversely proportional to MDE squared: halving the MDE quadruples the required sample size. A test designed to detect a 5% improvement requires roughly 4× more users than a test designed to detect a 10% improvement. This relationship is why MDE selection is the most consequential decision in A/B test design.
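The inverse-square relationship can be sketched with the standard normal-approximation formula for comparing two proportions. This is a minimal, dependency-free illustration (z-values for a two-sided α = 0.05 and 80% power are hardcoded; the 3% baseline is a hypothetical example):

```python
from math import ceil

def sample_size_per_variant(baseline, relative_mde, z_alpha=1.96, z_beta=0.8416):
    """Approximate users per variant for a two-sided z-test on proportions
    (normal approximation, equal allocation, pooled variance)."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    delta = p2 - p1
    p_bar = (p1 + p2) / 2
    n = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / delta ** 2
    return ceil(n)

# 3% baseline: a 5% relative MDE needs roughly 4x the sample of a 10% MDE.
n_5 = sample_size_per_variant(0.03, 0.05)
n_10 = sample_size_per_variant(0.03, 0.10)
print(n_5, n_10, round(n_5 / n_10, 1))
```

The ratio comes out slightly under 4 because the variance term shifts a little with the target rate, but the squared term in the denominator dominates: halve the MDE and the required sample roughly quadruples.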
How to set MDE based on business impact
The right way to set MDE is to work backwards from business impact. Ask: what is the minimum improvement that would justify the engineering cost of implementing this change permanently? For a checkout flow change that costs 2 weeks of engineering, you might set the floor at a 5% improvement in conversion (because anything below that is probably not worth the cost and risk). For a simple copy change that costs 2 hours, you might be willing to ship it even for a 2% improvement. The MDE should reflect the economic trade-off, not an arbitrary statistical preference.
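One way to make that trade-off concrete is to compute the break-even lift: the smallest relative improvement whose extra revenue covers the implementation cost over some payback window. A minimal sketch, where all inputs (cost, traffic, revenue per conversion, payback window) are hypothetical numbers you would replace with your own:

```python
def break_even_relative_lift(implementation_cost, baseline_rate,
                             monthly_visitors, revenue_per_conversion,
                             payback_months=12):
    """Smallest relative lift whose incremental revenue over the payback
    window covers the implementation cost. All inputs are illustrative."""
    baseline_revenue = (baseline_rate * monthly_visitors
                        * revenue_per_conversion * payback_months)
    return implementation_cost / baseline_revenue

# e.g. a $20k change to a 3% checkout flow, 100k visitors/month, $50/order:
lift = break_even_relative_lift(20_000, 0.03, 100_000, 50)
print(f"{lift:.1%}")  # ~1.1% relative lift breaks even over 12 months
```

Your MDE should sit comfortably above this break-even figure: effects near the break-even line are not worth the risk of shipping, so the floor in the checkout example above might reasonably be set at 5% rather than 1.1%.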
Absolute vs. relative MDE: which to use
MDE can be expressed as an absolute change or a relative change. For a conversion rate of 3%: a 1 percentage point absolute MDE means you want to detect changes from 3% to 4% or larger. A 33% relative MDE means the same thing (1 pp absolute = 33% of the baseline 3%). Relative MDE is more intuitive for comparing tests across different baselines — a 20% relative MDE applies the same economic bar whether your baseline is 2% or 20%. Our sample size calculator accepts both formats.
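The conversion between the two forms is simple arithmetic, sketched here for the 3% baseline example from the text:

```python
def relative_to_absolute(baseline, relative_mde):
    """Relative MDE (fraction of baseline) -> absolute change in rate."""
    return baseline * relative_mde

def absolute_to_relative(baseline, absolute_mde):
    """Absolute change in rate -> relative MDE (fraction of baseline)."""
    return absolute_mde / baseline

# 3% baseline: a 1/3 relative MDE is 1 percentage point, and vice versa.
print(relative_to_absolute(0.03, 1 / 3))   # 0.01 (1 pp)
print(absolute_to_relative(0.03, 0.01))    # ~0.333 (~33% relative)
```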
Common MDE mistakes that waste months of testing
Setting MDE too low for your traffic volume is the most common mistake. Teams set a 5% relative MDE on a low-traffic page, calculate a required sample of 80,000 users per variant, and either wait 6 months for the test to complete or stop early (invalidating the results). The fix: match your MDE to your traffic reality. If your traffic will give you 1,000 users per variant per week, you need a test that can answer in 4–8 weeks — which means your MDE needs to be 15–25% relative for low baseline metrics. If that's not a realistic improvement, the honest conclusion is that you don't have enough traffic to test this feature at all.
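You can also run the calculation in reverse: given the traffic you actually have and a test-duration budget, solve for the smallest relative MDE you can realistically detect. A minimal sketch using the same normal approximation (variance taken at the baseline rate for simplicity; the z-values again assume α = 0.05 two-sided and 80% power):

```python
from math import sqrt

def feasible_relative_mde(baseline, users_per_variant_per_week, weeks,
                          z_alpha=1.96, z_beta=0.8416):
    """Roughly the smallest relative MDE detectable with the traffic you
    have, by inverting the two-proportion sample-size formula."""
    n = users_per_variant_per_week * weeks
    delta = (z_alpha + z_beta) * sqrt(2 * baseline * (1 - baseline) / n)
    return delta / baseline

# 1,000 users per variant per week, 8-week budget, 3% baseline:
print(f"{feasible_relative_mde(0.03, 1000, 8):.0%}")  # ~25% relative
```

If the answer is an improvement you don't believe the change can deliver, that is the "not enough traffic to test this" conclusion made explicit.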
MDE decision checklist
- MDE is set before the test launches, not adjusted after seeing early results
- MDE reflects the minimum economically meaningful improvement (not arbitrary)
- MDE is realistic given baseline conversion rate and available traffic
- Sample size calculated after MDE is set (to confirm feasibility)
- If required sample size implies >4 weeks at current traffic, MDE should be revisited
- Both absolute and relative MDE values are understood and documented
- Test duration is calculated and committed to before launch
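The feasibility items in the checklist can be bundled into a single pre-launch check. A sketch under the same normal-approximation assumptions as above (α = 0.05 two-sided, 80% power; the traffic figure is hypothetical):

```python
from math import ceil

def pre_launch_check(baseline, relative_mde, users_per_variant_per_week,
                     max_weeks=4, z_alpha=1.96, z_beta=0.8416):
    """Confirm the test can finish within max_weeks at current traffic;
    if not, the checklist says to revisit the MDE before launch."""
    p1, p2 = baseline, baseline * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    n = ceil((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)
             / (p2 - p1) ** 2)
    weeks = ceil(n / users_per_variant_per_week)
    return n, weeks, weeks <= max_weeks

# 3% baseline, 20% relative MDE, 5,000 users per variant per week:
n, weeks, feasible = pre_launch_check(0.03, 0.20, 5000)
print(n, weeks, feasible)
```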
Need expert help applying this?
Adasight works with scaling D2C and SaaS companies to build the analytics foundations and experimentation programs that make this work in practice.
Talk to Adasight →

Frequently asked questions
What is a typical minimum detectable effect for A/B tests?
There is no single 'typical' MDE — it depends on your baseline conversion rate and business economics. Common ranges: 10–20% relative for checkout and activation flows (where even moderate improvements have significant revenue impact), 5–10% relative for high-traffic, low-baseline metrics (like email click-through rates). Tests designed to detect effects below 5% relative typically require enormous sample sizes and are rarely practical for most teams.
Should I use absolute or relative MDE in my sample size calculation?
Both are mathematically equivalent if done correctly. Relative MDE is easier to reason about when comparing tests across different metrics with different baselines. Absolute MDE is more interpretable when communicating results (a '1.5 percentage point improvement in checkout conversion' is more concrete than a '15% relative improvement'). Many sample size calculators accept either format — our calculator supports both.
What happens if the actual effect is smaller than my MDE?
A real effect smaller than your MDE will appear as a null result — your test will not achieve statistical significance even though a real effect exists. This is a false negative, and it's the intentional consequence of your MDE choice: you pre-decided that effects smaller than your MDE aren't worth detecting. If a null result concerns you (perhaps the effect might still be worth shipping), consider whether your MDE was set at the right level — or run a follow-up test with a larger sample size.
Related guides
What Is Growth Analytics? A Complete Guide for 2026
Read guide →

The 12 Growth Analytics Metrics Every Team Should Track
Read guide →