The 9-point CRO audit that lifted a B2B SaaS pricing page CVR 38% in 30 days
Real anonymized case study from a B2B SaaS Series A engagement Xpand ran in Q1 2026. Pricing page CVR sat at 0.8% (industry benchmark 1.5 to 2.5%). The team had run two A/B tests and found 'no significant lift'. The actual issue: they were testing on a broken page. The 9-point CRO audit found 5 critical issues with ICE scores 36 to 81. Fixes shipped over 14 days. CVR lifted to 1.1% by week 2, 2.1% by week 4. Pipeline contribution from the page lifted 162% in 30 days. This post is the exact 9 points, the 5 issues found, and the fix order that produced the lift.
By the end you will have a working 30-point audit framework you can run on any landing page in 60 minutes, the ICE-scoring discipline that ranks fixes by leverage, and the test-vs-ship decision logic that tells you when to A/B test and when to just deploy. The full template lives in our CRO Audit Template.
Key takeaways:
- Pricing page CVR 0.8% → 2.1% in 30 days
- 5 critical issues found, ICE scores 36 to 81
- Top issue (hero hierarchy, ICE 81) shipped without an A/B test
- CTA color, form length, social-proof position, and mobile tap target shipped sequentially over 14 days
- Pipeline contribution +162%
- Don't A/B test on broken pages
What was the page state before the audit?
B2B SaaS pricing page targeting VPs of Demand Gen at $5M to $50M ARR companies. CVR 0.8% on a primary CTA of 'Book a 30-min call'. 11-field form. Hero H1 was the brand name (not a value prop). 4 customer logos in the bottom third of the page (below the fold on mobile). Mobile CTA tap target 32px tall (below iOS 44px minimum). 2 prior A/B tests ran on copy variants and showed no statistically significant lift.
What did the 9-point audit find?
| Issue | Impact | Confidence | Effort | ICE | Fix |
|---|---|---|---|---|---|
| Hero H1 was brand name, not value prop | 9 | 9 | 1 | 81 | Replace with 'Cut MQL→SQL handoff time 40% in 30 days' |
| CTA color blended into background | 8 | 9 | 1 | 72 | Switch CTA from #2563eb to #f59e0b accent |
| Form length: 11 fields, 6 unnecessary | 9 | 8 | 2 | 36 | Drop company size, role, country, source, phone, comments |
| Social proof below fold on mobile | 7 | 8 | 2 | 28 | Move logo bar above fold on viewports under 768px |
| Mobile CTA 32px tall (below iOS 44px) | 7 | 9 | 1 | 63 | Increase CTA to 56px on mobile breakpoint |
What is ICE scoring and how do you use it?
ICE = Impact (1-10) × Confidence (1-10) ÷ Effort (1-10), giving scores from 0.1 to 100. The framework forces a ranked fix list that prioritizes leverage over visibility. Most CRO audits produce 30 to 50 issues; ICE filters them down to the top 3 to 5 that actually move CVR. Issues above ICE 60 with an unambiguous fix ship without an A/B test (the math is too obvious to test). Issues scoring ICE 30 to 60 ship as A/B tests. Issues below ICE 30 wait until the high-leverage fixes are done.
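The scoring and the ship-vs-test thresholds above can be sketched in a few lines (a hypothetical helper, not the actual Xpand template):

```python
# Minimal ICE-ranking sketch. Each issue is scored Impact (1-10),
# Confidence (1-10), Effort (1-10); ICE = I * C / E, range 0.1 to 100.

def ice(impact: int, confidence: int, effort: int) -> float:
    """Impact x Confidence / Effort."""
    return impact * confidence / effort

issues = [
    ("Hero H1 is brand name, not value prop", 9, 9, 1),
    ("CTA color blends into background",      8, 9, 1),
    ("Form length: 11 fields",                9, 8, 2),
    ("Social proof below fold on mobile",     7, 8, 2),
    ("Mobile CTA tap target 32px",            7, 9, 1),
]

ranked = sorted(issues, key=lambda i: ice(*i[1:]), reverse=True)
for name, *scores in ranked:
    score = ice(*scores)
    # Apply the thresholds from the text: >60 ship, 30-60 test, <30 defer
    action = ("ship" if score > 60 else
              "A/B test" if score >= 30 else "defer")
    print(f"{score:>5.0f}  {action:<8}  {name}")
```

Running this reproduces the ranking in the table above, with the hero rewrite at 81 on top.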
Why ship the hero rewrite without an A/B test?
ICE 81 means the fix is unambiguous: the page lacked a value proposition entirely. Testing 'brand name' vs 'specific outcome with timeframe' on a broken page is a waste of impressions. The previous A/B tests had failed to find lift because they tested copy nuances on a foundation that needed rebuilding. After the hero rewrite shipped without a test, page CVR moved from 0.8% to 1.1% in 8 days. The remaining issues (ICE 28 to 72) shipped as targeted A/B tests over the next 21 days. The full test-vs-ship logic lives in our CRO Audit Template.
What was the fix sequence and the daily CVR trajectory?
- Day 1: Hero H1 rewritten ('Cut MQL→SQL handoff time 40% in 30 days'). Shipped without an A/B test. CVR baseline 0.8%.
- Day 3: Mobile CTA height increased to 56px. Shipped without an A/B test. CVR climbed to 0.9%.
- Day 8: Re-baselined CVR at 1.1%. Started an A/B test on form length (11 fields vs 5 fields).
- Day 15: Form-length test hit significance. Winner: 5 fields. Shipped. CVR climbed to 1.6%.
- Day 18: CTA color test started (blue vs amber).
- Day 25: CTA color test hit significance. Winner: amber. Shipped. CVR climbed to 1.9%.
- Day 30: Mobile social-proof position fix shipped. CVR settled at 2.1%.
What did the team learn that they did not expect?
Three things. First, the previous A/B tests had been 'noise' because the foundation was broken; once the hero was fixed, copy A/B tests produced meaningful lift. Second, the form-length fix was the single biggest absolute CVR mover (0.5 percentage points). Most teams underestimate form friction. Third, the mobile-CTA height fix lifted mobile-only CVR 35% even though desktop was unchanged. Mobile traffic was 60% of the page, so the absolute lift was material.
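The mobile math in the third point is a weighted blend: a mobile-only lift scales by mobile's traffic share. A quick sketch (the per-device baseline CVRs below are illustrative assumptions, not figures from the case study):

```python
# Blended CVR impact of a mobile-only fix.
mobile_share, desktop_share = 0.60, 0.40   # traffic mix from the case study
mobile_cvr, desktop_cvr = 0.010, 0.012     # assumed per-device baselines

before = mobile_share * mobile_cvr + desktop_share * desktop_cvr
after  = mobile_share * mobile_cvr * 1.35 + desktop_share * desktop_cvr
print(f"blended lift: {(after / before - 1):.1%}")
```

With 60% mobile traffic, a 35% mobile-only lift translates to roughly a 19% blended lift, which is why the fix was material despite leaving desktop untouched.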
Common mistake: A/B testing on broken pages. If your audit finds 6+ ICE-60+ issues, the page needs a rebuild, not a test. The signal-to-noise ratio is too low for tests to find meaningful lift. Fix the foundation first, then test the nuances.
What does the pipeline contribution math look like?
Pricing page sessions: 8,400 per month, stable. Pre-audit pipeline contribution: 8,400 × 0.8% × 3.5% meeting-to-revenue × $42K average deal = roughly $99K monthly attributed pipeline. Post-audit at 2.1% CVR (same traffic, same downstream conversion): 8,400 × 2.1% × 3.5% × $42K = roughly $259K monthly attributed pipeline. Net lift: $160K monthly, or +162%. The 4-hour audit plus 14 days of dev time produced an annualized $1.92M pipeline lift.
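The same math as a sketch (the meeting-to-revenue rate is set to 3.5%, the value consistent with the ~$99K and ~$259K figures above):

```python
# Pipeline-contribution sketch for the case-study numbers.
sessions = 8_400        # monthly pricing-page sessions
meeting_to_rev = 0.035  # meeting-to-revenue rate (assumption matching $99K/$259K)
deal_size = 42_000      # average deal size ($)

def monthly_pipeline(cvr: float) -> float:
    """Attributed monthly pipeline at a given page CVR."""
    return sessions * cvr * meeting_to_rev * deal_size

before = monthly_pipeline(0.008)   # pre-audit, 0.8% CVR
after  = monthly_pipeline(0.021)   # post-audit, 2.1% CVR
lift   = after - before
print(f"before ${before:,.0f}, after ${after:,.0f}, lift ${lift:,.0f}")
```

Note that the relative lift (+162%) depends only on the CVR ratio; the traffic and downstream-conversion terms cancel out.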
Run the 9-point audit on your highest-traffic conversion page first. If 4+ issues score ICE 50+, the page is broken, not nuanced. Fix the foundation before any A/B test. The full audit template walks the discipline end-to-end.
FAQ
How do you score Impact and Confidence accurately?
Impact = your honest estimate (1-10) of the CVR lift if the fix works. Confidence = how sure you are (1-10), based on prior evidence: your own data, industry benchmarks, qualitative research. Effort = engineering, design, and coordination time (1-10, lower is cheaper). Score in teams of 2-3 to reduce individual bias.
Why not run the hero rewrite as an A/B test?
ICE 81 with strong qualitative evidence means the test is wasted impressions. The original H1 had no value proposition. The new H1 has a specific outcome with timeframe. The math is too obvious. Save A/B test budget for ICE 30-60 issues where the math is closer.
How long should each A/B test run?
Until statistical significance at 95% confidence with minimum sample size of ~5,000 sessions per variant for B2B traffic. For this page (4,200 sessions per variant per week), tests took 7-10 days each.
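The ~5,000-per-variant figure is not fixed; it falls out of a standard two-proportion power calculation and depends on the baseline CVR and the effect you want to detect. A sketch (the baseline and target CVRs below are illustrative assumptions, not the case-study tests):

```python
# Required sample size per variant for a two-proportion z-test,
# the standard frequentist A/B-test power calculation.
from math import sqrt, ceil
from statistics import NormalDist

def n_per_variant(p1: float, p2: float,
                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Sessions needed per variant to detect a shift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g. detecting a lift from 1.1% to 1.7% page CVR
print(n_per_variant(0.011, 0.017))  # roughly 6,000 sessions per variant
```

Smaller expected effects push the required sample size up fast, which is why nuance tests on low-CVR pages stall.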
What if traffic is too low for A/B testing?
Below roughly 3,000 sessions per week per variant, A/B tests take 30+ days to hit significance. At that volume, ship the high-ICE fixes without a test and validate via week-over-week conversion-rate tracking. Use qualitative research (heatmaps, session recordings, user interviews) for the close calls.
Does this approach work for ecommerce pricing pages?
Same audit framework, different priority issues. Ecommerce: image quality, price visibility, trust badges, shipping cost transparency, mobile checkout flow score higher than form length. Run the full 30-point audit either way.
Want this shipped for your brand?
Book a 20-minute strategy call
We audit your current setup, show you exactly where the highest-leverage moves are, and tell you whether we are the right fit. No pitch, no commitment.