My take: the concern is valid, but "therefore stop testing" is the wrong conclusion.
The criticism has a legitimate core. The testing industry has been slow to reckon with acquisition costs. If your test "wins" on conversion rate but the winning experience makes your CAC worse, you might have won nothing; that's a real blind spot. But every single change you introduce might affect CAC, so stopping the program just to measure each change's impact isn't a solution either.
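To make the blind spot concrete, here's a toy calculation (every number below is invented for illustration): for paid traffic, CAC is roughly cost per click divided by conversion rate, so an on-site CVR lift can still lose to a CPC rise in the same window.

```python
# Tiny illustration of the blind spot: a conversion-rate "win" can coexist
# with a worse CAC. All numbers are made up for illustration.

def cac(cost_per_click: float, conversion_rate: float) -> float:
    """Approximate CAC for paid traffic = cost per click / conversion rate."""
    return cost_per_click / conversion_rate

baseline = cac(cost_per_click=0.50, conversion_rate=0.020)   # $25.00
test_win = cac(cost_per_click=0.60, conversion_rate=0.022)   # ~$27.27

print(f"Baseline CAC: ${baseline:.2f}")
print(f"'Winning' variant CAC: ${test_win:.2f}")  # higher despite the CVR lift
```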
What's actually going on in most cases: paid media and on-site testing run on completely different clocks. Meta sees a 5% CAC shift and someone acts that day. A valid on-site test usually needs weeks of traffic. They're measuring different things in different places, and when paid wobbles during a test window, it's easy to blame the test.
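For a sense of scale on "weeks of traffic", here's a rough sample-size sketch with assumed numbers (3% baseline conversion, a +10% relative lift worth detecting, 4,000 visitors a day split across two variants). It lands at roughly four weeks, which is why on-site tests can't react on paid media's daily clock.

```python
# Back-of-envelope check of why an on-site test needs weeks, not days.
# All inputs below are assumptions for illustration, not figures from this thread.

from math import ceil

baseline_cr = 0.03          # assumed baseline conversion rate (3%)
relative_lift = 0.10        # smallest lift we care about (+10% relative)
daily_visitors = 4000       # assumed traffic, split across two variants

p1 = baseline_cr
p2 = baseline_cr * (1 + relative_lift)
p_bar = (p1 + p2) / 2

z_alpha = 1.96              # two-sided alpha = 0.05
z_beta = 0.8416             # power = 0.80

# Standard two-proportion sample-size approximation, per variant
n_per_variant = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2

days_needed = ceil(2 * n_per_variant / daily_visitors)
print(f"~{n_per_variant:,.0f} visitors per variant -> roughly {days_needed} days of traffic")
```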
But there's something many people miss: if your numbers hurt a little while the test runs, but you walk away knowing something you'll use for the next 6, 12, 18 months... that's not a loss. That's the cost of learning something real. A temporary CAC spike that teaches you the right price point is a bargain.
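Here's the back-of-envelope version of that trade-off, with every number assumed for illustration: a modest CAC bump on the customers acquired during the test window, weighed against a small CAC improvement you get to apply for the next year.

```python
# Rough cost-of-learning math behind "a temporary CAC spike is a bargain".
# Every figure here is an assumption for illustration only.

baseline_cac = 50.0          # assumed normal CAC ($)
spike_pct = 0.08             # assumed 8% CAC bump while the test runs
test_customers = 2_000       # customers acquired during the test window

learning_benefit_pct = 0.03  # assumed 3% CAC improvement from what we learned
monthly_customers = 4_000
months_applied = 12          # how long the learning stays useful

cost_of_test = baseline_cac * spike_pct * test_customers
value_of_learning = baseline_cac * learning_benefit_pct * monthly_customers * months_applied

print(f"Cost of the spike during the test: ${cost_of_test:,.0f}")
print(f"Value of the learning over {months_applied} months: ${value_of_learning:,.0f}")
```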
What I'm curious to hear from you:
- Have you seen any true correlation between running tests and CAC spikes? What did you do about it?
- Has this changed how you structure your program at all?
- How are you handling this conversation with stakeholders? Our gut says the CAC impact is probably overstated... but that's not always how it reads in the room. How do you navigate that?