Activity

[Contribution activity calendar, Jun–May]

Memberships

G.E.M. by Intelligems

74 members • Free

1 contribution to G.E.M. by Intelligems
Does A/B testing hurt CACs? Let's talk about this openly
My take: the concern is valid, but "therefore stop testing" is the wrong conclusion.

The criticism has a legitimate core. The testing industry has been slow to reckon with acquisition costs. If your test "wins" on conversion rate but the winning experience makes your CAC worse, you might have won nothing; that's a real blind spot. But every single change you introduce might affect your CACs, so not measuring their impact is not a solution.

What's actually going on in most cases: paid media and on-site testing run on completely different clocks. Meta sees a 5% CAC shift and someone acts that day; a valid on-site test usually needs weeks of traffic. They're measuring different things in different places, and when paid wobbles during a test window, it's easy to blame the test.

But there's something many people miss: if your numbers hurt a little while the test runs, but you walk away knowing something you'll use for the next 6, 12, or 18 months, that's not a loss. That's the cost of learning something real. A temporary CAC spike that teaches you the right price point is a bargain.

What I'm curious to hear from you:
- Have you seen any true correlation between running tests and CAC spikes? What did you do about it?
- Has this changed how you structure your program at all?
- How are you handling this conversation with stakeholders? Our gut says the CAC impact is probably overstated... but that's not always how it reads in the room. How do you navigate that?
3 likes • 5d
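A rough sense of scale for the claim above that a valid on-site test needs weeks of traffic: a minimal two-proportion sample-size sketch. The 3% baseline CVR, 10% relative lift, and 5,000 sessions/day below are illustrative assumptions, not numbers from the post.

```python
from math import ceil

# Minimal sample-size sketch for a CVR A/B test (normal approximation,
# two-sided alpha = 0.05, 80% power). All inputs below are assumptions.
baseline_cvr = 0.03             # assumed baseline conversion rate
relative_lift = 0.10            # smallest relative lift worth detecting
daily_sessions = 5_000          # assumed traffic, split across two variants

p1 = baseline_cvr
p2 = baseline_cvr * (1 + relative_lift)
z_alpha, z_beta = 1.96, 0.8416  # z-scores for alpha = 0.05 and 80% power

# Visitors needed per variant to detect the shift from p1 to p2
n_per_variant = ((z_alpha + z_beta) ** 2
                 * (p1 * (1 - p1) + p2 * (1 - p2))
                 / (p2 - p1) ** 2)

days_needed = ceil(2 * n_per_variant / daily_sessions)
print(f"~{ceil(n_per_variant):,} visitors per variant, ~{days_needed} days of traffic")
```

Under those assumptions the answer is on the order of 50,000 visitors per variant, roughly three weeks of traffic, while Meta's CAC reporting moves (and gets reacted to) daily; that is the clock mismatch the post describes.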
I haven't seen A/B testing itself (as opposed to the change being tested) hurt CAC, and it would be very hard to prove anyway. Meta is a very complex beast, and attributing a spike in CAC to a launched test ignores the hundred other variables in play at the same moment. And even if that were the case, that's the whole point of experimentation: changes are going to be introduced either way, so why not try to understand their effect, learn, and iterate?

What is a valid concern is that CR/ARPU uplifts don't necessarily translate into better CACs or ROAS. In theory, increasing CVR by 10% should decrease CPA by ~10%, but in reality the effect is hard to isolate and probably not linear, especially when the effect is negative.

Recently, I ran an experiment that generated a 10% increase in ARPU (statistically significant, correctly powered), driven by a 25% increase in AOV and a 15% decrease in CVR. In theory, it worked. But when we pushed the change to production, CACs increased 25% instead of the expected 10%. My theory is that Meta penalizes your site for converting worse by charging higher CPMs, so you end up getting hit twice. An Intelligems A/B test can never catch that, because it happens before the randomization occurs.

There's also a conspiracy theory going around that if your ARPU improves because of a test, Meta will detect it and, since you're now able to pay more, raise your CACs. I want to believe that's not the case, but I have my doubts.

So yes, observing how CAC/ROAS behave after a big experiment ships to production is important, but again, causality is hard to prove, especially when the effect is small (3–5%).
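A toy calculation to make the "hit twice" mechanism above concrete, assuming CAC is simply paid cost per session divided by CVR. The 15% CVR drop is the figure from the comment; the 6% CPM increase is a hypothetical value chosen only to show how the two effects compound, not an observed number.

```python
# Toy model: CAC = cost per session / CVR, so a CVR drop and a CPM increase multiply.
# The 15% CVR drop matches the comment above; the 6% CPM bump is a made-up assumption.
baseline_cost_per_session = 1.00   # normalized paid cost to land one session
baseline_cvr = 0.03                # assumed baseline conversion rate

def cac(cost_per_session: float, cvr: float) -> float:
    """Customer acquisition cost under the toy model."""
    return cost_per_session / cvr

base = cac(baseline_cost_per_session, baseline_cvr)
cvr_only = cac(baseline_cost_per_session, baseline_cvr * 0.85)             # CVR -15%
cvr_plus_cpm = cac(baseline_cost_per_session * 1.06, baseline_cvr * 0.85)  # plus CPM +6%

print(f"CVR -15% alone:     CAC {cvr_only / base - 1:+.1%}")      # about +18%
print(f"CVR -15%, CPM +6%:  CAC {cvr_plus_cpm / base - 1:+.1%}")  # about +25%
```

Under these assumptions the CVR drop alone only explains roughly a +18% CAC move; a fairly small CPM penalty stacked on top is enough to reach the ~25% observed after launch, and because the CPM change happens before visitors are randomized, a pre-randomization split test has no way to attribute it.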
Juan Cruz Giusto
Level 1 • 2 points to level up
@juan-cruz-giusto-4567
Head of eCom @ Mars Men

Active 3d ago
Joined May 5, 2026