How long should you let a split test run?
Just wrote this up based on a question I got yesterday and thought it would be useful for you guys! This is always a fun question because there isn't a clear answer and there's a lot of nuance.

First and foremost, we need to make sure the changes we make don't HARM conversion rate. That will happen about 50% of the time. The trick is we don't know which times that's gonna be… so we have to test.

Obviously, the more data we have the better. But we don't want to run tests for months and months. Ask any statistician if you have enough data and they'll always say more is better. But we can't let tests run forever, so we need to compromise and be OK with some level of uncertainty. At the same time, running a test for one single day also doesn't feel right (for reasons we'll go over). So the optimal strategy must be somewhere in the middle.

Let's go over some of the competing interests:

✅ Volume of visitors in the test - We don't want to run a test to 20 visitors and decide the variant is a winner because it has one more conversion than the control. More data is almost certainly better for certainty that a variant is indeed better than the control.

✅ Difference in conversion rate - A control that has 1% CVR and a variant that has 4% CVR requires less data to be certain we have an improvement in conversion rate. By the same token, if you have a 1% vs. 1.1% conversion rate, you're going to need a lot of data to be confident that difference isn't due to random chance (see the sketch after this list).

✅ Product pricing/AOV - Higher-ticket products can have a lot more variability day to day. If you have a product that's more expensive, generally that means there's a longer buying cycle. If your average buying cycle from click to buy is 7 days, you don't want to make a decision after 4 days - you haven't even let one business cycle run through yet.

✅ Getting a representative sample of traffic (days of week) - Similar to the above, when we're making long-term predictions about conversion rate differences, we need to make sure we have a sample that's close to our long-term traffic. Would you want to poll a random set of Americans to make predictions about the Japanese economy? So when running a split test, we want to make sure we're running it during a relatively normal time period AND account for different traffic throughout the week.
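To make that "1% vs. 1.1%" point concrete, here's a minimal Python sketch (my own illustration, not from the original post) using the standard normal-approximation sample-size formula for comparing two proportions. The alpha and power defaults are common conventions, not anything the post specifies.

```python
# Rough sample-size estimate for a two-proportion split test
# (normal approximation - a sketch, not a substitute for your testing tool's math).
from scipy.stats import norm

def visitors_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed in EACH variant to detect the
    difference between two conversion rates at the given alpha/power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_variant * (1 - p_variant)) ** 0.5) ** 2
    return int(numerator / (p_control - p_variant) ** 2) + 1

print(visitors_per_variant(0.01, 0.04))   # big gap: ~424 visitors per variant
print(visitors_per_variant(0.01, 0.011))  # tiny gap: ~163,000 per variant
```

Huge lifts resolve in a few hundred visitors, while a 0.1-point difference takes months of traffic for most stores - which is exactly why the "how long" answer depends on the size of the effect you care about detecting.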
Split Testing Images on Sales pages
Hey guys! We just got a 34% lift by split testing the image on a sales page for a health brand and wanted to report back on it.

The importance of the above-the-fold image on your landing pages can't be overstated. They're also some of the easiest tests to run… even if you put zero thought into it. To be honest, randomly testing images on your LPs is probably a good use of your time. As in, putting 30 seconds of thought into it and testing will probably get you results. But if you want to put 10 minutes of thought into it, you can use the following framework for a test: "aspirational" vs. "identifiable".

Aspirational images appeal to the end result/the person they will become by using the product. They showcase what and who your customer WANTS to be. If you sell skincare, this would be showing a young and attractive woman or man with perfect skin.

Identifiable images appeal to who the customer currently is.

Prevailing wisdom would say the aspirational one should win out. I mean, isn't the whole point of product marketing to show what the person can become if they buy a product? The truth is, that depends on the confidence of the avatar. Some markets and avatars are so mistrusting and jaded from trying dozens of solutions that they don't even believe they can get to the end goal. If you show them an aspirational image, it's just going to turn them off. If you're dealing with an insecure market, an identifiable image would likely be more appropriate.

So which image won in the test I referenced above? Aspirational. My theory is that the brand has a pretty clear unique mechanism with a ton of trust built into the product. Even jaded and sophisticated prospects believe the results.

Sidenote: you can use both aspirational and identifiable images in the same above-the-fold. Before-and-after images oftentimes show both - the before is identifiable, the after is aspirational. Showing the transformation builds trust.

The beauty of split testing is it puts all the armchair philosophizing to bed… even though I love armchair philosophizing about CRO. Ultimately, the market decides. What we think doesn't matter.
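Since "the market decides," one quick way to quantify that decision is a Bayesian read on the test counts. The numbers below are invented for illustration (the post doesn't share raw data), and this is just one common way to score a split test, not necessarily how this particular test was run.

```python
# P(variant beats control) from a Beta-Bernoulli model - a quick
# illustration with made-up counts, not the brand's actual data.
import numpy as np

rng = np.random.default_rng(0)

def prob_variant_beats_control(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Draw posterior CVRs (uniform Beta(1,1) priors) and compare them."""
    control = rng.beta(1 + conv_a, 1 + n_a - conv_a, draws)
    variant = rng.beta(1 + conv_b, 1 + n_b - conv_b, draws)
    return (variant > control).mean()

# e.g. 2,000 visitors per arm: control converts 3.0%, the aspirational
# image converts 4.0% - roughly the 34% relative lift from the post.
print(prob_variant_beats_control(60, 2000, 80, 2000))  # ≈ 0.96
```

A readout like "96% chance the aspirational image is better" is a lot harder to armchair-philosophize away than a gut feeling about avatars.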
Some FREE recruitment nuggets
Hey everyone! After being in the recruiting business for over 10 years, I have just launched a GUIDE ON TALENT ACQUISITION in the marketing & eCommerce space. This guide is a great resource to help you save the $15,000+ you'd spend on a wrong hire. For some time, it was only available to our clients, but now I'd like to share it with you too! If this is something you might need now or in the near future, drop me a message here and I'll send it over! P.S. Anyone want to network on a Zoom call? 😉
Facebook Group Engagement Farming 101
Posted this fun little square post to a big Facebook group for entrepreneurs. The comments are unhinged lol. You get seen a lot more when you're polarizing, but the price is that people start fuming at you. If you don't care about the opinions of people who aren't your dream client, you can post things like this and play your lyre like Nero as the FB group burns 🔥 What's the ROI, you ask? Zero? 🤷🏻‍♂️ I find it a little fun though.
Generating $70k+ Incremental Revenue from 1 Split Test
What's up guys! I was sharing a split-test result with Tobias from a webinar opt-in page we ran, and wanted to post it here.

Long and short of it:
- Opt-in % = same
- Book Call % = 36% increase
- High-Ticket Revenue = 300% increase
- Incremental revenue increase: $70k+ (as of today)

Now the cool part about this test is that it literally was just the opt-in page. We changed the headline and the bullet points - nothing else. We've seen this a BUNCH of times. *Your opt-in/landing page directly affects the QUALITY of leads - even if it doesn't affect the quantity.* This test showcases why even having the same (or lower) opt-in rate can in fact be better.

I can't share the exact copy change for privacy reasons, but I'll describe it:

The control headline was talking about an outcome: "How to [get desired result]". It was a pretty general headline - not bad, but just very straight to the point.

The variant headline introduced a mechanism: "This new [mechanism] is a way to [solve sophisticated problem] to [get desired result] sustainably".

What *likely* happened here is that the headline introducing a mechanism attracted a more sophisticated lead - someone who has done research, is knowledgeable, and has tried a few solutions before. We changed the bullet points in the variant to speak to misconceptions - misconceptions that a more sophisticated audience would believe. Example:
- Why [this thing every competitor tells you to do] is actually wrong and hurting your progress
- How [this thing you don't want to do but think is helping] is unnecessary

And that was really it. We got the idea from reading customer feedback and applications. Is it good copywriting? Yeah, for sure. Is it going to impress other marketers? Probably not. Is the market impressed by it? Definitely!

The key with this is to split-test the result with software and then track it on the backend via Hyros or whatever attribution platform you use. Hope this is helpful!
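For the backend tracking step, here's a hedged sketch of the kind of significance check you'd run on the book-call rate once the attribution data is in. All counts are hypothetical (the post doesn't publish raw numbers), and your testing software likely does this math for you.

```python
# Two-proportion z-test on a backend metric (book-call rate).
# Counts below are hypothetical - the post doesn't share raw data.
from scipy.stats import norm

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return z, 2 * (1 - norm.cdf(abs(z)))           # z-score, p-value

# e.g. 500 opt-ins per arm; control books 50 calls (10%),
# variant books 68 (13.6% - roughly the +36% lift from the post).
z, p = two_proportion_z(50, 500, 68, 500)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 1.76, p ≈ 0.08 with these counts
```

With these made-up counts the lift is promising but not yet significant, which ties back to the first post's point: even a 36% relative lift on a lower-volume backend metric can need another business cycle or two of data before you call it.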