A/B testing, in which a brand runs two versions of a web or mobile page against each other to compare a single variation, has long been a reliable method for increasing conversions. It lets the facts of consumer behavior drive decisions rather than the individual taste of the marketing or design team. But lately there has been some discussion about the efficacy of A/B testing. Why would we, an A/B testing solution, dig into these articles? Because it’s important to understand where other A/B tests are tripping up and how to correct those errors. This week we’ll answer the question, “Is A/B testing still effective?”
Late last year, Optimizely phased out its original free plan in favor of a new platform called Optimizely X. This platform is expensive, and it essentially requires customers to pay more for the same services. Writer Yvonne Koloczek argues that this shift in Optimizely’s focus also has broader implications for A/B testing in general; she claims that A/B testing is dead. The fault she finds is with uninformed A/B test implementation and analysis, which meant that many A/B tests on the platform never delivered a significant ROI. She writes, “The truth is, A/B tests are often set up incorrectly, polluted by external factors, or never reach statistical significance.”
The Takeaway: Koloczek argues that most users don’t get results from A/B testing because of incorrect implementation or analysis.
“Most A/B Tests Don’t Produce Significant Results: Experimenting remains a crucial aspect of marketing,” eMarketer
Writer Ross Benes explores why, even though two-thirds of marketers use A/B testing, many A/B tests fail to produce significant results. Yet despite the headline, Benes actually champions A/B testing as a necessary way for brands to understand user behavior and protect against revenue loss when trying to improve any aspect of their digital channels. He writes, “In some instances, A/B testing call-to-action features and ad headlines can save marketers 40% of their media budget.” What it comes down to for Benes is being smart and strategic about how you direct traffic during testing in order to produce significant results.
The Takeaway: The reason most A/B tests don’t produce significant results is twofold. First, that is an unrealistic expectation, since part of A/B testing is discovering what is significant. Second, you can’t get significant results without properly directing and segmenting traffic.
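To make “statistical significance” concrete: whether a test reaches significance depends on both the size of the lift and how much traffic each variation receives. Below is a minimal sketch of a standard two-proportion z-test (normal approximation), with made-up visitor and conversion counts, to show how a real-looking lift can still land right at the edge of significance when traffic is limited. The numbers are purely illustrative, not SiteSpect data or a SiteSpect API.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test using the normal approximation.

    conv_a / n_a: conversions and visitors for variation A (control)
    conv_b / n_b: conversions and visitors for variation B (challenger)
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability of the standard normal.
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical 50/50 split: 5,000 visitors per variation,
# 4.0% vs. 4.8% conversion.
z, p = two_proportion_z_test(200, 5000, 240, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative counts the p-value falls just above the conventional 0.05 threshold: a 20% relative lift that still fails to reach significance, which is exactly the trap both writers describe when traffic isn’t directed and segmented properly.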
A Little Point of View
Both of these writers’ points about A/B testing focus on execution rather than on inherent value. The problems and risks they point out have much more to do with issues like inaccurate data, insignificant sample sets, or unrealistic expectations. But your testing solution should solve those problems. For example, SiteSpect offers full implementation support so that your tests run properly. We also ensure accurate and up-to-date data, so you know what is happening, and when. If an A/B test is proving insignificant, you can abandon it in the moment. But, as Benes says, “similar to how scientists learn from their failed experiments, marketers can learn from A/B tests that didn’t yield anything.” The insignificant results are significant too!
Even more important than whether your results are significant is the information you pick up along the way, if your solution can track it. With good, full-stack A/B testing, you should be able to gather information along the entire funnel. Perhaps your call-to-action variation didn’t yield significant results, but you learn how consumers behave at every step leading up to that point. That is invaluable information, and it is why A/B testing is still critical for digital success.
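The funnel-level insight described above can be sketched very simply: even when the final conversion metric is flat, step-to-step conversion rates show where users drop off. The counts and step names below are hypothetical, just to illustrate the calculation.

```python
# Hypothetical visitor counts observed at each funnel step
# (made-up numbers, not real test data).
funnel = [
    ("landing", 10000),
    ("product_page", 6200),
    ("add_to_cart", 1800),
    ("checkout", 900),
    ("purchase", 620),
]

def funnel_rates(funnel):
    """Conversion rate from each step to the next, plus the overall rate."""
    steps = []
    for (name, count), (next_name, next_count) in zip(funnel, funnel[1:]):
        steps.append((f"{name} -> {next_name}", next_count / count))
    overall = funnel[-1][1] / funnel[0][1]
    return steps, overall

steps, overall = funnel_rates(funnel)
for label, rate in steps:
    print(f"{label}: {rate:.1%}")
print(f"overall: {overall:.1%}")
```

In this made-up example the steepest drop is from product page to add-to-cart, which is exactly the kind of mid-funnel finding a headline conversion metric alone would miss.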
To learn more about SiteSpect, visit our website.