When you first start managing an A/B testing and optimization program, the possibilities can be equal parts exciting and overwhelming. Chances are you’ve had a running list of improvements you want to make on your site and are eager to finally be able to accomplish your goals.
However, despite the temptation to jump into everything all at once, it’s important to take your optimization strategy step by step, as every change to your site may affect other parts of the user experience. But, if you’ve already gone ahead and launched a number of simultaneous A/B tests and are now realizing that they are affecting each other in unforeseen ways, it’s okay. Here are some steps you can take to make use of your learnings and move forward strategically.
Pause, Prioritize, and Reassess
Usually when taking on too much causes a problem, it’s because you’re simultaneously A/B testing multiple parts of the conversion flow, which obscures the reasons behind the changes you’re seeing. For example, suppose you have the following A/B tests all running at the same time, with the corresponding results:
- A homepage A/B test that has the effect of increasing clicks on one particular product category page
- An A/B test on your product category pages that highlights featured items and increases adds to cart for those items
- An A/B test in the cart that makes last-minute add ons more personalized, and increases average order value
At first glance, these are all great, successful A/B tests with promising results. That very well may be true, but now there are additional factors to contend with. You know what happened in each of these A/B tests, but you’ll have new questions about why. Are the increased adds-to-cart because of the change on the product category pages, or because of an influx of traffic to the one category page highlighted in your homepage A/B test? Is your average order value up because of the new add-on promotion, or because more users are seeing your product category pages?
The combination of factors can make your understanding of why the user journey is changing murky. If you encounter this type of confusion, you can always pause your A/B tests and reprioritize. One way to do this is to map out your typical user journey. Are most users entering on the homepage? If so, focus on just that A/B test until you have more confidence in your results. Or do most of your users enter directly on a product category page? If that’s the case, focus there instead. Either way, map out your customer journey and restart your A/B testing sequence from the beginning. This will give you greater clarity about what is causing the changes you see on your site as a result of your A/B tests.
Of course, overlapping traffic is only a problem if users are assigned to multiple campaigns. If your traffic levels allow it, another option is to run all three A/B tests at once, but split traffic between them. Each user will be assigned to only one campaign, rather than possibly seeing all three at once. Each campaign will likely take longer to reach a mature conclusion, since each will receive less traffic, but you’ll be able to assess all three changes in isolation. The results of each A/B test can then help you determine which areas to prioritize next.
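The mutually exclusive split described above can be sketched with deterministic hashing: each user ID always maps to the same single campaign, so no one sees two tests at once. This is a minimal illustration with made-up campaign names, not SiteSpect’s actual assignment mechanism.

```python
import hashlib

# Hypothetical campaign names -- substitute your own.
CAMPAIGNS = ["homepage_test", "category_page_test", "cart_addon_test"]

def assign_campaign(user_id: str) -> str:
    """Deterministically assign each user to exactly one campaign.

    Hashing the user ID (rather than randomizing per visit) keeps the
    assignment stable across sessions, so a returning user never
    drifts between campaigns.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(CAMPAIGNS)
    return CAMPAIGNS[bucket]
```

Because the split is a pure function of the user ID, `assign_campaign("user-42")` returns the same campaign on every call, and each user’s journey is only ever influenced by one test.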
Become a Segmentation Pro
Segmentation is a critical part of any results analysis for your A/B test. But especially when you find yourself with overlapping A/B tests that muddy your understanding of the user journey, it’s time to segment your results to get some clarity. Put simply, if you’re newer to data analysis, segmentation means separating groups of users based on a common factor: device type, action taken on the site, or entrance page, for example. This is straightforward to set up; in SiteSpect you can easily create segments based on any number of factors.
For the above situation, where you have your three overlapping A/B tests, you can gain some clarity on your results by segmenting based on other actions the user has taken on the site. For example, for your A/B test on the product category page, try excluding users who have clicked on the homepage feature you’re also A/B testing. This way, you can see the impact of the product category page A/B test without the additional weight of the homepage A/B test. Or, in the cart, try looking only at users who actually clicked on a last minute add-on item. This way, you can see how this particular feature directly affected average order value.
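As a rough sketch of that exclusion segment (using a made-up event log with illustrative field names, not SiteSpect’s reporting interface), you would filter out anyone exposed to the homepage feature before computing the category-page metric:

```python
# Hypothetical per-user event records; field names are illustrative only.
users = [
    {"id": "u1", "clicked_homepage_feature": True,  "added_to_cart": True},
    {"id": "u2", "clicked_homepage_feature": False, "added_to_cart": True},
    {"id": "u3", "clicked_homepage_feature": False, "added_to_cart": False},
    {"id": "u4", "clicked_homepage_feature": True,  "added_to_cart": False},
]

# Segment: users who were NOT exposed to the homepage feature,
# isolating the category-page test's own effect on add-to-cart rate.
segment = [u for u in users if not u["clicked_homepage_feature"]]
add_to_cart_rate = sum(u["added_to_cart"] for u in segment) / len(segment)
print(f"Add-to-cart rate, homepage feature excluded: {add_to_cart_rate:.0%}")
```

The same pattern inverts for the cart example: keep only users who clicked a last-minute add-on, then compare their average order value against everyone else’s.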
In general, it’s best practice to always segment your data to get the clearest picture possible. But especially if you make the same mistake I have and take on too much at once, segmentation can be a lifesaver.
To learn more about SiteSpect, visit our website.