A/B test campaigns often aim to improve the online user experience in order to increase key performance indicators such as conversion and engagement. But what happens when our ideas for improvement, tested in A/B campaigns, produce unexpected results? What happens when conversion and other KPIs do not move in any direction? A common saying in the optimization world is that "no test in which you learn something is a bad test." When our great idea, presented in some variation, does worse than control or shows no real change, we still learn the painful lesson that our great idea was not really that great. But before moving on, I encourage you to dig deeper into your analytics and consider any lurking variables that might have influenced the results.
Good user experience has many components, and one of them is page performance. Speed is a major influence on user behavior: end users today expect fast websites, to the point that slower page response times measurably increase page abandonment. Site speed is also a factor in Google's search ranking algorithm. Given how much page performance matters today, you should always be measuring speed. But how does this relate to A/B testing?
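Measuring speed can start very simply: time how long it takes to get a page back. The sketch below is a minimal, illustrative Python example (the `fetch_page` callable is a hypothetical stand-in for whatever actually retrieves the page, such as an HTTP request or a headless-browser load); it is not taken from any particular analytics product.

```python
import time

def measure_response_time(fetch_page):
    """Time an arbitrary page-fetch callable; return (result, elapsed seconds).

    `fetch_page` is a placeholder for the real retrieval step --
    an HTTP request, a headless-browser page load, etc.
    """
    start = time.perf_counter()
    result = fetch_page()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Illustrative stand-in for a real page fetch: sleep ~50 ms, return markup.
def fake_fetch():
    time.sleep(0.05)
    return "<html>...</html>"

body, seconds = measure_response_time(fake_fetch)
```

In a real deployment you would log `seconds` alongside the A/B campaign's variation identifier, so every conversion data point carries its page-speed context.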
Consider the following example. Your users often say they would like to see more product results on your search page. This feedback turns into an A/B test where one variation shows twice as many products on search pages. Inside your campaign, you capture user engagement metrics, add-to-cart clicks, and completed purchases to make sure this change in user experience is improving business metrics. After running the campaign for some time, the results are negative: engagement and conversion drop. What happened? You learned that increasing search results on the page was not beneficial to your business, but do you take the time to dig deeper and figure out why?
The example about increasing results on search pages is actually a well-known A/B test that was run by Google. The variation, which increased the number of search results on the page, caused a decrease in conversion, and digging deeper into the data revealed that the drop was attributable to longer page load times caused by doubling the content. The important takeaway is that website speed is a key factor in user experience, and analyzing performance metrics alongside your tests can reveal why a specific variation did not show positive results.
Some of our clients are now adding page load response points into all of their A/B test campaigns so that they can always track page speed alongside the changes in each variation. Others have taken the bolder step of purposely adding latency to their pages in A/B test campaigns to directly measure the impact of speed on user behavior. The message is clear: users care about performance, and website changes that degrade page responsiveness can lower business metrics.
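The latency-injection approach can be sketched as two pieces: deterministic bucketing (so a user always sees the same experience) and an artificial delay applied only to the variation. The Python below is an illustrative sketch under those assumptions; all names (`assign_bucket`, `serve_page`, the salt, the delay value) are hypothetical, not from any specific testing platform.

```python
import hashlib
import time

def assign_bucket(user_id: str, salt: str = "latency-test") -> str:
    """Deterministically assign a user to 'control' or 'variation'.

    Hashing user_id with a per-experiment salt keeps the assignment
    stable across visits without storing any state.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "variation" if int(digest, 16) % 2 else "control"

def serve_page(user_id: str, render, added_latency_s: float = 0.2):
    """Render the page, deliberately delaying the variation bucket."""
    bucket = assign_bucket(user_id)
    if bucket == "variation":
        time.sleep(added_latency_s)  # purposely degrade responsiveness
    return bucket, render()

bucket, html = serve_page("user-123", lambda: "<html>...</html>",
                          added_latency_s=0.01)
```

Comparing conversion between the two buckets then gives a direct estimate of how much a given amount of extra latency costs the business.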