Page Performance and A/B Tests

By SiteSpect Marketing

May 9, 2013

A/B test campaigns often aim to improve the online user experience in order to increase key performance indicators (KPIs) such as conversion and engagement. But what happens when our ideas for improvement, tested in A/B test campaigns, produce unexpected results? What happens when conversion or other KPIs do not move at all? There is a common saying in the optimization world that "no test from which you learn something is a bad test." When the great idea behind a variation performs worse than control, or shows no real change, we still learn the painful lesson that our great idea was not so great after all. But before moving on, I encourage you to dig deeper into your analytics data and consider any lurking variables that might have affected the results.

Good user experience has many components, and one of them is page performance. Speed is, in general, a major influencer of user behavior. End users today expect fast websites, to the point that slower page response times measurably increase page abandonment. Site speed is also a factor in Google's search ranking algorithm. The significance of page performance suggests that you should always be measuring speed. But how does this relate to A/B testing?
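
As a minimal sketch of what "always measuring speed" can look like on the client side, the snippet below uses the browser's standard Navigation Timing API to record DOM-ready and load times and report them to an analytics endpoint. The endpoint path and metric names here are hypothetical placeholders, not part of any specific product's API.

```typescript
// Minimal sketch: capture page timing with the Navigation Timing API and
// report it. The "/analytics/timing" endpoint and metric names are
// hypothetical placeholders.
function reportPageTiming(): void {
  // Wait until the load event has fired so loadEventEnd is populated.
  window.addEventListener("load", () => {
    // Give the browser a tick to finalize loadEventEnd.
    setTimeout(() => {
      const entries = performance.getEntriesByType(
        "navigation"
      ) as PerformanceNavigationTiming[];
      const nav = entries[0];
      if (!nav) return;

      const metrics = {
        domReadyMs: nav.domContentLoadedEventEnd - nav.startTime,
        loadMs: nav.loadEventEnd - nav.startTime,
      };

      // sendBeacon does not block the UI and survives page unloads.
      navigator.sendBeacon("/analytics/timing", JSON.stringify(metrics));
    }, 0);
  });
}

reportPageTiming();
```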

Consider the following example. Your users frequently request to see more product results on your search pages. That feedback turns into an A/B test in which one variation shows twice as many products on search pages. Inside your campaign, you capture user engagement metrics, add-to-cart clicks, and completed purchases to make sure that this change in user experience is improving business metrics. After running the campaign for some time, the results are negative: engagement and conversion drop. What happened? You learned that increasing the number of search results on the page was not beneficial to your business. But do you take the time to dig deeper and figure out why?
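
To make the campaign concrete, here is a hedged sketch of how those business metrics might be captured as simple client-side events. The variation flag, CSS selectors, and event endpoint are assumptions for illustration; a testing platform would normally provide this plumbing for you.

```typescript
// Hypothetical sketch of client-side metric capture for the campaign above.
// The data-variation attribute, selectors, and "/analytics/event" endpoint
// are placeholders, not a specific product's API.
type CampaignEvent = "search_engagement" | "add_to_cart" | "purchase";

function trackEvent(event: CampaignEvent, variation: string): void {
  const payload = JSON.stringify({ event, variation, ts: Date.now() });
  navigator.sendBeacon("/analytics/event", payload);
}

// Assume the active variation ("control" or "double_results") is exposed
// on the page by the testing tool.
const variation = document.body.dataset.variation ?? "control";

// Count clicks on any search result as engagement.
document.querySelectorAll(".search-result").forEach((el) =>
  el.addEventListener("click", () => trackEvent("search_engagement", variation))
);

// Count add-to-cart clicks.
document.querySelectorAll(".add-to-cart").forEach((el) =>
  el.addEventListener("click", () => trackEvent("add_to_cart", variation))
);

// On the order confirmation page, record a completed purchase.
if (location.pathname === "/order/confirmation") {
  trackEvent("purchase", variation);
}
```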

Understanding why our ideas fail yields a lot of insight. Segmenting the results by browser could reveal that the negative metrics originate from a particular user group. Maybe that browser, which represents a large portion of your assigned users, hit a JavaScript error introduced by the variation, and that was the true reason for the overall negative results. That is one possible explanation, and perhaps this example can be blamed on poor QA. But if you are measuring the performance of your website, have you checked how the variation compares to control in terms of page speed? Do the variations that try to improve the user experience also increase page load and ready times?
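
As an illustration of the kind of segmentation that surfaces this, here is a small, hypothetical sketch that breaks conversion rate out by browser and variation from exported campaign records. The record shape is an assumption, not the export format of any particular analytics or testing tool.

```typescript
// Hypothetical sketch: segment conversion rate by browser and variation.
// The VisitRecord shape is assumed for illustration only.
interface VisitRecord {
  browser: string;    // e.g. "Chrome", "Firefox", "IE8"
  variation: string;  // e.g. "control", "double_results"
  converted: boolean;
}

function conversionBySegment(records: VisitRecord[]): Map<string, number> {
  const visits = new Map<string, number>();
  const conversions = new Map<string, number>();

  for (const r of records) {
    const key = `${r.browser} / ${r.variation}`;
    visits.set(key, (visits.get(key) ?? 0) + 1);
    if (r.converted) {
      conversions.set(key, (conversions.get(key) ?? 0) + 1);
    }
  }

  const rates = new Map<string, number>();
  for (const [key, total] of visits) {
    rates.set(key, (conversions.get(key) ?? 0) / total);
  }
  return rates;
}

// A browser whose variation rate collapses while its control rate holds
// steady (for example, after a JavaScript error in that browser) stands
// out immediately in this kind of breakdown.
```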

The example of increasing results on search pages is actually a well-known A/B test run by Google. The variation, which increased the number of search results on the page, caused a decrease in conversion, and digging deeper into the data revealed that the drop was attributable to the longer page load time that came with twice the content. The important takeaway is that website speed is a key factor in user experience, and analyzing performance metrics alongside A/B testing can reveal why a specific variation did not show positive results.

Some of our clients are actually adding page load response points into all of their A/B test campaigns so that they can always track page speed alongside the changes in their variations. Others have taken an even bolder step and purposely add latency to their pages in A/B test campaigns to directly measure the impact of speed on user behavior. The message is clear: users care about performance, and website changes that degrade page responsiveness can lower business metrics.
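
As a rough sketch of what "purposely adding latency" can mean on the client side, the variation below hides the page until an artificial delay has elapsed. The delay value and the way the variation is flagged are assumptions for illustration; a testing platform may also offer server-side ways to introduce latency.

```typescript
// Rough sketch of deliberately delaying rendering in one test variation to
// measure the effect of latency on user behavior. The delay value and the
// data-variation flag are assumptions, not a specific product's feature.
const ARTIFICIAL_DELAY_MS = 500;

function applyLatencyVariation(): void {
  // Hide content before first paint, then reveal it after the artificial
  // delay so the user perceives a slower page.
  document.documentElement.style.visibility = "hidden";
  window.setTimeout(() => {
    document.documentElement.style.visibility = "visible";
  }, ARTIFICIAL_DELAY_MS);
}

// Only the "added_latency" variation gets the delay; control is untouched.
if (document.documentElement.dataset.variation === "added_latency") {
  applyLatencyVariation();
}
```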

To learn more about SiteSpect, visit our website.
