What To Do With A/B Test Results? How To Iterate for Optimization

By Luke Hardwick

March 27, 2019


We’ve talked about how to choose your first A/B test, but what about your second, third, fourth, or fifth A/B test? Where do you go from there? In this blog I’ll walk through how to iterate in a way that’s productive, instructive, and gets you the most value out of your A/B testing program.


Interpreting Your Analytics

If you’re new to A/B testing, it can feel like a letdown when your first A/B test doesn’t show a huge impact on your KPI. But ConversionXL lays out the numbers nicely in this article, showing that anywhere from 50% to 80% of A/B test results are inconclusive (read: still valid). Other CRO experts estimate that only about 1 in 7 A/B tests has a clear winner. Even so, it’s easy to focus only on big wins or big risk mitigations that reach statistical significance. It’s crucial to remember that flat results are still valid and relevant.

Once you get into the rhythm of A/B testing, these flat results can actually lay the groundwork for your more impactful A/B tests. The learnings you gather allow you to home in on the parts of your site that affect user behavior the most. To achieve this, though, you need to plan an effective campaign structure with a strong hypothesis and consider a wide variety of metrics and user segments. Let’s look at an example:

Example A/B Test:


A/B Test: Reduce your checkout flow from three pages to one page

Metrics: Add to cart, purchase, time on page

Segments: Browser, device type, location, new users, return users

In the above example, the big hope is that a shorter checkout flow will reduce friction and increase purchases. But when the A/B test reaches maturity, you find that your KPI numbers haven’t moved significantly. At first glance it looks like the checkout flow made no difference, but now let’s look at the other metrics and segments:

Across All Segments:

Purchase: flat

Add to Cart: flat

Time on Page (first page of checkout flow): increased

By Browser and Device Type:

Purchase: flat across Chrome, Firefox, and Safari; decreased on desktop, increased on tablet and mobile

Add to Cart: flat across devices and browsers

Time on page: flat across browsers, increased on desktop, decreased on mobile and tablet

By User Type:

Purchase: flat for return users but increased for new users

Add to Cart: flat

Time on page: flat for return users but decreased for new users

So while our A/B test came back flat on the overall metrics, we actually learned some really valuable information about specific segments of users; the “flat” result was deceptive. From the data, we can see that the variation actually improved the experience for mobile users, tablet users, and new users, making their checkout flow more successful. Desktop and return users, however, kept getting stuck and abandoning the new checkout page. It’s important to note that in this example, each subset of data had a large enough sample size to reach significance.
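To confirm that, you can run a quick significance check per segment. Below is a minimal sketch in Python using statsmodels’ two-proportion z-test; the segment names and counts are hypothetical placeholders, not figures from the example above.

```python
# Minimal sketch: check the purchase metric for significance in each segment.
# The (purchases, visitors) counts below are illustrative placeholders only.
from statsmodels.stats.proportion import proportions_ztest

segments = {
    "desktop":  {"control": (540, 12000), "variation": (480, 12000)},
    "mobile":   {"control": (310, 11000), "variation": (365, 11000)},
    "new_user": {"control": (280, 9000),  "variation": (330, 9000)},
}

for name, data in segments.items():
    successes = [data["variation"][0], data["control"][0]]
    samples = [data["variation"][1], data["control"][1]]
    stat, p_value = proportions_ztest(successes, samples)
    lift = successes[0] / samples[0] - successes[1] / samples[1]
    print(f"{name}: lift {lift:+.2%}, p-value {p_value:.3f}")
```

Any segment whose sample is too small to reach significance should be treated as directional at best, not as a finding.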

How to Iterate

Now you can iterate on your original hypothesis. Common wisdom suggests that shorter checkout flows are more effective, but now you know that isn’t true in this instance. Since users are getting stuck here, you might try breaking the checkout flow into two pages (rather than the original three), redesigning the look of the checkout, or even lengthening the flow across more pages. Alternatively, could you get away with requiring fewer steps at checkout? Maybe the key isn’t the number of pages but the amount of information required. Are there optional fields you could remove? Some fields may cause more friction than others. Tracking which fields throw validation errors most often in the first version of your campaign would give you further areas to explore in the next iteration.
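If your analytics tool can export error events, a quick aggregation shows which fields hurt most. Here is a minimal sketch in Python; the event structure and field names are hypothetical, so adapt it to whatever your platform actually exports.

```python
# Minimal sketch: count checkout validation errors by form field,
# assuming each error is logged as an event with a "field" property.
from collections import Counter

error_events = [
    {"field": "postal_code", "page": "checkout"},
    {"field": "card_number", "page": "checkout"},
    {"field": "postal_code", "page": "checkout"},
    # ... the rest of your exported error events
]

errors_by_field = Counter(event["field"] for event in error_events)
for field, count in errors_by_field.most_common():
    print(f"{field}: {count} validation errors")
```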

Just because you lifted conversions on mobile doesn’t mean you’re done. Fewer checkout pages succeeded here, but the findings suggest a different approach for desktop. Now you’re ready to dive into your next A/B test.

Communicating Your “Flat” Learnings

Even in companies with the most mature and robust A/B testing programs, optimization specialists struggle to communicate the value of flat learnings across departments. When people hear the word “optimize,” they expect a string of big wins. To wrap up this blog, I’ll give some tips for how to communicate the value of these learnings outside of the optimization team.

One of the most valuable things you can do for your team is to make sure you don’t put all of your eggs in one basket with a single primary variant and a single primary KPI. These are experiments, and you never know for sure which metrics will see the biggest impact or which variations will move the needle. If you knew that, you wouldn’t need to A/B test. So when your report looks flat, there are some important pieces to highlight in your communications.

First off, a flat result is still a valid result. You may not have proved your hypothesis, but the business now has a choice: revert to the control, or roll out the variant with the understanding that it won’t shift the conversion needle and is a conversion-neutral option.

When you present your results, make sure your analytics show the full picture: don’t present just one metric or one segment. While you don’t want to overcomplicate your findings, you also don’t want to undersell the value of your experiment. If the total conversion metrics didn’t move significantly, what about specific segments of users? What did the secondary metrics show? Bounce rate, time on page, and error counts are likely to point to how to evolve your hypothesis and iterate on your campaign, giving you actionable next steps to reach your goal.
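One way to show that full picture is a single summary table of relative lift per metric, per segment. Here is a minimal sketch using pandas; the column names and numbers are hypothetical placeholders for your own exported results.

```python
# Minimal sketch: relative lift per metric, per segment, in one table.
# Values are illustrative placeholders, not real test results.
import pandas as pd

results = pd.DataFrame([
    {"segment": "all",     "metric": "purchase",     "control": 0.045, "variation": 0.045},
    {"segment": "mobile",  "metric": "purchase",     "control": 0.028, "variation": 0.033},
    {"segment": "desktop", "metric": "time_on_page", "control": 96.0,  "variation": 118.0},
])

results["relative_lift"] = (results["variation"] - results["control"]) / results["control"]
print(results.to_string(index=False, float_format=lambda v: f"{v:.3f}"))
```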

To support your optimization efforts, you might also consider each A/B test as a step toward a broader goal. For example, if you set out to optimize your checkout flow, that goal will entail a series of A/B tests, variations, and analyses. The cumulative impact and ROI can be impressive, but it takes time to really understand how users behave on your site and how to improve the experience to reach that desired result.

To learn more about SiteSpect, visit our website.


Luke Hardwick

Luke Hardwick is a Manager of Customer Success at SiteSpect, consulting for SiteSpect users on their optimization and personalization road maps and projects. Luke is based in London and gained experience as a conversion rate optimization specialist across many software platforms before landing at SiteSpect.

