How to Solve Common Mistakes in A/B Testing: Setting the Wrong Metrics

By Ruby Brown

March 26, 2021


Starting an A/B testing and personalization program is hard. Whether you're building a program from scratch or stepping into a new role on a team with an established optimization practice, there can be a pretty steep learning curve. Not only do you need to strategize in a new way and learn a new technology, you also need to look at your sites in completely new ways, and hopefully you get the chance to gather a lot more data than you could before. It's a lot to take on, and mistakes are bound to be made. But that's okay! Very rarely is all lost. In this blog series, I'm going to walk through common mistakes we all make and how to correct them on the fly. Use it to avoid making the same mistakes I have, and to save your project when you make them anyway. This week, let's talk about what happens when you set the wrong metrics.

Setting the Wrong Metrics

It takes some experience to get really good at knowing all of the metrics you should measure for each campaign. Of course, it's better to err on the side of collecting too many metrics than too few (we typically recommend measuring anything a user might do on your site), but doing this aimlessly, without any methodology, can make it harder to analyze your data later. The much more common mistake, though, is to focus only on the big, flashy metrics and end up with only part of the picture of your customer journey. You find out what users are doing on your site, but without those supporting metrics it's much harder to discover why.

For example, our website, sitespect.com, features a demo request form. Of course, we want to know how many people submit demo requests. A common mistake would be to track only that metric in an A/B test and leave out the metrics that capture micro conversions: clicks on the demo request button, pageviews of the demo request page, clicks on other links leading to the demo request page, and maybe even a pagepath metric that lets you see every page each user visited. For a retailer, micro conversions might include searches, product detail page views, category page views, or adds to cart.
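
As a rough illustration, you might sketch the metric plan for this test as a simple checklist before you build anything. The sketch below uses Python and purely hypothetical metric names; it is not SiteSpect configuration syntax.

```python
# A sketch of a metric plan for the demo-request example above.
# Metric names are hypothetical placeholders, not a real SiteSpect API.
METRIC_PLAN = {
    "primary": [
        "demo_request_submitted",      # the macro conversion we ultimately care about
    ],
    "micro_conversions": [
        "demo_request_button_click",   # clicks on the demo request button
        "demo_request_pageview",       # views of the demo request page
        "demo_request_link_click",     # clicks on other links leading to that page
        "page_path",                   # every page each user visited, in order
    ],
}
```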

So, why are these peripheral metrics so important? Let's say you implement an A/B test and one variation sees a 10% increase in demo request submissions, but that same variation also sees a 5% drop in the demo request page's conversion rate (submissions divided by visits to that page). This means that more traffic is reaching your demo request page, but a smaller percentage of those visitors is actually converting. From here, you have a clearer view of how to iterate. You can focus on driving traffic to the original demo request page and see if you can match that 10% increase in submissions, or you can optimize the higher-traffic page to lift its conversion rate. If you only had the metric for submitted demo requests, you would count the A/B test as a win and stop there, missing an accurate understanding of how users are responding and of what is actually causing the uptick in submissions.
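
To make the arithmetic concrete, here is a tiny worked example with made-up numbers (a sketch, not data from a real test): if the variation pushes roughly 16% more visitors to the demo request page while that page converts 5% worse, total submissions still rise by about 10%.

```python
# Made-up numbers illustrating how the two metrics fit together.
baseline_visitors = 1000                 # visitors reaching the demo request page (control)
baseline_rate = 0.20                     # demo request conversion rate on that page (control)
baseline_submissions = baseline_visitors * baseline_rate      # 200 submissions

variation_rate = baseline_rate * 0.95                          # 5% relative drop -> 0.19
variation_submissions = baseline_submissions * 1.10            # 10% lift -> 220
variation_visitors = variation_submissions / variation_rate    # ~1158 visitors

print(f"Traffic to the demo request page rose ~{variation_visitors / baseline_visitors - 1:.0%},")
print(f"but only {variation_rate:.0%} of those visitors converted, down from {baseline_rate:.0%}.")
```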

How to Fill in the Data Gaps

Now for the important step. You've run your A/B test for several weeks and realized that you did, in fact, miss metrics that fill in the story of customer behavior. You have some options. First of all, you still got some great data! It's important to know that whatever you changed affected at least one important metric. Your first option is simple: continue running the A/B test and add the additional metrics as you realize you need them. When you analyze your data in another few weeks, just make sure you've kept track of the date each new metric was added so you can look at its data from that date forward.
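
If you keep that log of dates, the later analysis is just a filtering step. Here is a minimal sketch using pandas; the file name, column names, and metric names are all hypothetical stand-ins for whatever your own export looks like.

```python
import pandas as pd

# Hypothetical per-visit export of metric hits; names are placeholders for your own data.
hits = pd.read_csv("ab_test_hits.csv", parse_dates=["visit_date"])

# The date each late-added metric started collecting data, from your test log.
metric_added = {"demo_request_button_click": "2021-02-15"}

# Only analyze a late-added metric from the date it was turned on.
metric = "demo_request_button_click"
valid = hits[(hits["metric"] == metric) &
             (hits["visit_date"] >= pd.Timestamp(metric_added[metric]))]

# Unique visitors who hit the metric, per variation, over the valid window.
print(valid.groupby("variation")["visitor_id"].nunique())
```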

Alternatively, you can take what you've learned and carry it into your next A/B test. You may not have all the detail you want, but you can still start to iterate. In the example above, where I only measured submitted demo requests, I still know a few crucial things. First, leads are important to my bottom line, so I can call this a tentative success and set up another A/B test to gather the additional information. Second, something about my variation caused more people to sign up for demos, and no matter what, I will always want to keep improving that conversion rate. So, if you need to keep moving forward and can't afford to spend a few more weeks on this A/B test, take your winning variation and focus on improving conversion rate.

Finally, if you don't want to go with either of those routes, you can always set up or add to a validation campaign. This is a campaign that doesn't apply any changes; it just collects data. It's a great way to gather long-term baseline metrics, and it's an opportunity to fill in that missing data if you realize you need a fuller picture. You can also use this data to perform full segmentation, looking at the group of users who did what you wanted them to do and then connecting the dots to see what other actions correlate with a successful conversion. You can (and should) do this type of segmentation on your A/B test data as well.
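
One rough way to do that kind of segmentation offline, assuming you can export per-visitor data (the file and column names below are hypothetical), is to split visitors by whether they converted and compare how often each group performed the supporting actions.

```python
import pandas as pd

# Hypothetical export: one row per visitor, one 0/1 column per tracked action.
visitors = pd.read_csv("validation_campaign_visitors.csv")

# Split visitors by whether they completed the conversion you care about...
converted = visitors["demo_request_submitted"] == 1

# ...and compare how often each group performed the supporting micro conversions.
micro_conversions = ["demo_request_pageview", "demo_request_button_click", "site_search"]
comparison = visitors.groupby(converted)[micro_conversions].mean()
print(comparison)  # rows: converted False/True; values: share of visitors doing each action
```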

With any of these methods, you can still report on your results. Your data is still valid even if you have unanswered questions, and you can build some morale and momentum as long as you understand where the holes are and have a plan to fill them in. Ideally, this will happen less often as you gain experience. But if you find yourself in this particular predicament, don't worry: sometimes you don't know what you missed until after the fact.

And remember that with SiteSpect, you're never alone. You'll be partnered with an Optimization Consultant who will help you plan, execute, and analyze all of your experiments.

To learn more about SiteSpect, visit our website.
