By Kevin Plankey
May 4, 2023
Several months ago, I posted about mistakes to avoid when A/B testing. Over the next few weeks, I want to dig into a few of those mistakes in more detail. This week, we’ll talk about setting the wrong metrics.
Setting the Wrong Metrics
It takes some experience to know which metrics to measure in an experiment. It’s generally better to err on the side of caution and collect too many metrics than too few; we recommend measuring anything a shopper might do on your site. That said, collecting metrics aimlessly, without any methodology, can make your data harder to analyze later. The more common mistake, though, is to focus only on the big, flashy metrics and end up with only part of the picture. Those metrics will tell you “what” shoppers are doing on your site, but without the supporting metrics, it’s a lot harder to discover the “why”.
Let’s say you’ve run your A/B test for several weeks, and realize that you’ve been missing out on metrics that could fill in the story of shopper behavior. All is not lost – you have options:
- You can close out the first test, take what you’ve learned, and carry it into a subsequent A/B test. You may not have all the detail you wanted from the first test, but you can still start to iterate.
- Continue running the A/B test and add the additional metrics you need. When you come back to analyze your data in a few weeks, make sure you’ve kept track of the date on which you added the new metrics, and look at your data from that date forward so every metric covers the same window.
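The second option above, analyzing only from the date the new metrics went live, can be sketched in a few lines. This is a minimal illustration, not SiteSpect functionality; the event log, metric names, and cutoff date are all hypothetical.

```python
from datetime import date

# Hypothetical event log: each row is (date, metric_name, value).
# Assume "add_to_cart" was only instrumented starting on the cutoff date.
events = [
    (date(2023, 4, 1), "checkout", 1),
    (date(2023, 4, 10), "add_to_cart", 1),
    (date(2023, 4, 12), "add_to_cart", 1),
    (date(2023, 4, 12), "checkout", 1),
]

METRIC_ADDED_ON = date(2023, 4, 10)  # the date the new metric went live

def rows_for_analysis(rows, cutoff):
    """Keep only rows on or after the cutoff so old and new
    metrics are compared over the same time window."""
    return [r for r in rows if r[0] >= cutoff]

comparable = rows_for_analysis(events, METRIC_ADDED_ON)
# Only the rows from April 10 onward remain, so the original
# metrics and the late-added metric now span the same period.
```

The point of the cutoff is fairness: if you compare a metric collected for six weeks against one collected for two, the older metric will look inflated simply because it had more time to accumulate.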
If neither of these options appeals to you, you can always set up or add to a validation campaign. A validation campaign doesn’t apply changes to an experience; it just collects data. It’s a great way to gather long-term baseline metrics, and it’s an opportunity to fill in that missing data if you realize you need a more complete picture. You can also use this data to perform full segmentation: look at the groups of shoppers who did the things you wanted them to do, then connect the dots to see what other actions correlated with that successful conversion.
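That segmentation idea, comparing the actions of shoppers who converted against those who didn’t, can be sketched as follows. This is a simplified, hypothetical example (the session data and action names are invented), not a description of how any particular platform stores its data.

```python
from collections import Counter

# Hypothetical validation-campaign data: one record per shopper session,
# listing the actions taken and whether the session converted.
sessions = [
    {"actions": ["search", "add_to_cart", "checkout"], "converted": True},
    {"actions": ["search", "view_reviews"], "converted": False},
    {"actions": ["view_reviews", "add_to_cart", "checkout"], "converted": True},
    {"actions": ["search"], "converted": False},
]

def action_rates(sessions, converted):
    """Share of sessions in the segment (converters or non-converters)
    that included each action at least once."""
    segment = [s for s in sessions if s["converted"] == converted]
    counts = Counter(a for s in segment for a in set(s["actions"]))
    return {a: n / len(segment) for a, n in counts.items()}

converters = action_rates(sessions, True)
non_converters = action_rates(sessions, False)
# Comparing the two dicts highlights actions that show up far more
# often in converting sessions; those are candidates for the "why".
```

Actions with a much higher rate among converters than non-converters are the ones worth investigating and testing in your next experiment; correlation here is a lead, not proof of causation.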
With any of these methods, you can still report your results. Your data is still valid even if you have unanswered questions, and you can build momentum if you understand where the holes lie and have a plan to fill them in. Ideally, this will happen less as you gain experience. But if you find yourself in this predicament, don’t worry: sometimes you don’t know what you’ve missed until after the fact.
Finally, remember that with SiteSpect, you’re never alone. You’ll be partnered with an optimization consultant who will help you plan, execute, and analyze all your experiments.
Speak with an experimentation expert to see a personalized product demo. It will be time well spent!