How many times have you looked at results from an A/B test, only to realize that something went wrong? Some of your metrics don’t have data, certain segments of users look off, the sample size is much lower than you expected, etc. Where do you go from here? Cut your losses, spend additional time and resources to relaunch the A/B test, and try not to think about your backlog of A/B test priorities that is slipping yet again. It’s a very frustrating experience!
How do we prevent this from happening in the first place? Are there tools that can help you along the way? Let’s have a look.
An A/B Testing and Personalization Rollout Strategy
The first step in implementing a risk-averse campaign monitoring strategy is to be methodical about how you approach campaign rollouts. This will help clarify what monitoring measures you’ll need to put in place. A good way to do this is from a risk standpoint:
- High risk: changes to critical parts of the user experience, or significant changes to the experience overall. These may include above-the-fold homepage changes, checkout flow changes, lead generation flow changes, recommendation algorithm improvements, infrastructure rollouts, or multi-page changes.
- Medium risk: lower-complexity changes to less critical, but still highly visible, parts of the user experience, such as subcategory pages, below-the-fold content, or single-page changes.
- Low risk: simple modifications in low-visibility areas of the website or application, such as button color, label, or text changes.
The higher the risk, the more conservative the rollout strategy should be. Consider rolling out high-risk campaigns at a low percentage of traffic, monitoring them daily, and adjusting traffic accordingly. If anything looks off, pause the campaign, troubleshoot, and unpause once you are confident the issue has been resolved. A reference model for high-risk campaigns is to start at 10% of traffic and progressively increase to 20%, 30%, 50%, 75%, and 100%.
For medium-risk campaigns, start at 25% traffic, then go straight to 50% and 100%. For low-risk campaigns, launch directly at 100% traffic. In every case, keep monitoring your campaign results.
Pro tip: To simplify your data analysis, keep the split even across your variations and instead adjust the percentage of traffic that goes to your A/B test.
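The ramp-up model above can be sketched as a simple lookup. This is a minimal illustration, not a SiteSpect API; the schedule values mirror the reference model in the text, and the function name is hypothetical.

```python
# Hypothetical ramp-up schedules by campaign risk tier (percent of traffic).
# The numbers mirror the reference model above; tune them to your own risk tolerance.
RAMP_SCHEDULES = {
    "high": [10, 20, 30, 50, 75, 100],
    "medium": [25, 50, 100],
    "low": [100],
}

def next_traffic_step(risk: str, current_pct: int) -> int:
    """Return the next traffic percentage for a campaign, or the current
    value if the campaign is already fully rolled out."""
    for step in RAMP_SCHEDULES[risk]:
        if step > current_pct:
            return step
    return current_pct

# Example: a high-risk campaign at 30% of traffic moves to 50% next.
print(next_traffic_step("high", 30))  # 50
```

The point of encoding the schedule is discipline: each increase only happens after a monitoring check passes, rather than whenever someone remembers to bump the number.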
As you can see, in order to successfully increase the percentage of traffic in each campaign, you need to be vigilant about monitoring data as it comes in. So does this mean you should just log in to your A/B testing tool daily and check the performance of each campaign and variation? Not necessarily.
What to Consider When Setting Up Campaign Monitoring
Unfortunately, monitoring your campaigns for safe and effective rollouts isn’t just about checking the performance of your KPIs (though of course that is important too). Yes, if your metrics are hurting, you’ll want to adjust your strategy, but that assumes everything is working as it should. This type of monitoring can miss some major problems. For example:
- An unrelated code release or site change could inadvertently break one (or some) of your metrics or part of your campaign, causing them to no longer collect data.
- A newly published URL, or an error in a campaign’s setup, could cause variations to apply to the wrong set of pages or users.
- Users may be counted or bucketed inaccurately.
- Your A/B test hypothesis may prove incorrect, and you therefore see a drop in performance for important metrics.
- An A/B test may negatively impact page performance by introducing latency.
Feeling overwhelmed about everything that could go wrong? Don’t worry! That’s why there are monitoring strategies you can adopt to make sure these things don’t negatively impact your site and your A/B testing strategy.
Alerts for Better Campaign Monitoring
It’s absolutely a good idea to log in to your A/B testing platform and check on your metrics regularly for each active campaign. However, this isn’t always enough to catch the above errors. Instead, you’ll want to set up active alerts so that nothing slips under the radar. The most helpful A/B testing and personalization alerts to consider are:
- Campaign has no visits: This usually means something is wrong with the campaign’s segmentation, page application, or triggers. This is easier to miss than you would think, especially if the campaign had previously collected a lot of data or seen a lot of traffic.
- Metric has no visits: This probably indicates that the metric isn’t triggering properly. Of course, you may have metrics that you don’t expect to get much traffic. In that case you can simply turn the alert off or lower the alert threshold.
- Campaign disablement: Disablements often happen when your website has changed since a campaign was pushed to live traffic. If your campaign is disabled for any reason, you want to know right away. You don’t want to wait until the next day when you go to write up a report for your campaign.
- Winning or underperforming metrics: A clear winner or an underperformer can equate to a lot of revenue won or lost. Knowing results sooner means you can maximize success and minimize losses.
- Slow performing campaigns: Is your campaign introducing slowness or latency with a poorly designed experience? Review how the experience is built to reduce its impact on Core Web Vitals. SiteSpect Real User Monitoring tracks site performance for you in the same interface as your optimization platform.
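The alert checklist above can be expressed as a small daily check. This is a hedged sketch, not SiteSpect's alerting API: the `stats` dict shape, field names, and the 500 ms latency threshold are all assumptions standing in for whatever your platform actually reports.

```python
# Minimal sketch of automated campaign alert checks, assuming a hypothetical
# `stats` dict pulled daily from your A/B testing platform.
def check_campaign(stats: dict) -> list[str]:
    """Return a list of alert messages for one campaign's daily stats."""
    alerts = []
    if stats.get("visits", 0) == 0:
        alerts.append("Campaign has no visits: check segmentation, "
                      "page application, and triggers.")
    for name, visits in stats.get("metric_visits", {}).items():
        if visits == 0:
            alerts.append(f"Metric '{name}' has no visits: it may not "
                          "be triggering properly.")
    if stats.get("disabled", False):
        alerts.append("Campaign was disabled: investigate immediately.")
    if stats.get("p95_latency_ms", 0) > 500:  # threshold is an assumption
        alerts.append("Campaign pages are slow: review the experience "
                      "for performance impact.")
    return alerts

# Example: a healthy campaign except for one silent metric.
print(check_campaign({
    "visits": 1200,
    "metric_visits": {"checkout_complete": 0, "add_to_cart": 340},
    "disabled": False,
    "p95_latency_ms": 210,
}))
```

Running checks like these on a schedule (and routing the messages to email or chat) is what turns "log in and look" into alerting you can rely on.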
A well designed A/B testing & personalization dashboard can really help surface alerts and other relevant data points to help you stay on top of your optimization program.
On top of setting up these automatic alerts, tools like Real User Monitoring can help you track overall site performance, as well as site performance related to specific campaigns.
Focus on Optimization, Not Monitoring
If you set up slow, risk-averse rollouts, intelligent alerts for monitoring, and good data review practices, you can focus your energy on creating optimized, personalized experiences. This should be both a key criterion when evaluating optimization platforms and a priority when setting up new campaigns. Spend your resources on developing ideas and A/B test hypotheses and on moving your strategy forward, confident that your campaigns are running smoothly.
To learn more about SiteSpect, visit our website.