A Guide to A/B Testing With Low Traffic

September 23, 2020


It goes more or less without saying that when it comes to A/B testing, the more traffic the better. More traffic means it’s faster and easier to reach statistical significance, get results, and make decisions. But not all businesses thrive on immense traffic numbers, and even brands that do may encounter circumstances where certain experiments simply don’t get much traffic volume. For example, sites selling high-priced or high-end products typically see a lower volume of purchases than stores selling low-priced essentials. Or, an A/B test or code release intended mainly to verify functionality may roll out to fewer visitors than a typical, conversion-focused experiment.

Does that mean that A/B testing and optimization isn’t for them? The answer is a resounding no. Almost any site can benefit from A/B testing and personalization, but there are a few specific approaches you’ll want to keep in mind if you find you have lower traffic volume.

The Challenge with Reaching Statistical Significance

The greatest challenge of A/B testing with low traffic is reaching statistical significance. So first, what exactly is statistical significance? When you collect data from a sample, there is a possibility that your results are due to chance or a sampling error. Statistical significance is a calculation that estimates how likely it is that your results reflect an actual trend rather than an accident of chance. But remember, each value of each measurement (Add to Cart Totals, Add to Cart Uniques, etc.) has its own statistical significance calculation. SiteSpect calculates this for you for each metric value within each campaign.
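SiteSpect computes significance for you, but to build intuition, here is a minimal sketch of one standard approach, a two-proportion z-test (an illustrative assumption, not necessarily SiteSpect's exact method). It shows why the same lift that is inconclusive on a few hundred visitors becomes significant on a few thousand:

```python
from math import sqrt, erf

def significance(conv_a, n_a, conv_b, n_b):
    """Confidence that two conversion rates truly differ,
    via a two-sided two-proportion z-test (sketch only)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF; return 1 - p as "confidence"
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# The same 4% -> 5% lift, at two traffic levels (hypothetical numbers):
print(f"{significance(20, 500, 25, 500):.0%}")      # 500 visitors per arm
print(f"{significance(400, 10000, 500, 10000):.0%}") # 10,000 per arm
```

With 500 visitors per arm the result stays well below a typical 90-95% threshold; at 10,000 per arm the identical lift clears it easily. That gap is exactly the low-traffic problem.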

Significance is pretty important when it comes to making big, revenue-impacting changes to your site. However, in cases where you have lower traffic — either because of the circumstances of a particular experiment or because of the nature of your site — reaching significance can pose some problems. It can take so long to collect enough data that the A/B test becomes irrelevant or impractical, or it may mean you can only run one A/B test at a time, limiting your agility and capacity to respond to consumer needs.

Strategies for Successful Testing

There are a few approaches you can take for A/B testing on a small set of traffic, but overall your strategy has to be a little different than it would be with seemingly unlimited traffic.

1. Focus on micro-conversions:

When A/B testing, it’s important to measure each step of the user journey. Each metric measured has its own sample set and its own statistical significance calculation. Though there may not be enough traffic to measure a lift in orders placed at statistical significance, there may be enough to measure product views, adds to cart, and many other behaviors between landing page and thank you page. When you measure several points along the user journey, you can see the change in user behavior and correlate these clues toward the complete picture, even though the final metric may lack sufficient data.
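The idea above can be sketched numerically. In this hypothetical funnel (invented numbers, and a generic two-proportion z-test rather than any specific vendor's calculation), high-volume upper-funnel metrics reach significance while the low-volume orders metric does not, yet all three point the same direction:

```python
from math import sqrt, erf

def confidence(x_a, n_a, x_b, n_b):
    """Confidence two proportions differ (two-proportion z-test sketch)."""
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = ((x_b / n_b) - (x_a / n_a)) / se
    return erf(abs(z) / sqrt(2))  # equals 1 - (two-sided p-value)

# Hypothetical funnel: each metric has its own sample and significance.
# Tuples are (conversions_A, visitors_A, conversions_B, visitors_B).
funnel = {
    "product views": (1500, 5000, 1650, 5000),
    "adds to cart":  (400, 5000, 455, 5000),
    "orders placed": (60, 5000, 70, 5000),
}
for metric, counts in funnel.items():
    print(f"{metric}: {confidence(*counts):.0%} confidence")
```

Product views clear a 95% threshold, adds to cart sit right around it, and orders stay in the 60s: the micro-conversions carry the signal the final metric can't yet support.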

2. Look for “better/worse” rather than precise measures:

Sometimes, behavior lifts can be judged as “better/worse” at a lower statistical significance threshold than if a precise lift needs to be known. A lift that is varying between, say, 5% and 10%, where the statistical significance is growing but still a few points shy of 90%, could be taken as “positive” long before it can be pinned down to a precise figure like 6.25%. Remember, it’s important that the change across the entire user journey makes sense. Sometimes precision is important, and sometimes it is not.
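One way to see why the directional call comes sooner: the confidence that a variation is better at all (a one-sided question) is always higher than the confidence that the measured lift itself is real (two-sided). A minimal sketch, with invented numbers and a generic z-test:

```python
from math import sqrt, erf

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def lift_confidence(x_a, n_a, x_b, n_b):
    """Return (directional, two_sided) confidence for B vs A."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    directional = norm_cdf(z)          # confidence B is better at all
    two_sided = erf(abs(z) / sqrt(2))  # confidence the measured lift is real
    return directional, two_sided

# Hypothetical: 8.0% vs 9.5% conversion on 1,000 visitors per arm
directional, two_sided = lift_confidence(80, 1000, 95, 1000)
print(f"better/worse: {directional:.0%}, precise lift: {two_sided:.0%}")
```

On this sample you can be meaningfully more confident that B is positive than you can be about its exact size, which is the trade this strategy makes.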

3. A/B test fewer items at a time:

While you may be able to overlay campaigns in many circumstances, in general when you have less traffic, you’ll want to maximize the number of users in each campaign. This means you may want to run fewer campaigns at a time. But don’t be discouraged: you can still make a big impact.

4. A/B test for functionality:

Especially if you’re A/B testing a new release, code update, or feature, you may be less concerned about the conversion impact than you are about ensuring everything functions properly. Focusing on metrics like site speed can give you the confidence you need without needing to wait for definitive conversion metrics.

5. “No harm done” A/B testing:

It’s not uncommon for branding, inventory, or other corporate changes to come with necessary site changes — meaning changes that need to happen regardless of their impact on the site. However, it’s still critical to measure the impact of these changes. One way to do this with lighter traffic is to look for “no harm done.” A lack of change will be apparent more quickly than a clear winner or loser, and this way you can be sure that these changes don’t harm the business. If it turns out that you see a negative impact, you can then tweak the implementation until you reach a steady-state result.
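A “no harm done” check maps naturally onto what statisticians call a non-inferiority test: rather than asking whether B beats A, you ask how confident you are that B is no worse than A by more than some margin you choose. This framing (the margin, numbers, and helper below are all illustrative assumptions, not a SiteSpect feature) reaches a usable answer on less traffic than a winner/loser call:

```python
from math import sqrt, erf

def no_harm_confidence(x_a, n_a, x_b, n_b, margin=0.01):
    """Confidence that B's conversion rate is no worse than A's by
    more than `margin` (absolute) — a non-inferiority sketch."""
    p_a, p_b = x_a / n_a, x_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a + margin) / se
    return 0.5 * (1 + erf(z / sqrt(2)))

# Flat hypothetical result: 4.0% vs 4.1% on 2,000 visitors per arm,
# with "harm" defined as losing more than 1 percentage point
print(f"{no_harm_confidence(80, 2000, 82, 2000):.0%}")
```

With a flat result like this, you can be over 90% confident the change isn’t costing you more than the chosen margin, even at a sample size far too small to declare a winner.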

So, if you find yourself wondering whether your site has enough traffic to warrant A/B testing and optimization, the answer is almost certainly yes. You may need to tailor your approach, but more data on user behavior means more power to improve your site and the customer experience.

To learn more about SiteSpect, visit our website.


Paul Terry is a SiteSpect Consultant in Customer Support, guiding SiteSpect users on the road to optimization. He has over 15 years of experience in optimization, testing, and personalization. He is based in Duluth, Georgia.
