How Do You A/B Test and Measure for Customer Satisfaction?

By Luke Hardwick

August 4, 2021


A/B testing and optimization are all about making data-driven decisions. But what happens if you want to focus on something a little less tangible, like customer satisfaction? What is a satisfied customer? You can sometimes tie satisfaction to concrete behaviors like purchases or subscriptions. But as anyone who has shopped online knows, you can have a great experience without converting. Maybe you're researching before buying, or maybe you enjoy browsing until you're ready to subscribe. These customer behaviors are still positive and worth fostering. But without direct metrics, are you left to just hope that your customers are satisfied? The answer is a resounding no. There are strategies you can employ and metrics you can use that will help you gauge and improve customer satisfaction as part of your A/B testing and optimization program.

Metrics to measure customer satisfaction

Many of our customers have implemented A/B tests designed to improve customer satisfaction by focusing on metrics such as return visits and calls to customer service, and by introducing feedback points throughout the customer journey.

For example, one of our customers conducted an A/B test designed to measure how a change to their checkout process would impact customer service calls. This brand specializes in home fixtures, so their items often require a level of delivery coordination. They used to have an optional phone number field in their checkout process, but after making this field mandatory they saw post-purchase customer service calls decrease by 15%. A relatively minor website change ended up improving the customer experience significantly, in a quantifiable way.

Another customer, Corendon, creatively used A/B testing to increase their qualitative feedback. Like many brands, Corendon depends on this customer-generated qualitative feedback, in addition to quantitative data, to improve the customer experience. However, they found that users only submitted feedback via their forms when there was a major problem, not for minor points of friction. They A/B tested embedding short, yes-or-no customer feedback questions into the customer journey, which increased feedback submissions without decreasing conversions.

Finally, you may want to measure purchases, return visits, and order value or revenue per visit among users in each variation. If return visits go up while purchase-related metrics remain stable or also increase, then you can infer that your customers are satisfied with their experience. If you see a drop in return visits, or a drop in purchases or revenue, then you may be missing something that your users are looking for.
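As a minimal sketch of this kind of per-variation comparison, the snippet below summarizes return-visit rate and revenue per visit from a small set of made-up visit records. The field names (`variation`, `user_id`, `revenue`, `is_return`) are illustrative assumptions, not tied to any particular analytics tool:

```python
from collections import defaultdict

# Hypothetical visit logs: each record is one visit attributed to a variation.
visits = [
    {"variation": "A", "user_id": 1, "revenue": 0.0,  "is_return": False},
    {"variation": "A", "user_id": 2, "revenue": 49.0, "is_return": True},
    {"variation": "B", "user_id": 3, "revenue": 0.0,  "is_return": True},
    {"variation": "B", "user_id": 4, "revenue": 75.0, "is_return": True},
]

def satisfaction_metrics(visits):
    """Return-visit rate and revenue per visit, grouped by variation."""
    totals = defaultdict(lambda: {"visits": 0, "returns": 0, "revenue": 0.0})
    for v in visits:
        t = totals[v["variation"]]
        t["visits"] += 1
        t["returns"] += v["is_return"]
        t["revenue"] += v["revenue"]
    return {
        var: {
            "return_visit_rate": t["returns"] / t["visits"],
            "revenue_per_visit": t["revenue"] / t["visits"],
        }
        for var, t in totals.items()
    }

print(satisfaction_metrics(visits))
```

In practice you would pull these records from your testing or analytics platform and also run a significance check before acting on any difference between variations.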

Evaluating customer satisfaction without direct quantitative data

Of course, when you're A/B testing and optimizing your site, you want quantitative data to drive your decisions. But there are circumstances where you'll also want to consider less definite correlative data that still helps fill in the story of your customer journey. You can track certain experience indicators from the time you implement an A/B testing tool or run an A/B test and see if there is a change. For example, if you release a new experience, do you also see a change in your average review rating for the same time period? Are your NPS scores stable, or do you see a change within a given time frame? Did you see an increase in positive social media engagement over time? You could also look at time on site. For example, in a retail environment a faster journey to checkout would imply higher customer satisfaction, while on a blog or content-driven site, more time on site would imply higher customer satisfaction.

These metrics aren't perfect, since they are difficult to control. There are many reasons people leave reviews, and a change could stem from something external, like a marketing campaign. But keeping track of these correlative trends can help you get a sense of the overall feeling of your customers toward your brand.
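One simple way to track such a correlative trend is to compare an indicator, such as average review rating, before and after a release date. The sketch below uses made-up ratings and an assumed launch date purely for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical release date and (date, rating) review data.
launch = date(2021, 6, 1)
reviews = [
    (date(2021, 5, 10), 4), (date(2021, 5, 20), 3),
    (date(2021, 6, 5), 5),  (date(2021, 6, 15), 4),
]

# Average rating before vs. after the release.
before = mean(r for d, r in reviews if d < launch)
after = mean(r for d, r in reviews if d >= launch)
print(f"avg rating before: {before:.2f}, after: {after:.2f}")
```

A shift in this kind of pre/post average is only a correlative signal, for the reasons above, but it can flag experiences worth a closer look.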

Finally, you should periodically reflect on what an ideal customer journey looks like. For our initial example, that included completing an order and receiving the delivery without needing to call customer service. For your brand, perhaps it entails visiting your site once a week to read new content, or maybe your ideal customer enrolls in a subscription service. These behaviors — rather than just focusing on purchases — can give you great insight into customer satisfaction and how site changes you introduce impact it.

To learn more about SiteSpect, visit our website.


Luke Hardwick

Luke Hardwick is a Manager of Customer Success at SiteSpect, consulting for SiteSpect users on their optimization and personalization road maps and projects. Luke is based in London and worked as a conversion rate optimization specialist with a range of software platforms before landing at SiteSpect.
