Webinar Recap: Fail Fast, Fail Cheap

By Paul Terry

May 18, 2020


While we’re at home right now and unable to interact in the ways we used to, we’ve been hosting casual weekly webinars to keep the communication going and highlight some ways that A/B testing, personalization, and optimization can help your business right now. You can always rewatch these anytime on YouTube. But if you prefer a read, please enjoy the write-up from “Fail Fast, Fail Cheap.” For Q&A, leave a comment on the blog or email info@sitespect.com.

Fail Fast, Fail Cheap

Most businesses that rely on their websites for major business activity, whether primarily for marketing, lead generation, or direct sales, have multiple stakeholders responsible for different sections of the site. Because of this, there are often multiple initiatives for site improvements, and A/B testing offers the flexibility to explore the company’s best ideas without the fear that failure will risk overall company success.
Failing fast is about investing as little as possible early on, and making sure that the investment made will pay off. Measure twice, or perhaps a million times, and cut once with confidence.

Example One: Fake It Till You Make It

Sometimes this means “fake it till you make it.” Rather than build an entirely new subsystem or service for your entire product catalog, for example, it makes better sense to mock up the UI and make it look real, but perhaps only make it available for a single category or a few products. A/B test different layouts and calls to action, analyze the usage and outcomes, and repeat perhaps several times until you’re certain that building it out “for real” will pay off in higher conversion.

Example Two: Release A/B Testing

One of the smartest types of A/B tests is the release A/B test. Rolling out a major site release slowly to a small set of users, then ramping up after confirming that business metrics are maintained, is part of the “secret sauce” of continuous improvement. This type of A/B test can be used for major funnel or page redesigns as well.
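To make the ramp-up idea concrete, here is a minimal sketch of percentage-based release bucketing. This is an illustration only, not SiteSpect’s actual mechanism; the function name, salt, and hash scheme are assumptions. The key property is that assignment is deterministic per user, so the same visitor sees the same version while the rollout percentage is gradually increased.

```python
import hashlib

def in_release_bucket(user_id: str, rollout_pct: float, salt: str = "new-release") -> bool:
    """Deterministically decide whether a user sees the new release.

    Hashing (salt + user_id) keeps the assignment stable across visits,
    so a given user stays in the same bucket as the rollout ramps up.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1)
    return bucket < rollout_pct / 100.0

# Start at 5% of traffic; if business metrics hold, ramp to 25%, 50%, 100%.
users = [f"user-{i}" for i in range(10_000)]
exposed = sum(in_release_bucket(u, 5) for u in users)
print(f"{exposed} of {len(users)} users see the new release")
```

Because bucketing is a pure function of the user ID, raising `rollout_pct` only adds new users to the release; nobody who already saw the new version is flipped back to the old one.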

For example, your product, marketing, and UX teams spec an entirely new, and most definitely improved, product detail page (PDP). This new page will incorporate the latest web tech, delivering a much faster presentation. The product team has incorporated an improved layout, the marketing team has contributed what it deems more effective messaging and more convincing graphics, and the UX and R&D teams have collaborated on the new 3D feature widget that became a pet project of the CIO.

It has taken the dev team 9 months to birth the new PDP, and the data-driven stakeholders are now eager to have it A/B tested, with results available several weeks before the holiday code freeze.

You’ve now invested hundreds of thousands of dollars, and gobs of pride from many company stakeholders, based on some data, some experience, and a lot of guts and glitter.

Unfortunately, when we A/B test this new PDP, with its 20 changes against the control, we can determine with 95% statistical significance and to 5 decimal places that it failed, but perhaps not why it failed. Or we may never discover that 15 of the changes lifted conversion by 15%, while the other 5 cost us 10% of that back.

Each of our 20 changes should have been A/B tested quickly on its own to hone the ideas, and perhaps several of them in combination afterward. If possible, you should A/B test and deliver technology changes on their own, separate from UX changes. Some UX changes may be inevitable, but mixing in unrelated messaging, layout, or features invariably muddies the waters and diminishes optimization efforts.
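One hedged sketch of what “test each change on its own” can look like: split traffic across a control and several single-change variations, so each change’s effect on conversion can be measured in isolation. The experiment name, variation names, and hash-based assignment below are all illustrative assumptions, not a specific tool’s API.

```python
import hashlib

# Control plus hypothetical single-change variations of the PDP.
VARIATIONS = ["control", "new-layout", "new-messaging", "faster-tech"]

def assign_variation(user_id: str, experiment: str = "pdp-changes") -> str:
    """Deterministically split traffic evenly across the variations.

    Each variation differs from control by a single change, so a lift
    (or drop) in conversion can be attributed to that change alone.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIATIONS[int(digest[:8], 16) % len(VARIATIONS)]

# Roughly even split across 10,000 users.
counts = {v: 0 for v in VARIATIONS}
for i in range(10_000):
    counts[assign_variation(f"user-{i}")] += 1
print(counts)
```

With this structure, the winning single changes can later be combined into one candidate page and re-tested against control before a full rollout.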

Catch the full webinar here.

To learn more about SiteSpect, visit our website.


Paul Terry

Paul Terry is a SiteSpect Consultant in Customer Support, guiding SiteSpect users on the road to optimization. He has over 15 years’ experience in optimization, testing, and personalization. He is based in Duluth, Georgia.
