Product launches are an exciting but hectic time. Marketers are busy preparing communications and campaigns to highlight the new features and benefits, while IT and development teams are double-checking code to ensure deployment goes smoothly. Product Managers are typically focused on talking with customers, writing feature specifications, and managing the process of what makes it into the final release.
Amidst all of this activity is usually some risk. Hoping that a new release will improve conversion just doesn’t cut it anymore. But the good news is that there are ways to both mitigate risk and gain valuable insights without negatively impacting your conversion. The best practice is to expose new features to a subset of your audience and measure behavior up front, before launching to all customers. And SiteSpect can help. In fact, SiteSpect is the only digital optimization solution that enables you to validate a new release with a test audience first.
Let’s walk through an example of how to conduct a feature launch with testing. For this example, we’ll use a web application for a large online retailer.
Before you can test a new feature, figure out the audience for which it is intended so you can target the feature appropriately. Using our example above, if you have redesigned the whole checkout process for visitors on a smartphone, you will likely only want to show the new feature to a subset of smartphone visitors. Then consider segmenting portions of your audience, for example, testing a subset of iPhone 5 and Samsung Galaxy S4 users.
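To make the targeting step concrete, here is a minimal sketch of a smartphone eligibility check based on the visitor’s user-agent string. The function name and regex patterns are illustrative assumptions, not SiteSpect configuration; a real targeting engine (SiteSpect included) offers much richer device and segment detection.

```python
import re

# Illustrative only: crude smartphone detection via user-agent patterns.
# These patterns are assumptions for the sketch, not production rules.
SMARTPHONE_PATTERNS = re.compile(r"iPhone|Android.*Mobile", re.IGNORECASE)

def is_target_audience(user_agent: str) -> bool:
    """Return True if this visitor is eligible to see the new mobile checkout."""
    return bool(SMARTPHONE_PATTERNS.search(user_agent))
```

In practice you would layer further segmentation (device model, geography, logged-in status) on top of a basic eligibility check like this one.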
Once you have identified the audience that will be exposed to the new feature, you can begin to create all of the necessary test elements. Before activating the full campaign, show the new feature to a small subset of your intended audience for testing. This lets you quickly measure KPIs while also checking for broken code or a poor experience.
The size of your test audience for the new feature depends on traffic volume. If your site receives a significant amount of traffic, we recommend rolling out the feature to 1-3% of your traffic. This can always be adjusted if you aren’t getting the necessary traffic to reach statistical significance.
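One common way to hold a stable 1-3% exposure is deterministic hash-based bucketing: the same visitor always lands in the same bucket, so their experience stays consistent across sessions. The function below is a sketch under that assumption; the name, salt, and percentages are illustrative, not how SiteSpect allocates traffic internally.

```python
import hashlib

def in_rollout(visitor_id: str, percent: float, salt: str = "checkout-redesign") -> bool:
    """Deterministically decide whether a visitor sees the new feature.

    Hashing the visitor ID with a per-test salt maps each visitor to a
    stable point in [0, 1]; visitors below the cutoff get the new feature.
    """
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < percent / 100.0
```

Because the assignment is a pure function of the visitor ID, raising the percentage later only adds new visitors to the rollout; nobody who already saw the feature is silently switched back.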
We can’t stress this enough: measurement is the basis for any good testing program. Regardless of whether the results are positive or negative, if you don’t have data that accurately reflects user behavior, then you are simply relying on opinions.
In the example of a web application for a large online retailer, we would want to measure conversion rates, average order values, order values by department, number of hits on the checkout page per department, and more. In a registration-centric application, you may focus more on form submissions and follow-ups. Analyze test results as a whole and by individual segments to see how specific groups of users behaved with the new feature. You can use this data to further refine the feature accordingly.
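The conversion-rate comparison behind these decisions can be sketched as a two-proportion z-test: given conversions and visitors for the control and new-feature groups, it returns the observed lift and a two-sided p-value. This is an illustrative standalone implementation, not SiteSpect’s reporting engine; a real program would typically lean on a statistics library and a pre-registered sample size.

```python
from math import sqrt, erf

def conversion_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare control (a) vs. new-feature (b) conversion rates.

    Returns (lift, p_value) from a pooled two-proportion z-test.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (expressed with erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value
```

Running the same comparison per segment (department, device type, new vs. returning visitors) is what reveals whether the overall number hides a group the feature is hurting.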
After showing the new feature to a small subset of visitors and validating that there is not a negative impact to conversions, increase the exposure to a larger set of the audience. This “soft launch” tactic will ensure that by the time a majority of your audience sees and interacts with the new feature, it will have been refined and perfected. Continue to closely monitor and measure all of the important KPIs of this new feature with an increased number of visitors.
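The soft-launch ramp described above can be sketched as a simple staged schedule: exposure only grows when the KPIs at the current stage look healthy, and drops back to the smallest bucket when they don’t. The stage percentages and the health signal here are assumptions for illustration, not SiteSpect settings.

```python
# Illustrative ramp schedule: percent of the target audience per stage.
RAMP_STAGES = [1, 5, 25, 50, 100]

def next_stage(current_percent: int, kpis_healthy: bool) -> int:
    """Advance one rollout stage when KPIs hold up; otherwise pull back."""
    if not kpis_healthy:
        return RAMP_STAGES[0]  # retreat to the smallest exposure
    later = [p for p in RAMP_STAGES if p > current_percent]
    return later[0] if later else current_percent
```

The point of a schedule like this is that each increase is a deliberate, measured decision rather than a single all-or-nothing launch.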
After your test completes and you have significant results to share, be sure to provide feedback to key stakeholders in your organization. We recommend that you keep a development or product management resource involved in the feedback loop so that as data is provided on new features, improvements can be made and re-tested.
Whether you are releasing a new feature or a major product, these tactics can enable you to gather feedback and data on the impact of the change. This feedback loop is a huge win for Product Managers who can now see immediate data-driven results from any feature at any time.