Analytics-Driven Campaign Monitoring
To get the most out of your testing program, analytics should play a part in every step of the process – from ideation to decision-making. How you leverage data – and when – is key to your success. The following research-backed tips will help you build an analytics framework for your optimization program:
- IDEAS. Gut-based planning and risk-reduction testing are typical ways organizations begin testing. These approaches tend to be focused on validating beliefs, and as such suffer from confirmation bias, according to research published by Brooks Bell. Using a data-driven approach to plan new tests will dramatically increase your chances of finding big success. Analytics data provide insight into customer behavior and help clarify the most effective ways to reach and engage your target audience. Be sure to revisit analysis from both winning and losing campaigns when you’re assessing the potential impact of a new test.
- HYPOTHESIS. A richer hypothesis will help you create a stronger test and will help you defend your idea against a long list of other test ideas. Rich Page recommends listing analytical insights and reasons for each test, and then rating your ideas in terms of both the likely conversion lift value and the difficulty of implementation. The ratings validate that you are not just testing something for the sake of it, and they will be essential for gaining approval, prioritization, and the resources needed to run the test. Make sure your hypothesis includes not only a prediction of the potential outcome but also the basis for that prediction. If you expect the changes being tested on your Home Page to impact conversions, identify and quantify the anticipated behavioral impact: specify which behaviors will change – from the Home Page through the Order Confirmation Page – and by how much. An analytics-driven hypothesis is an important part of a successful testing strategy, not only in terms of your roadmap but also for making decisions during and after each experiment.
- EXPERIMENTAL DESIGN. Several components constitute a good test design: what type of test to run (A/B or multivariate (MVT)), how many variations to test, choosing the relevant audience, defining success criteria, estimating test duration, and scoping the resources required. Analytics from previous tests help guide each of these components. Avoid common missteps like running a series of A/B tests when an MVT would allow you to test multiple changes concurrently, or testing a single change when you could learn a lot more from testing several versions against each other. Often, these small improvements to your experimental design add up to a big difference in the impact and insight gained from each test.
- CAMPAIGNS. Once your test goes live you’ll want to keep an eye on the data to ensure that your experiment is working as expected, but avoid getting too excited or concerned about the early results. Wait until your data falls into a stable, predictable pattern before making any decisions about pruning underperforming versions or declaring winners. The most common misstep at this stage is sharing early results with stakeholders. Watch your critical success metrics closely, but also look for a correlation between the impact on your critical metrics and secondary, supporting metrics. Make sure your data tells a coherent story; avoid focusing on a single metric that doesn’t line up with the others.
- DECISION-MAKING. This part of the framework is about aligning results with your hypothesis and communicating the outcomes. Focus on your prediction of the behavioral and quantitative impact and provide context for your analysis. If your hypothesis was incorrect and the test yielded marginal results, demonstrate what you’ve learned about your customers and what value those learnings provide. If you’ve seen a significant lift in conversions, you can use those learnings to start the cycle again to create follow-up campaign ideas.
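The idea-rating approach described in the HYPOTHESIS step can be sketched in code. As a minimal illustration, this assumes a simple 1–5 scale for both likely lift value and implementation difficulty, and ranks ideas by their value-to-difficulty ratio; the scale, the ratio, and the example idea names are all illustrative assumptions, not part of Rich Page’s recommendation.

```python
def priority_score(likely_lift_value, implementation_difficulty):
    """Higher likely value and lower difficulty yield a higher priority."""
    return likely_lift_value / implementation_difficulty

# Hypothetical backlog: (idea, likely lift value 1-5, difficulty 1-5)
ideas = [
    ("Home Page hero copy", 4, 1),
    ("Checkout form redesign", 5, 4),
    ("Footer link color", 1, 1),
]

# Rank the backlog so the easiest high-value tests surface first
ranked = sorted(ideas, key=lambda i: priority_score(i[1], i[2]), reverse=True)
for name, value, difficulty in ranked:
    print(f"{name}: {priority_score(value, difficulty):.2f}")
```

Recording the score alongside the analytical insight behind each idea gives you a defensible, data-backed ordering when competing for test slots and resources.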
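Estimating test duration, mentioned under EXPERIMENTAL DESIGN, typically starts from a sample-size calculation. The sketch below uses the standard two-proportion formula with z-values for roughly 95% confidence and 80% power; the baseline conversion rate, target lift, and daily traffic figure are hypothetical inputs you would replace with your own analytics data.

```python
from math import sqrt, ceil

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant to detect a change from
    conversion rate p1 to p2 (two-sided, ~95% confidence, ~80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical inputs: 5% baseline conversion, hoping to detect a lift to 6%
n = sample_size_per_variant(0.05, 0.06)

# Assumed traffic of 10,000 eligible visitors/day split across two variants
days = ceil(n * 2 / 10_000)
print(n, days)
```

Note how the required sample grows sharply as the detectable lift shrinks, which is why prior analytics on realistic effect sizes matter when scoping a test.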
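The caution in the CAMPAIGNS step about early results can be made concrete with a significance check. This sketch applies a pooled two-proportion z-test to a hypothetical early peek at campaign data; the conversion counts are invented to show how a large apparent lift on a small sample can still be statistically inconclusive.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test on two conversion rates; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical early peek: 6% vs 10% conversion, only 200 visitors per variant
z, p = two_proportion_z(12, 200, 20, 200)
print(f"z={z:.2f}, p={p:.3f}")
```

Here the variant appears to convert far better, yet the p-value stays well above the conventional 0.05 threshold, which is exactly why declaring winners or briefing stakeholders on early numbers is premature.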
Don’t wait until the end of your optimization process to use analytics. Leveraging data throughout each stage of the process will help you increase the value of your testing program, scaling up the volume and impact of each experiment while improving the efficiency of your optimization team.
Source: SiteSpect, Inc.