Welcome to installment 5 of my 6-part series, “5 Questions You Need to Answer to Optimize an Experience”. In the first 3 blogs, I introduced my approach to optimization, and I walked through ways to answer the questions, Who is the user?, What experience should you show a user?, and How should you deliver that experience? In this blog, I’m walking through how to answer the question, Was the user’s experience any good?
As we all embrace and refine data-driven decision making within our own organizations, we find ourselves more frequently asking the question, “was the user’s experience any good?” A great experience not only drives revenue and engagement in each session, but encourages loyalty and fosters repeat business. Most of us rely on a variety of data sources to tell us whether a given experience is effective. Unfortunately, much of that data isn’t as straightforward as it may seem, and a mistake could steer your organization in the wrong direction. To combat this, you need to dig in and understand how each piece of data is collected, where it came from, and what it tracks. Here are some key points to keep in mind as you assess the effectiveness of user experiences.
ITP, ETP, and What Happened to My Data?
Last year Safari introduced Intelligent Tracking Prevention (ITP), and Firefox introduced Enhanced Tracking Protection (ETP). For in-depth details, review our optimization and browser tracking protection page, but in short, both browsers now prevent cross-site tracking, and Safari now “forgets” users after 24 hours. Microsoft Edge and Google Chrome have followed up with their own anti-tracking updates. These updates don’t affect SiteSpect’s data, but you will need to be aware of discrepancies with any other tool you use that collects data.
This means that tracking a user across multiple sessions is more difficult in these browsers and may be distorting your reports. Many organizations are seeing an increase in first-time users and a decrease in repeat visits, but this doesn’t reflect actual user behavior. Again, your data may look normal, but may not reflect what has actually transpired. If you are unsure whether you have a problem, segment your data by browser to see if your first-time visitor versus repeat visit data is significantly different in each segment.
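As a rough sketch of that segmentation check, the snippet below computes the share of sessions flagged as first-time per browser. The session rows here are hypothetical stand-ins for an analytics export; the field names are assumptions, not any particular tool’s schema.

```python
# Compare first-time vs. repeat visitor share per browser to spot
# tracking-prevention distortion. Session rows below are hypothetical.
from collections import defaultdict

sessions = [
    # (browser, is_first_time_visit)
    ("Safari", True), ("Safari", True), ("Safari", True), ("Safari", False),
    ("Chrome", True), ("Chrome", False), ("Chrome", False), ("Chrome", False),
    ("Firefox", True), ("Firefox", True), ("Firefox", False), ("Firefox", False),
]

def first_time_share_by_browser(rows):
    """Return {browser: fraction of sessions flagged as first-time}."""
    totals, firsts = defaultdict(int), defaultdict(int)
    for browser, is_first in rows:
        totals[browser] += 1
        firsts[browser] += is_first
    return {b: firsts[b] / totals[b] for b in totals}

shares = first_time_share_by_browser(sessions)
for browser, share in sorted(shares.items()):
    print(f"{browser}: {share:.0%} first-time")
```

A markedly higher first-time share in ITP/ETP browsers than elsewhere suggests the metric is reflecting cookie expiry rather than real visitor behavior.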
For more information on our approach to browser tracking prevention, watch this webinar.
The Wrong Experience or the Wrong Delivery?
Finally, if your data suggests a bad user experience, don’t write it off immediately.
Take some time to verify that it was delivered correctly. We know there is a strong correlation between site performance and conversion. If you notice that a new experience is underperforming, look not only at the business KPIs but also at performance metrics from Real User Monitoring. If a poorly performing experience is also loading more slowly, users may be responding to the slowness rather than to the experience itself. At that point, you will want to investigate why the experience was slower. It could be a bug or a limitation of your optimization software. Either way, you need to solve that problem before you can write that experience off as a failure.
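A minimal sketch of that cross-check: summarize mean page load time alongside conversion rate per variant, so a slow-but-losing variant stands out. The RUM records and variant names below are hypothetical, not output from any specific monitoring product.

```python
# Before writing off an underperforming variant, check whether it also
# loaded more slowly. The RUM records below are hypothetical.
from statistics import mean

rum_records = [
    # (variant, page_load_ms, converted)
    ("control", 1800, True), ("control", 2100, False), ("control", 1900, True),
    ("new_experience", 3400, False), ("new_experience", 3100, False),
    ("new_experience", 2900, True),
]

def summarize(records):
    """Per-variant mean load time and conversion rate."""
    out = {}
    for variant in {r[0] for r in records}:
        rows = [r for r in records if r[0] == variant]
        out[variant] = {
            "mean_load_ms": mean(r[1] for r in rows),
            "conversion_rate": sum(r[2] for r in rows) / len(rows),
        }
    return out

summary = summarize(rum_records)
for variant, stats in sorted(summary.items()):
    print(variant, stats)
```

If the losing variant is also markedly slower, investigate delivery first; only once load times are comparable can you fairly judge the experience itself.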
Data-driven decision making isn’t always as straightforward as it seems. When trying to understand whether a user experience was any good, be sure to also interrogate your data and how you collect it.
To learn more about SiteSpect, visit our website.