Monday, September 26, 2016

Why A/B Tests Are Just Part of the Story

Nothing is worse than paying for website visitors who cannot find their way through the checkout process or abandon it partway through. A/B tests and multivariate tests (MVT) are established methods for optimizing web pages and are often treated as the Holy Grail of conversion optimization.



Hypotheses are always created with the goal of better conversion rates or more engagement in mind. Ultimately, the success of repositioning a button, using a different headline, or completely redesigning a landing page is judged on quantitative data.



But how often does the sales promise fail to match what the method actually delivers? Anyone who has run their own tests knows that optimization strategies only succeed when you combine multiple testing methods.



If you want to stay a step ahead and develop truly user-oriented hypotheses, you cannot get around qualitative analysis methods. Only real user behavior tells you how your website is perceived and what your users actually experience.



Do You Ask the Right Questions?

One of these qualitative methods is interaction analysis. Formerly known as mouse tracking, its modern variants can also capture user interactions on mobile devices.



These solutions capture the mouse movements, keyboard entries, and touch interactions of real website visitors and make them visible as recorded session videos. Aggregated views such as heat maps, form analysis, and scroll maps reveal usability obstacles and technical errors, and a drill-down mechanism lets you investigate suspicious interaction patterns such as abandonments.
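
To make this more concrete, here is a minimal sketch of how captured interactions might be represented and aggregated into a click heat map. The event fields and the 50-pixel grid are illustrative assumptions, not the data model of any particular tool.

```python
# Minimal sketch: aggregating captured interaction events into a click heat map.
# The InteractionEvent fields and the 50px grid cells are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    session_id: str
    event_type: str   # e.g. "click", "mousemove", "touch", "keypress"
    x: int            # viewport coordinates in pixels
    y: int
    timestamp_ms: int

def click_heatmap(events, cell_px=50):
    """Count clicks per cell of a coarse grid laid over the page."""
    grid = Counter()
    for e in events:
        if e.event_type == "click":
            grid[(e.x // cell_px, e.y // cell_px)] += 1
    return grid

events = [
    InteractionEvent("s1", "click", 412, 880, 1000),
    InteractionEvent("s1", "mousemove", 300, 500, 1200),
    InteractionEvent("s2", "click", 430, 860, 2500),
]
print(click_heatmap(events))  # Counter({(8, 17): 2}) -> a hot spot around the same element
```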



Before a typical A/B test starts, you take stock: you choose the area that probably has the most potential for reaching your defined targets. Insights from customer behavior data put you a step ahead in this critical phase, because you can measure the potential and relevance of a test beforehand.



For example, if you know that 30% of your visitors never see the CTA, that area is a much stronger candidate for an A/B test than one where only 5% miss it.
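
A simple back-of-the-envelope way to turn such visibility numbers into a test priority is to estimate how many conversions are at stake. The figures and the assumed uplift below are purely illustrative.

```python
# Back-of-the-envelope estimate of how many extra conversions per month are at
# stake if the visitors who currently miss the CTA could see it.
# All inputs are illustrative assumptions, not benchmarks.
def conversions_at_stake(monthly_visitors, share_missing_cta, baseline_cr, assumed_uplift):
    affected = monthly_visitors * share_missing_cta
    return affected * baseline_cr * assumed_uplift

# 100,000 visitors/month, 2% baseline conversion rate, assuming that seeing the
# CTA lifts conversion among the affected visitors by 20%:
print(conversions_at_stake(100_000, 0.30, 0.02, 0.20))  # 120.0 extra conversions
print(conversions_at_stake(100_000, 0.05, 0.02, 0.20))  # 20.0  -> far less potential
```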



Interaction analysis not only compensates for the weaknesses of A/B tests, but also provides valuable insights into user behavior.



Reduce the Number of Test Cycles & Gain Valuable Insights With a Mix of Methods

Valid test results can only be achieved if each hypothesis is checked independently of other influencing factors. Repositioning a CTA and changing a headline at the same time gives you no evidence as to why the click rate is now higher than before, nor does it prove which change is better. In the end there is just a number, leaving open the question of whether the CTA alone would have been more successful than the combination of CTA and headline.



So if you want meaningful A/B tests, you need know-how and patience, because each hypothesis should be tested against a control group. That inevitably means many iterations and long run times to reach significant results, especially with little traffic, which slows down agility and drives up costs.
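
The traffic problem is easy to quantify. The sketch below uses the standard two-proportion sample-size formula with only the standard library; the baseline conversion rate, expected lift, significance level, and power are illustrative assumptions.

```python
# Rough per-variant sample size needed to detect a relative lift in conversion
# rate with a classic two-proportion z-test. Baseline rate, lift, alpha, and
# power below are illustrative assumptions.
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_cr, relative_lift, alpha=0.05, power=0.80):
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# 3% baseline conversion, trying to detect a 10% relative lift:
print(sample_size_per_variant(0.03, 0.10))  # roughly 53,000 visitors per variant
```

With a control plus even one variant, a shop with a few thousand visitors a day quickly ends up in multi-week run times per hypothesis, which is exactly the cost described above.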



Combining tests with real customer behavior data offers a new perspective on this problem. With interaction analysis you can look over your visitors' shoulders while they interact with your A/B test, see which hypothesis holds, and spot where further potential lies. Perhaps you notice that users don't even register your change, that they take note of it, or that they do something else entirely that you didn't expect.



Quantitative test data can be explained by qualitative data, and additional hypotheses can be answered along the way, making it possible to gain insights in a short period of time and avoid expensive dead ends.



Usually, the target metric is a conversion. Interaction analysis removes this limitation and makes it possible to measure softer targets for each variant. For example, engagement can be captured and quantified through mouse movement and/or the average dwell time on specific page areas.
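
As a sketch of what such a soft metric could look like, the snippet below derives the average dwell time per page area from timestamped interaction events. The area names and the event shape are illustrative assumptions.

```python
# Sketch: average dwell time per page area, derived from timestamped events
# that record which area the cursor or finger was over. Area names and the
# tuple layout are illustrative assumptions.
from collections import defaultdict

def avg_dwell_time_ms(events):
    """events: list of (session_id, area, enter_ms, leave_ms) tuples."""
    totals, counts = defaultdict(int), defaultdict(int)
    for _session, area, enter_ms, leave_ms in events:
        totals[area] += leave_ms - enter_ms
        counts[area] += 1
    return {area: totals[area] / counts[area] for area in totals}

events = [
    ("s1", "hero_banner", 0, 4_000),
    ("s1", "cta_section", 4_000, 4_500),
    ("s2", "hero_banner", 0, 6_000),
]
print(avg_dwell_time_ms(events))  # {'hero_banner': 5000.0, 'cta_section': 500.0}
```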



State-of-the-art user experience management applications offer integrations with all common A/B testing tools, enabling you to contrast and evaluate user behavior for each test variant via session replay, heat maps, and funnel analysis.
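
Conceptually, such an integration boils down to attaching the active test variant to each recorded session so that replays, heat maps, and funnels can be filtered per variant. The sketch below illustrates that join with hypothetical data, not a specific vendor's API.

```python
# Sketch: segment recorded sessions by A/B test variant so heat maps and
# funnels can be compared per variant. The data shapes are hypothetical and
# stand in for whatever the testing tool and the UX tool actually expose.
from collections import defaultdict

variant_assignments = {"s1": "control", "s2": "new_cta", "s3": "new_cta"}
recorded_sessions = ["s1", "s2", "s3", "s4"]   # s4 was not part of the test

sessions_by_variant = defaultdict(list)
for session_id in recorded_sessions:
    variant = variant_assignments.get(session_id, "not_in_test")
    sessions_by_variant[variant].append(session_id)

print(dict(sessions_by_variant))
# {'control': ['s1'], 'new_cta': ['s2', 's3'], 'not_in_test': ['s4']}
```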



By combining these methods you can shorten optimization cycles, because insights into user behavior are included at the earliest possible stage and without additional development effort. The combination of A/B tests and interaction analysis is well worth your attention, especially for complex tests with far-reaching content changes and significant programming effort.



Choosing the right customer experience optimization platform involves a lot of additional time and research. If you want help with the process, download the CXO Buyer's Guide.





