What AB Testing Gets Wrong
Creating the perfect message and medium can be challenging for marketers. Simple tweaks in word choice or user interface have the power to attract visitors or drive them away entirely. Gauging the effects of different features can be complex and inconsistent across contexts. Fortunately, AB testing can take the guesswork out of optimizing your online presence and marketing efforts. Unfortunately, it can also leave a lot to be desired if you don’t execute it properly.
What Is AB Testing?
AB testing is simply a feedback experiment to find which iteration (of a message, product, website—anything, really) users like best. The goal of this test is to make your product (software, website, etc.) more appealing, resulting in more clicks and purchases.
An AB test takes two versions of a single variable and compares them: half of the site’s traffic is diverted to your control page and the other half to a modified version, and you measure which one receives more clicks.
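The traffic split itself is simple to implement. Here is a minimal sketch (the function and user-ID format are illustrative, not from any particular testing tool) that assigns each user to the control or the variant by hashing their ID, so a returning user always sees the same version:

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically split users 50/50 between control (A) and variant (B).

    Hashing the user ID, rather than flipping a coin on every visit,
    keeps each user in the same bucket for the life of the test.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Across a large population, roughly half of users land in each bucket.
buckets = [assign_variant(f"user-{i}") for i in range(10_000)]
share_a = buckets.count("A") / len(buckets)
```

The deterministic hash matters: if users bounced between versions on each visit, their behavior would reflect the mix rather than either version alone.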
Successful AB testing can reveal ways in which a website or app can increase your traffic, sales and email distributions. However, modern AB testing should collect more insights than just which version gets the most clicks. Don’t fall prey to the following three shortcomings of an over-simplified AB test.
1. Non-Specific Feedback
AB tests often measure user feedback only in terms of clickthrough rates. But a clickthrough rate is not specific feedback: it counts clicks indiscriminately, without answering who clicked or why. To really understand what’s going on, you need specifics on why one iteration resonates more than another.
Here’s an example: You have a webinar about your product which you hope will convert some prospects into buyers. The webinar content is the same, but the slides outlining it use different language—one version is very technical while the other is not. In typical AB testing, you would just track the number of clicks each version received. But that leaves a lot to be desired.
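To see what richer feedback looks like, consider tallying clicks per variant *and* per audience segment instead of keeping a single total. A minimal sketch (the event fields and segment labels are illustrative, not from any particular analytics tool):

```python
from collections import Counter

# Hypothetical click log: each event records which slide variant was shown
# and a user segment, so results can be broken down beyond a raw count.
events = [
    {"variant": "technical", "segment": "engineer", "clicked": True},
    {"variant": "technical", "segment": "manager", "clicked": False},
    {"variant": "plain", "segment": "manager", "clicked": True},
    {"variant": "plain", "segment": "engineer", "clicked": True},
]

# Count clicks by (variant, segment) pair rather than by variant alone.
clicks_by_variant_segment = Counter(
    (e["variant"], e["segment"]) for e in events if e["clicked"]
)
```

With even this small change, you can answer questions a single clickthrough number cannot: perhaps the technical slides win overall but lose badly with non-technical prospects.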
2. Misleading Feedback
AB testing is good at telling you which version is more popular, but does this mean the unpopular version should be trashed? No. A common mistake in AB testing is assuming that because one version is less popular, it should be eliminated. Don’t let results mislead you like this.
For example: You are marketing a new feature in your website or product. You release it to some of your users and not others, and you find that a majority of the users who had access to the new feature did not use it—and those without it were happy, too. You may be led to believe the feature is unnecessary and unwanted. But instead of assuming you should ditch it, run more tests to learn what about the feature didn’t resonate, rather than discounting its usefulness completely. Just because an AB test tells you what is popular does not mean the unpopular feature is dispensable.
3. Inadequate Response Rates
AB testing can get you responses, but that does not mean the sample is big enough to be meaningful. A company might receive 100 responses in a week of testing two versions of its homepage. But if that homepage receives 10,000 visitors a month, the sample for that test is too small to support any conclusions. (The math whizzes among us call this “statistical significance.”) Judge the feedback you receive from AB testing with an eye on the number of responses behind it. There is no golden number; samples can be too small, but they can never be too big.
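If you want to check significance yourself, one standard approach is a two-proportion z-test on the click counts. Here is a minimal sketch using only Python’s standard library (the function name is ours; in practice you would more likely reach for a stats package):

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled clickthrough rate under the null hypothesis of no difference.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

The sample-size point falls straight out of the math: 60 clicks out of 1,000 visitors versus 50 out of 1,000 yields a p-value well above 0.05 (no conclusion possible), while the same 6% vs. 5% rates at ten times the traffic yield a p-value below 0.01.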
Without specific metrics, an AB test reveals little beyond which version was chosen over the other. Why it was chosen remains a mystery. Vital information can be gleaned from a more in-depth look, however.
A more sophisticated collection of feedback, including embedding feedback within a product, can give marketers an idea of why customers chose to click on one version of the webpage instead of the other and what types of people preferred which page. This more complete vision allows product marketers to create more advanced and specific objectives, since they can finely tune and test every aspect of their online presence.