A/B Testing Errors You DON'T Want to Make

by Louise Armstrong on August 19, 2013 in Analytics

A/B testing can be a powerful tool for improving your website design and conversion rates. If done incorrectly, though, A/B testing can be a waste of time. It could even wind up doing more harm than good.

Below are some all-too-common A/B testing errors that you do not want to make.

  • Testing random variables. Say you want to use A/B testing to optimize a landing page. Where do you begin? If you just start changing random variables on the page, without any thought as to why or how those variables should be changed, you will spend a lot of time chasing wild geese. All good experiments start with a hypothesis, or an idea of how a particular change might affect the outcome. The hypothesis may turn out to be wrong, but at least then you have actually learned something about your target market and how they think.
  • Optimizing to a local maximum. If you start with one page design and test many small changes to it, you will eventually end up with an optimized version of that design. The problem is, it may be the best version of what is fundamentally a bad design. That result is called a local maximum. It is similar to climbing a mountain: each test brings you to a higher elevation, but when you finally get to the top, you discover that it was a false peak. The true summit, which would deliver much better results, requires you to go back to the base and start over with a completely different design. The lesson here is to test big changes, not just small ones.
  • Not choosing a sample size. A/B testing is a statistical exercise. Based on a limited sample, you can predict, with a certain level of confidence, what the results would be if you applied the change to your entire audience. The catch is that you need a large enough sample size to have that confidence in the results. With a small sample, anything can happen: flip a fair coin 10 times and it might land on heads only once. That looks like a meaningful result, but it is just chance. If you stop the test as soon as something looks significant, you could very well come up with the wrong answer. (A rough sample-size calculation is sketched just after this list.)
  • Not trusting test results. You should not play favorites when testing. You may be thoroughly convinced that option "A" is better than option "B." If the test, performed correctly, shows that more people prefer option "B," accept the fact that you were wrong. "A" may look better to you, but that just means that you are the statistical outlier. 
  • Optimizing for the short term. If you continuously test and optimize for conversions, you will end up with a better conversion rate for the page being tested. But you may also end up overselling your product's benefits. After all, promising the moon is one way to get people to sign up. However, you are then left with a large number of unsatisfied customers, which can hurt you in the long run. Instead, you should focus on the true benefits to the consumer and optimize towards a positive long-term customer experience.
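To put the sample-size point in concrete terms, here is a minimal sketch of how you might estimate the number of visitors needed per variant before starting a test. The conversion rates are hypothetical (a 10% baseline and a hoped-for 12%), and it uses a textbook two-proportion z-test approximation with the usual 5% significance level and 80% power, not any particular testing tool's method.

    from statistics import NormalDist

    def sample_size_per_variant(baseline_rate, expected_rate,
                                alpha=0.05, power=0.80):
        """Approximate visitors needed per variant for a two-sided
        two-proportion z-test (standard formula, hypothetical rates)."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
        z_beta = z.inv_cdf(power)            # about 0.84 for 80% power
        p_bar = (baseline_rate + expected_rate) / 2
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (baseline_rate * (1 - baseline_rate)
                                 + expected_rate * (1 - expected_rate)) ** 0.5)
        return (numerator ** 2) / (baseline_rate - expected_rate) ** 2

    # Example: detecting a lift from a 10% to a 12% conversion rate
    print(round(sample_size_per_variant(0.10, 0.12)))  # roughly 3,800 visitors per variant

Even a modest lift like this one calls for a few thousand visitors per variant, which is exactly why stopping a test the moment the numbers "look" significant so often leads to the wrong conclusion.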

By properly utilizing A/B testing and avoiding these testing mistakes, you can effectively discover the best way to market your business online.


Louise Armstrong

Louise is a Senior Digital Strategist at Bonafide. A pop-culture addict with a passion for all things digital. She's Scottish by birth, but don't ask if she likes haggis...