When designing your site, you want to know which features perform best: specifically, which ones produce the most click-throughs and purchases.
Whether the focus is your graphical elements, your content or your calls to action, A/B testing is a simple way for site owners to compare different layouts, designs and copy, and to measure the success of online marketing campaigns. By splitting traffic between the two options and choosing a metric that defines success, you can see which option outperforms the other.
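Most A/B testing tools handle the traffic split for you, but the underlying idea is simple enough to sketch in a few lines of Python. This is purely illustrative (the visitor-ID scheme is a made-up assumption, not any particular tool's API): hashing a stable visitor ID splits traffic roughly 50/50 and guarantees a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant 'A' or 'B'.

    Hashing the visitor ID keeps the split roughly even and ensures
    the same visitor sees the same variant on every visit.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

The key property is consistency: if a visitor bounced between versions on repeat visits, their behavior would muddy both buckets.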
And the best part is, you don’t need tech knowledge or expensive software to do it. Here are 5 rules for A/B testing that will help you get the most out of your site:
Start with a theory.
It’s not rocket science, but it is the scientific method. As with all testing, you should start your A/B testing with a theory that you’re trying out. For example: “the wording on this call to action will prompt more click-throughs than the wording on an alternative version”.
This defines exactly what you are testing and which variable you are testing against.
Choose your variable.
You have your call to action. Now you need a variation of it to test against. Change only one thing, and make sure that all other factors remain the same. In other words, don’t change the page layout or graphics, don’t reword the content, and don’t change colors or font sizes. Every other element on the page must stay constant so that it doesn’t affect the results.
Define success.
How will you measure the success of the control against the variable? Choose a single metric that defines success: the number of clicks, the conversion rate, or the number of pages viewed.
Keep in mind that having too many metrics muddies the waters and makes it difficult or impossible to measure the success of your variables.
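To make this concrete, here is a minimal sketch of one common metric, conversion rate. The function name and the counts below are hypothetical examples, not figures from any real test:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who completed the goal action (click, purchase, etc.)."""
    if visitors == 0:
        return 0.0
    return conversions / visitors

# Hypothetical counts: 48 of 1200 control visitors converted,
# 66 of 1180 variation visitors converted.
rate_a = conversion_rate(48, 1200)
rate_b = conversion_rate(66, 1180)
```

Whatever metric you pick, compute it the same way for both versions; a single, consistently measured number is what lets you compare them at all.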
Test simultaneously.
Testing the control before the variable, or vice versa, can skew results. What if an external factor, such as a sale or promotion, runs during one test but not the other and affects its performance? Always test simultaneously to eliminate the possibility that differing external factors could affect results.
Keep testing.
Give your A/B testing a good run. Don’t conclude your tests too early. Doing so might keep you from getting the full picture over time, and thus prevent you from making an accurate assessment. After all, a spike in conversions could be the result of factors unrelated to your test, and may artificially inflate the results for one variation.
If you’re using an A/B testing tool, it most likely will report what’s known as “statistical confidence” in the results. This is a measure of how likely it is that the observed difference between the control and the variable is genuine rather than random chance, and it grows as you collect more data over time.
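Your tool computes this for you, but for the curious, one common way to derive a confidence figure is a two-proportion z-test, sketched below. The counts are made up, and this assumes reasonably large samples; real tools may use different methods:

```python
from math import erf, sqrt

def confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence (0 to 1) that two conversion rates truly differ,
    via a two-sided two-proportion z-test. Assumes large samples."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF; confidence = 1 - p
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value
```

A common rule of thumb is to wait for 95% confidence or better before declaring a winner; below that, the difference you see could easily be noise.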
If done correctly, A/B testing gives site owners and businesses an accurate picture of which design and content elements are working, and which ones are not. It offers data that is critical to on-site optimization and, ultimately, increased leads and sales.