An ice cream dropped on the ground

A/B tests are the best tool to improve the conversion rate of any site (sales, subscriptions, or any other action you might be interested in). After creating two or more versions of any page or post on your website, you can know with data, not assumptions, which one works best.

In its simplest form, A/B testing randomly splits the traffic to the site into two groups so that 50% of the visitors see design A while the other 50% see design B. By monitoring how users in each group react, we can calculate the conversion rate of each group and, if there is a statistically significant difference between the two, declare a winning design.
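To make the 50/50 split concrete, here is a minimal sketch in Python of how a tool might assign variants (an illustration only, not how Nelio A/B Testing or any specific tool actually implements it): hashing the visitor ID makes the split deterministic, so the same visitor always sees the same design on every visit.

```python
import hashlib

def variant_for(visitor_id: str, test_id: str = "homepage-test") -> str:
    """Deterministic 50/50 split: the same visitor always gets the same design."""
    digest = hashlib.sha256(f"{test_id}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Conversion rate per group = conversions in that group / visitors in that group.
print(variant_for("visitor-42"))  # always returns the same variant for this visitor
```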

But while the theory is sound, watch out for the practice. Before you start running A/B tests on your pages, make sure you don't make mistakes that could lead to misleading results. That would be a waste of money and time you should avoid at all costs.

Let’s see what mistakes you should avoid when creating A/B tests…

#1 Comparing Different Time Periods

It can be tempting to analyze in detail the behavior and conversion of your visitors on any of your pages for a period of time, then make changes to the page and re-analyze the behavior on your site.

In this context, it seems easy to conclude that if you obtained better results during one of the two periods, it is because the version tested during that period is better than the other.

Gif of woman saying Actually

Well, I’m sorry to tell you that’s not a good idea. To begin with, the quantity and quality of traffic that reaches your website can vary between days and weeks. The same page might convert at 15% one day or week and at 12% the next.

This change may be due to factors totally unrelated to your website (economic climate, the mood of your visitors, etc.). For example, a tweet that goes viral increases traffic to your website, but that traffic is of lower quality and converts less once it arrives. Or the opposite: you launch a new Google ad with an offer, and the percentage of visitors who reach your website through the ad and then convert is very high.

Therefore, the only way to take all these factors into account is to perform an A/B test where each variant is shown to a proportional number of visitors during the same period. By randomly showing the two (or more) versions to visitors in the same week, there is a much greater probability that the results will be statistically meaningful.

So forget about doing a before-and-after test, as you will never know for sure whether the results are reliable enough to decide which version is better.

#2 Finishing The Test Early

Now that you know you have to run A/B tests, you’ve just installed Nelio A/B Testing and you’re excited to launch your first page test.

Gif of a man saying OMG I can't wait

You start collecting results and the variant quickly gets more conversions than the original page. Why waste time? Why not make the new variant permanent right away? That will increase conversions faster…

Volume

A/B tests are based on statistics and, as such, it’s important to keep in mind that your sample size matters—it must be representative. In other words, we must ensure that we have obtained enough results from our test for the test to adequately reflect reality.

If you use a specialized tool to perform A/B tests, such as Nelio A/B Testing, the tool itself will inform you if your test results are statistically significant.

Results of a test in Nelio A/B Testing
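If you’re curious about the math behind that significance check, here is a simplified sketch of a two-proportion z-test in Python (the numbers are hypothetical, and dedicated tools may use more sophisticated methods):

```python
from statistics import NormalDist

def conversion_p_value(conv_a, visits_a, conv_b, visits_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = (p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical example: 120 conversions out of 1,000 visits vs. 150 out of 1,000.
print(conversion_p_value(120, 1000, 150, 1000))  # ≈ 0.05, borderline at the usual 5% threshold
```

A p-value below 0.05 is the conventional bar, but with small samples it is easy to cross it by pure chance, which is exactly why finishing a test early is so dangerous.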

Then you might think that, since you have a website with a lot of traffic, you will quickly find out what works best.

Duration

Watch out! There’s another factor to consider when creating A/B tests. It’s not only a matter of volume: make sure your tests last long enough to cover a full cycle of your business. If, for example, the type of traffic you receive is different on weekdays and weekends, make sure the test runs at least long enough for all types of traffic to be represented.

I recommend the CXL calculator to explore in more depth how long an A/B test should run and how large it should be for its results to be statistically significant.
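To get a feel for the numbers, here is a simplified sketch of the standard sample-size formula for comparing two proportions (hypothetical values; calculators like CXL’s build on the same idea with more options):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect an absolute `lift` over `baseline`."""
    p1, p2 = baseline, baseline + lift
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance level (two-sided)
    z_b = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    n = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
         + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# Detecting an improvement from 10% to 12% needs roughly 3,841 visitors per variant.
print(sample_size_per_variant(0.10, 0.02))
```

Divide that figure by your daily traffic per variant to estimate the duration, and then round up to whole business cycles (for example, full weeks) so that every type of traffic is represented.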

Remember: drawing conclusions ahead of time is the same as making decisions by intuition.

#3 Ending The Test Too Late

Now let’s go to the other extreme: you’ve just launched a test and the variants are showing different results. Since you know you can’t come to conclusions too soon, you decide to wait until you get conclusive data, right? Days go by and no variant seems clearly better than the other, so maybe you have to keep waiting…

Well, we’ve already said not to rush it, but if you’ve gone three months without conclusive results, you’re probably wasting your time. And you’re wasting it for the simple reason that you’re missing out on other tests that could give you much more relevant information on how to optimize conversion.

#4 Relying On The Tests Of Others

We can and should learn from others, but I’m afraid other people’s statistics do not necessarily apply to your website. For instance, you’ve probably read an article claiming that red is the call-to-action button color that works best.

Gif of someone clicking a red button

The theory sounds great, but in reality you shouldn’t assume that result will hold without first testing it on your website. Other people’s case studies are not useful for making decisions about your particular case; they are useful for giving you new ideas of things to test.

#5 Changing a Running Test

Perhaps one of the worst mistakes you can make after launching an A/B test on your website is to stop it, make certain changes, and resume it. At most, fix a small typo you detect, but don’t create new variants or change the ones that are already running. Such changes can totally invalidate the results.

If you find yourself in a situation where you would like to make changes to a running A/B test, stop it completely and run a new test. The two tests, taken separately, will give you valid results. If you want, you can try to interpret the results of both together, but be aware that you will not obtain reliable results if you modify a running A/B test.

#6 Running Many Tests Simultaneously

Keep your A/B testing simple. If you try to analyze too many changes at once, you will find it especially difficult to interpret the results of each test. You can create two or more variants of a page that show radically different versions of it. That’s fine; in fact, studies suggest that major changes can have a greater impact on conversion.

However, do not try to run many tests simultaneously. Each new test you create splits your traffic again across all possible combinations of its variants with those of the tests already running. The more combinations you end up with, the more your traffic is divided and the longer it will take to obtain statistically significant results, as the quick sketch below illustrates.
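A back-of-the-envelope calculation (hypothetical test sizes, assuming tests split traffic independently) shows how fast each slice shrinks:

```python
from math import prod

def traffic_share(variants_per_test):
    """Fraction of traffic that sees each specific combination of variants."""
    return 1 / prod(variants_per_test)

print(traffic_share([2]))        # one A/B test          → 50% per combination
print(traffic_share([2, 2]))     # two simultaneous A/Bs → 25% per combination
print(traffic_share([2, 2, 3]))  # add an A/B/C test     → ~8.3% per combination
```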

#7 Not Knowing The Purpose Of Your Test

And this is the most important mistake of all.

The goal of an A/B test is to optimize conversion. Okay. From here, apply common sense. Every page and post on your website has a purpose: to inform about the products and services you offer, to make yourself known, to explain how the product or service works, to ensure you comply with data protection laws, etc. Although all of them are important, some are much more critical in the conversion funnel.

For example, if you run an e-commerce site, the most relevant pages will be the ones that list all the products, the product detail pages, and the checkout page. So, to start with, don’t get tangled up analyzing other pages and focus on optimizing these.

Also, think carefully about the most important objectives on each of them:

  • Getting the visitor to buy the product.
  • Getting the visitor to add the product to their wish list or shopping cart.
  • Having them look at other related products.
  • Having them recommend the products to others.
  • Having them contact you for more information.

Identify your goals well and think about what improvements can help you achieve them. From there, the tests you design will make sense and actually help improve your website.

Featured image by Sarah Kilian on Unsplash.
