A/B Testing is a great method for figuring out the best strategy to engage your customers. But, unfortunately, it’s not a magical, instantaneous tool—it relies on your understanding of your business, customers, and website, and setting up new tests and getting meaningful results always takes time. In order to maximize your marketing efforts, you might be tempted to run multiple A/B tests at the same time.
Nelio A/B Testing can run more than one test at once. What does that mean? If you test your landing page and your pricing page at the same time for two weeks, you’ll get results faster than if you test one page for two weeks and then the other for two additional weeks. Since you want to grow faster, that approach makes perfect sense, doesn’t it? Well, I wouldn’t be so confident! Will this approach really speed up your testing program, or will one experiment affect the outcomes of the other, rendering the results useless?
Today we’ll discuss the pros and cons of running multiple tests at the same time, and we’ll see what can be done within Nelio.
When Running Multiple A/B Tests, This is What You Should Keep in Mind
So, you’ve decided to run two tests at the same time: one on your front page and one on your pricing page. Let’s assume a visitor comes to your website, landing on the front page. She’s now part of the first test. She takes a look at what you offer and goes to the pricing page. She’s now part of the second test too. Finally, she likes what she sees so much that she ends up buying something. That’s a conversion! “Great!”, right? Well, not so fast…
Try answering the following questions:
- Which test should get credit for the conversion?
- Did one test affect the outcome of the other?
The first question is about attribution. When you have more than one experiment running and they all try to fulfill the same goal, it’s very difficult (if not impossible) to tell which one was actually responsible for a conversion. In our example, did your visitor complete a purchase because of the landing page or because of the pricing page?
You might be thinking: “well, I’m running two different experiments with different goals, so I can give credit to the proper experiment without a doubt”. Let’s take a look at the running example, and let’s assume that the goal of your landing page test is to direct users to the pricing page, whilst the goal of the pricing page test is to complete a purchase. In this scenario, you’d say that the visitor completed a purchase because of the pricing test (and that she went from the landing page to the pricing page because of the other test). Are you sure that’s right? If so, then…
There’s another thing called interactions (related to the second question above). Can you tell me, for sure, that the visitor completed the purchase because of the pricing page test? Maybe the landing page test convinced her of the excellence of what you offer, and it was the landing page that actually “made the sale”. In that case, it doesn’t really matter which version (A or B) she saw on the pricing page—the landing page is what drives purchases, even if you didn’t foresee it. Or maybe a certain version in the pricing test works better if the user saw a certain version of the landing page, and the former test worked only because the latter had an impact on it. One test can affect the results you get in another test and, unfortunately, it’s not always easy to tell when that’s happening.
Hopefully, it’s now clear that running multiple experiments at the same time entails some complexities you have to be aware of. They make it difficult to tell which test has to get credit for a conversion and, anyway, there can always be hidden, unexpected interactions between them that you should not ignore.
Successful A/B tests depend on several factors. You need to have a good understanding of your business, your customers, and your website to come up with interesting hypotheses to test. The overall success of your marketing strategy based on A/B testing will depend, in the end, on the number of tests you run, how many of them produced successful results, and the impact they had on your sales, subscriptions, and so on.
If you decide to run fewer tests to reduce data pollution (that is, the impact one experiment has on the others), you’ll get more accurate results, but you may end up growing at a slow pace. If, on the other hand, you run multiple tests at once, you’ll face all the “problems” we’ve discussed before, but you will be testing a lot of stuff all at once and, hopefully, you’ll find combinations that will help you grow faster. So, what should you do?
Unfortunately, there’s not a one-size-fits-all answer. It really depends on what your needs are at each point… that’s why I’m sharing some of the basics you need to know before making any decision. In general, there are multiple strategies you can follow to deal with this issue.
1. Assume Tests Are Isolated
This is probably the easiest solution… and the one you’ve probably been applying until now. We’ve just discussed that, in principle, you can’t guarantee that one test won’t affect the results of another. But that doesn’t mean you can’t assume they don’t. If you run two tests at the same time on two different pages with two different goals, you might want to assume that there’s no way one experiment impacts the other. This is a valid assumption, though it might be wrong.
There are situations in which the overlap between both experiments is pretty small—in other words, the vast majority of visitors that participate in experiment 1 won’t participate in experiment 2, and vice versa. In those cases, the tests are clearly isolated from one another, and the assumption we’re making is perfectly safe.
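One way to sanity-check this assumption is to compare the sets of visitors that took part in each experiment. The snippet below is a minimal sketch with made-up visitor IDs (real testing tools would give you this data in their own format):

```python
# Hypothetical visitor IDs recorded for each experiment (made-up data).
test_a_visitors = {"v1", "v2", "v3", "v4"}
test_b_visitors = {"v4", "v5", "v6"}

# Visitors who participated in BOTH experiments.
overlap = test_a_visitors & test_b_visitors

# Fraction of all tested visitors who saw both experiments.
overlap_ratio = len(overlap) / len(test_a_visitors | test_b_visitors)
```

If `overlap_ratio` stays close to zero, treating the tests as isolated is a reasonable bet; a large value suggests you need one of the other strategies below.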
What I’m trying to say is: I want you to be aware of the assumption you’re making. If you design two experiments and you’re willing to “take the risk”, you probably thought about it and decided it’s a good course of action. Go for it! What’s really important in all testing strategies are thoughtful tests.
2. Mutually Exclusive Tests
You thought about your tests and realized they do overlap, so you need to isolate them somehow. The solution is easy! Run one test first and then the other, right? But imagine you want to run a couple of tests during the Christmas campaign or the holidays season because, for some reason, it’s when you get more visitors and the tests can have a greater impact. What then?
If you want/need to run both tests at the same time, you have to make sure they’re mutually exclusive. First of all, keep in mind that, depending on the capabilities of the testing tool you’re using, you may or may not be able to make tests mutually exclusive. The idea is quite simple: you’ll have to split your traffic into as many groups as tests you’re running, and make sure that each group of visitors participates in one test only. This way, there’s no way a visitor can pollute the results of one experiment with the variant she saw in another, simply because, from her point of view, there’s only one experiment running.
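This traffic split can be sketched in a few lines of code. The hash-based assignment below is a hypothetical illustration, not an API that Nelio (or any specific tool) exposes; the test names and visitor IDs are made up:

```python
import hashlib

# Hypothetical list of running experiments.
TESTS = ["landing_page_test", "pricing_page_test"]

def assign_test(visitor_id: str) -> str:
    """Deterministically assign a visitor to exactly one test.

    Hashing the visitor ID means the same visitor always lands in the
    same group, so she never participates in more than one experiment.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(TESTS)
    return TESTS[bucket]
```

The key property is determinism: returning visitors stay in their original group, which is what keeps the experiments mutually exclusive.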
3. Multivariate Test
Finally, there’s multivariate testing. In multivariate testing, you’re testing more than one component at the same time and you get results on all the possible combinations of those tests. The typical example of a multivariate test is a call-to-action button, for which you want to try out different colors (one test) and different labels (another test).
For instance, our landing page has an orange call to action labeled “Join Now”. We may want to try an alternative color (white) and two additional labels (“Start Today” and “View Pricing”). With such a setup, we end up having 2 x 3 = 6 different combinations (three labels on an orange button and the same three labels on a white button). Now, we’ll have to divide our traffic into 6 groups and wait for the results.
The more components you want to test all at once, the more combinations you get, and be aware that the number of combinations grows very fast: three aspects to test with three alternatives each? That’s 27 combinations. Just one more aspect to test? Now you have 81! Clearly, the problem with multivariate testing is that you can quickly end up testing a lot of combinations, and that might take a lot of time too.
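The combinatorial growth described above is easy to reproduce. The sketch below enumerates every variant of the call-to-action example with Python’s standard library:

```python
from itertools import product

# The two aspects of the call-to-action button under test.
colors = ["orange", "white"]                          # 2 alternatives
labels = ["Join Now", "Start Today", "View Pricing"]  # 3 alternatives

# Every (color, label) pair is one variant to split traffic across.
combinations = list(product(colors, labels))
print(len(combinations))  # 2 x 3 = 6

# The count grows multiplicatively: three aspects with three
# alternatives each already yields 3 ** 3 = 27 combinations,
# and a fourth aspect pushes it to 3 ** 4 = 81.
```

This is why each extra aspect is so costly: every new factor multiplies the number of traffic groups, and each group needs enough visitors to reach significance.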
Multivariate testing is especially useful if all the tests measure the same goal and sit in the same flow (in the call-to-action example, they’re all on the same page). With these tests you’ll get insight into all the possible combinations, which means you’ll get accurate and valid data. However, it’ll probably take much longer to produce meaningful results, and we both know that time is precious.
What About Nelio A/B Testing?
Nelio A/B Testing includes multiple types of experiments, and (usually) you can run them all at the same time. Thus, for instance, you can run two page experiments on two different pages at the same time, you can run a headline experiment and a CSS experiment, and so on. However, we implemented a couple of limitations that you should be aware of:
- You cannot run multiple tests on the same element at the same time. Thus, for instance, you cannot run two experiments on the landing page at the same time, nor can you run two theme experiments at the same time. If you think about it, that makes complete sense: if you’re testing your WordPress front page with two different tests, and a user lands on that page, which alternative should she see?
- You cannot run two global experiments at the same time. Global experiments include CSS, Menu, Theme, and Widget experiments. A global experiment impacts all the pages on your website (hence the name) and, for the sake of simplicity, we decided that you can only run one global experiment at a time. Why? Well, think about a Theme experiment. Each alternative theme might define its own (alternative) menus or widgets, and have its own CSS rules… It doesn’t make sense to run them all at the same time, does it?
Once we know we can run multiple tests at once within Nelio, we need to answer one last question: which strategies can be implemented using Nelio A/B Testing?
Nelio and the Previous Strategies
In order to overcome all the issues posed by multiple tests running all at once, we discussed three strategies: (1) assuming tests are isolated, (2) enforcing their isolation, and (3) running multivariate tests. Let’s take a look at the support the current version of Nelio A/B Testing (4.2.6) offers for each of them:
- Assuming Isolated Tests. This is fully supported. We’ve already seen that Nelio lets you run more than one test at once, so you can always assume that one test does not interfere with the others.
- Creating Mutually Exclusive Tests. Nelio A/B Testing does not permit it (yet).
- Multivariate Testing. Within Nelio, you can test as many aspects as you want of a given page and, therefore, you can easily run multivariate tests. However, there’s one thing you should be aware of.
Split testing tools with multivariate support usually work as follows: you define the alternatives of one aspect (two colors) and of the other (three labels), and the tool generates the six possible combinations. Nelio, on the other hand, uses a different approach: if you want to test different colors and labels of a call-to-action button on your front page, you’ll have to create the combinations you’re interested in manually.
Running multiple experiments all at once entails some complexities—it’s not always easy to know which test produced an uplift in your conversions, or whether there are any hidden interactions between your tests. But this isn’t a problem per se. Just remember: proper A/B testing mainly depends on your thoughtful decisions, so be aware of all these complexities and try to select the best strategy for each case.