
In this blog we’ve discussed at length all the advantages A/B Testing brings when it comes to improving the conversion rates of your website and understanding your visitors better. And, to be honest, this shouldn’t surprise you, because, you know, that’s what we do for a living. I’m also pretty sure that you’ve read plenty of articles all over the web presenting amazing uplifts in people’s conversion rates just because they ran a successful split test.

What you probably don’t know is that split tests fail too. And they tend to fail a lot. Does this mean that split testing is a fraud? Absolutely not! Keep reading and you’ll learn everything you need to know to improve the effectiveness of your CRO efforts.

Success Stories Are Great, But…

…they set high expectations. It’s like winning the lottery. Or being a successful entrepreneur. Everybody talks about that guy. The one who, out of nothing, suddenly got a billion dollars. Wow! Who doesn’t want to be that guy? I sure as hell do! 😉 But what about all those people who tried and failed? Those who didn’t make it? No one likes to admit they’ve failed and, yet, here they are.

Having faith in what you’re doing and using a split testing tool properly can definitely contribute to outstanding results. But mark my words: some of your tests are going to fail. If you’re not ready for that, you or your client might give up on a test (or even split testing in general) too soon, ignore the results of any test that didn’t match expectations (even though you can learn a lot from them), or evaluate your conversion rate based solely on the exceptional, yet rare, wins. So let’s get rid of these false expectations and make sure that, whatever we do, we lose no time, money, or opportunities, shall we?

A/B Testing: the Last Step in CRO

After two years of Nelio A/B Testing, there’s one thing I can tell you: customers who come to our tool expecting “magic” results hardly ever see the success they were hoping for. Why does this happen?

A/B Testing is only one step in the conversion optimization journey—and, by the way, it’s at the end of the journey. So what’s this “conversion optimization journey” all about? The ultimate goal you pursue when subscribing to a split testing service like ours is to increase the effectiveness of your website. Your real goal, thus, is to sell more products, or get your visitors to share more posts, or get more subscribers to your newsletter. How do you do that? How can you be more effective at communicating your message and engaging your visitors? Well, nobody knows!

The first step of the conversion optimization journey is understanding your visitors. You need to know who they are, what they’re looking for, why they are on your website in the first place. What they like, what they don’t. What’s bothering them, or what’s missing from your website. All these insights should be acquired before designing a split test, so that you can hypothesize which changes will lead to better conversion rates.

[Screenshot of heatmaps in Nelio A/B Testing]
Heatmaps are one of the most powerful tools available to understand what users do on your website.

Use tools like heatmaps or a usability testing suite to highlight the weak spots of your website, and then imagine ways to strengthen them. It’s that easy! Look. Think. Test. That’s the key to successful tests!

Now that you know that unsuccessful tests aren’t uncommon and how split testing should be approached, let me wrap up this section with a few notes on what you should really expect from your split tests:

  • Split Testing works in the long run. Think of split testing as a marathon, not a sprint. A/B Testing is a powerful mechanism for improving your conversion rate… but you’ll rarely see huge uplifts all at once. Each test counts. Play the long game and be ready to work little by little for weeks, following a disciplined process.
  • Huge wins rarely occur. I’ve already mentioned this, but it’s really worth emphasizing. If you’re lucky, someday you’ll run a test that boosts your conversion rate. Just remember this is uncommon, and even though small improvements (+5%) might seem, well, “small”, they can have a huge impact on your annual revenue (see the quick arithmetic after this list).
  • Neutral and negative tests matter. When you test a change in your website, you expect it to be better than your control version. Unfortunately, this is not always true. Sometimes the results will be inconclusive (that is, both versions are equally effective at converting your visitors) or even worse than what you already had. Don’t despair! We’ll shortly discuss how to get insights from those tests too 😉
  • What works for me might not work for you. Looking at your competitors, reading conversion optimization blogs, and looking for new ideas on what to test in your blog is always recommended. But don’t blindly copy those tests and expect the same results: you need to understand why those tests worked and ask yourself whether the same reasoning applies to your website too.
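To put some numbers on that “small improvements” point (the figures here are made up, purely for illustration): suppose your store brings in $200,000 a year at a 2% conversion rate. A relative +5% uplift takes you to a 2.1% conversion rate and, all else being equal, adds about $10,000 in annual revenue. Better yet, small wins compound: four of them stack up to 1.05 × 1.05 × 1.05 × 1.05 ≈ 1.22, a 22% overall improvement. That’s the marathon paying off.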

Understanding Failure

I’ve been talking about “failure” and “failed tests” for a while now, and I haven’t even defined what those terms mean. But I’m pretty sure you get the idea. When we talk about “failed tests”, we’re talking about a split test whose outcome doesn’t match our expectations. We created this test expecting the alternative to be far better than the control version, and… oops!, its conversion rate is worse. Or it isn’t, but it’s not better either.

As you can see, there are two types of failure:

  1. Neutral tests. The test resulted in neither a winner nor a loser. Results are “inconclusive”, for neither version is better than the other. It’s like we wasted our time! Here you can see a list of 6 A/B tests that did absolutely nothing for the guys at Groove.
  2. Negative tests. The variant we created resulted in fewer conversions. Better remove this test before anyone sees it, right?

If a test didn’t end up as expected, never discard it without looking into it—ask yourself why it didn’t match your expectations, try to understand where things went “wrong”, and get ready for the next split test!

Embrace “Failed” Tests

Failed tests are not failures. They simply tell us that our original assumptions were wrong. So, for example, if you ended up with a neutral test, try to answer the following questions:

  • Are the variants different enough? Sometimes we test small details of our pages. The color of a button here, the order of a set of icons there… These changes may have an impact on your conversion rates, but they may also be so subtle that the test results in a draw. You’ll need to be more creative and broaden the scope of the experiment.
  • What hypothesis does this result invalidate? You created this “failed” test because you thought it’d change your visitors’ behavior. But it didn’t. For instance, you decided to add a “30-day money-back guarantee” to your pricing page to reassure your visitors that, should they not like your service, they can unsubscribe and get their money back, risk-free. But this didn’t improve your conversion rate! So maybe you tried to answer the wrong question. Maybe your visitors are not worried about getting their $10 bill back. Something else is bugging them, and that’s what you need to find out.

What about negative tests? Well, similar questions apply:

  • Was the hypothesis wrong? Again, the results are exactly the opposite of your expectations. Why? What were you expecting exactly and what happened in the end?
  • Why did the original version perform better? Look at the original version and compare it to your alternatives, and try to understand what makes the former better.
  • What does this test teach us about our visitors? You just learned your assumptions were wrong, and that’s great. But can you get further insights?

Key Takeaways

In the end, the most important question is this: how do you turn a failed A/B test into a win?

  1. You must have a clear goal. It’s the first step of A/B testing. Propose a hypothesis (e.g. “if I [change this], then [the number of visitors that do that] will increase/decrease [because they…]”).
  2. Test changes that are relevant. Be bold in your tests; don’t change things that will go unnoticed or have nothing to do with your goal.
  3. Wait for statistical significance. Never jump to conclusions too soon (see the sketch after this list for what this means in practice).
  4. Rinse and repeat. If your test failed, don’t dismiss it. Try to understand why it failed, go back to your original hypothesis, rethink it, and prepare a new experiment to test it again.
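If you’re curious about what “statistical significance” actually boils down to, here’s a minimal sketch of a two-proportion z-test in Python, using only the standard library. The visitor and conversion numbers are hypothetical, chosen just for illustration; your testing tool (Nelio A/B Testing included) runs this kind of check for you behind the scenes.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference between the two rates.
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = erfc(abs(z) / sqrt(2))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 2,000 visitors per variant,
# 100 conversions on the control, 130 on the variant.
p_a, p_b, z, p_value = two_proportion_z_test(100, 2000, 130, 2000)
print(f"control: {p_a:.1%}, variant: {p_b:.1%}")  # control: 5.0%, variant: 6.5%
print(f"z = {z:.2f}, p-value = {p_value:.3f}")    # z = 2.04, p-value = 0.042
# A p-value below 0.05 is the usual threshold for declaring a winner;
# with these numbers the result is (just barely) significant.
```

The exact statistics your tool uses may differ (Bayesian approaches are common too), but the takeaway is the same: until the p-value, or its Bayesian equivalent, crosses your threshold, the “winner” you see in the dashboard may still be noise.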

Finally, let me suggest reading You Can Learn More From Failure Than Success, by Max Nissen.

Featured image by Loco Steve.
