Last updated: October 16, 2014.


Running multiple A/B tests on different pages of your website seems attractive because it lets you test things faster. Let's say you sell a SaaS product with a freemium model. You could run a test on your Homepage with the aim of improving its Conversion Rate (CR) for sign-ups, and at the same time run a test on your Pricing page to optimize your CR for paying plans.

This sounds great and logical, but as with everything in CRO, cutting corners often backfires. In this case, you will end up with false results on both of your tests.

And if you don’t realize this, you will actually be in a worse position than your competitors who are not testing. You will think you have solid results and modify your website accordingly, when in fact your results are false.

Run tests with 100% independent traffic flows

So, how many tests should you run on your whole website at the same time? In the vast majority of cases, only one. In some cases, two. For landing pages, it can be one per landing page.

The main rule is the following: you cannot have the same visitor passing through more than one test, so you must have independent traffic flows for each one of your running tests.
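As an illustration of this rule (a minimal sketch; the function name and test identifiers below are invented, not taken from any real testing tool), independent traffic flows can be enforced by deterministically bucketing each visitor into exactly one of the running tests:

```python
import hashlib

def bucket(visitor_id: str) -> str:
    """Deterministically assign a visitor to exactly one test's traffic flow.

    The same visitor always lands in the same bucket, so the two
    flows never mix and no visitor passes through both tests.
    """
    h = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16)
    return "homepage_test" if h % 2 == 0 else "pricing_test"

# A visitor's assignment is stable across visits:
assert bucket("visitor-42") == bucket("visitor-42")
```

The trade-off is that each test only receives half of your traffic, so each one takes longer to reach significance.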

Let's take the example I introduced above: a website selling a SaaS product on a freemium basis.

The main goal of the Homepage is to drive your visitors to sign up for the free product. The main goal of your Pricing page is to get them to convert to one of your paying plans.

Let's say you are testing two variations of your Homepage, A and B, in one test, and the only change you make is the color of the sign-up button. You are also testing two variations of your Pricing page, 1 and 2, in another test, with variation 1 being a much cheaper pricing scheme than variation 2. And you run them at the same time.

A typical flow for a freemium SaaS website is the following: the visitor lands on the Homepage, checks the Pricing page, then goes back to the Homepage to sign up.

If a visitor sees Variation 1 of the Pricing page when they look at the pricing, then of course they will be more likely to sign up, because the pricing is cheaper. So when they go back to the Homepage to sign up, it doesn't matter which variation of that test they see: they are already ready to sign up because of the cheaper pricing they saw. So your results for the test running on the Homepage will be heavily skewed by the test running on the Pricing page, and your results will be wrong.

For example, a visitor who sees the Homepage after seeing the cheaper Variation 1 of the Pricing page will be more likely to sign up than one who saw the expensive Variation 2, whichever color the sign-up button happens to be. This has nothing to do with the color of the button: if the color has an effect, you won't see it in the test results, which will be pure noise.

You cannot even rely on the accuracy of the Pricing page test, because visitors reaching the Pricing page have likely seen the Homepage before, and the test running there will skew their actions on the Pricing page. For instance, if you test two variations of your value proposition on the Homepage, and one of them is much more effective than the other, then visitors seeing the more effective Homepage variation will convert better no matter which Pricing they are shown.

The only way to avoid this type of testing issue is to keep the traffic flows going through your tests completely independent.

In some cases, you can run more than just 1 test

For most websites, this means that only one test can run at a time, because visitors access many pages of your site in the same session.

Test the back-end and the front-end simultaneously

For websites with a back-end whose CR can be optimized for upsells, cross-sells, etc. (for example, SaaS products sold as freemium), you can run two tests simultaneously: one on the front-end and one on the back-end, since they target different conversion events.

Landing pages

When using landing pages, you can of course run one test per landing page, since the traffic going through one landing page is usually completely separate from the traffic going through the others. The unit is in fact the landing page, and of course you cannot run more than one test simultaneously on each landing page.

Trickiest: testing two different events simultaneously

If you are very experienced with A/B testing and CRO, you can actually test two different events at the same time. The key is making sure that the conversions for each event don't impact each other. Having independent traffic flows is the only way to ensure 100% error-free testing, but with experience you can test multiple events on the same traffic flow; still, I wouldn't advise it in the vast majority of cases.

A classic example is testing for purchase CR on an ecommerce site while testing newsletter subscriptions at the same time. For this to work, you need to ensure that newsletter subscriptions cannot impact the purchase flow, which is rarely the case. But if you can ensure that, then you can run two tests on the same traffic flow. For experts only!

Multivariate testing

If you want to change your website in several places and test those changes at the same time, there are essentially two ways of doing that:

1. Running a multivariate test, and not a simple A/B test

Multivariate tests are designed for this use case: they let you change several independent elements and measure the effect of each change by running a complex statistical test in which each combination is presented to visitors and each element is evaluated against the others. My advice is to avoid this kind of test until you have built solid testing experience with simpler A/B tests first, because it is complex to configure and mistakes are easy to make.
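To see why multivariate tests get complex quickly, consider that every combination of element variations becomes its own arm that needs traffic. A short sketch (the element names are invented for illustration):

```python
from itertools import product

# Three elements, each with two variations, tested together.
headline = ["headline_A", "headline_B"]
button = ["button_green", "button_red"]
image = ["image_1", "image_2"]

# Every combination is a separate arm of the multivariate test.
arms = list(product(headline, button, image))
print(len(arms))  # 2 x 2 x 2 = 8 combinations to fill with traffic
```

With eight arms instead of two, each arm receives a fraction of the traffic, so reaching significance takes far longer; this is part of why the article recommends gaining experience with simple A/B tests first.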

2. Running a multi-page test, where you test your current website against one variation incorporating all the changes to all the pages you want to modify.

The main question here is whether those changes should be tested together or not. If you want to test the hypothesis that your visitors like seeing cats on your website, you can certainly run a multi-page test where you add some cats to your homepage, some cats to your pricing page, and so on. It won't be as good as two separate tests, but it will take less time to get results, and visitors should either like cats everywhere or not. But if it's cats on the homepage and dogs on the pricing page, then testing this as a multi-page test is likely a bad idea, and you should test them separately.


3 Responses

  1. Hi, thanks for this article.

    I don’t understand why testing on the pricing page will make you take a bad decision on the sign up button since both variations of the sign up button will receive the same proportion of ‘eager to login’ users.

    I do understand that your sign-up A/B conversions will rise, but in equal proportions; isn't this true?

    What I do believe is that there is a risk if one of your pricing variations gets a lot of traffic and favors one of your sign-up variations more than the other: finishing the sign-up test first and the pricing test later will likely make you make a mistake.

  2. Hi Julien,

    Have you considered that by running tests “separately” or in silos, you’d actually be releasing untested combinations? The reason is that you will have variations that were never experienced by the users in combination, but once you release the winners of the separated tests, that could be exactly what the user will be seeing. For a more detailed explanation of this, see https://blog.analytics-toolkit.com/2017/running-multiple-concurrent-ab-tests/

    Also, MVT is not really a solution, unless you want to run completely concurrent tests (they completely overlap in time). Such a solution will also suffer from baseline inflation, if the tested pages are different, making it much less powerful and thus wasting time/resources.

  3. I do not agree with the point of view of this article.

    In the example above, what you should do is set up the two tests orthogonally. This means you will have an equal proportion of people seeing the higher price in both variants of the homepage test.

    e.g. if both the homepage and the pricing tests split traffic 50% vs. 50%, then you will have:

    25% people seeing big sign up button and higher price,
    25% people seeing big sign up button and lower price,
    25% people seeing small sign up button and higher price,
    25% people seeing small sign up button and lower price.

    In this way, when you read the result of the homepage (sign-up button) test, the impact of high vs. low price affected your test variant and your control group equally, so the impact is offset. The same is true when you read the pricing test.

    Math aside, given the development cycles and iteration speed these days, asking companies to run one test at a time to achieve "statistical perfection" is simply not practical.
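The orthogonal split described in this comment can be sketched as follows (a hypothetical illustration; the hashing scheme and identifiers are invented): two independent 50/50 splits, salted with the test name, produce four roughly equal cells of about 25% each.

```python
import hashlib
from collections import Counter

def variant(visitor_id: str, test_name: str) -> int:
    """Deterministic 50/50 split; the test-name salt makes the
    assignments for different tests independent of each other."""
    h = int(hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest(), 16)
    return h % 2  # 0 or 1

# Cross-tabulate assignments for 10,000 simulated visitors.
cells = Counter(
    (variant(f"v{i}", "homepage"), variant(f"v{i}", "pricing"))
    for i in range(10_000)
)
# Four (button, price) cells, each holding roughly 2,500 visitors.
```

Whether this resolves the skew the article describes is exactly the point of disagreement between the comment and the article: the cells are balanced, but the two tests still share the same visitors.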
