Your landing page is underperforming; your email newsletters and push notifications are ineffective. What is the cause? What should be changed to improve performance? Even the most experienced marketer cannot answer these questions with 100% accuracy, and guessing risks losing time, money, and prospective customers. 88% of users are unlikely to return to a website after a bad experience. Besides, mistakes are expensive: companies worldwide lose an estimated $1.42 trillion due to poor UX design.
How can you find the right content option? The answer is relatively simple: use data from A/B/n and MVT tests instead of guessing. In this piece, you will learn the difference between these types of tests, how to conduct them, and why they are vital for your business.
What is A/B/n testing, and why should you conduct it
A/B/n testing is a method that involves a comparison of the effectiveness of several content options on your website page, in your email newsletter, and in other campaigns. Individual elements or several different templates are tested during this process. A/B/n testing is aimed at understanding which version works better conversion-wise.
In general, A/B/n testing is an extension of A/B testing. Split testing (another term for an A/B test) compares only two versions, with the traffic split 50/50 between them.
During A/B/n testing, you can compare more than two versions, with the traffic divided equally among all of them.
For example, a marketing team offers four versions of a new website design and cannot decide which one is better. In this case, they use A/B/n testing, during which each version will get 25% of the traffic.
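As a rough sketch of how such an even split can be implemented (an illustration, not any particular tool's API), the snippet below hashes a user ID into one of N buckets, so each visitor is assigned a variant at random yet consistently sees the same one on every visit:

```python
import hashlib

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Deterministically map a user to one of N variants with ~equal traffic."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # uniform bucket from the hash
    return variants[bucket]

designs = ["A", "B", "C", "D"]  # four competing designs, ~25% of traffic each
print(assign_variant("user-42", designs))
```

Hashing (rather than a random draw per page view) matters: a returning user must keep seeing the same variant, or the measured behavior is meaningless.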
Some examples of elements that can be tested by the A/B/n method:
- a full-page design;
- text and conversion buttons design;
- the layout of forms for data and button positions;
- sizes of objects: buttons, text, forms;
- prices of goods;
- a product description;
- headings in product descriptions;
- product illustrations;
- text length;
- pop-up windows;
- lead magnets.
Why should you conduct this testing? You will not have to rely on your intuition and marketers’ guesses. You will make data-based decisions. This approach:
- reduces the risk of losing time and resources to errors;
- identifies the most effective content options.
Advantages and disadvantages of A/B/n testing
Reducing the risks of financial losses. For example, the easier it is for a customer to find the necessary product and place an order, the more likely they will buy from you. Conversely, inconspicuous buttons and unclear navigation lead to customer attrition and a loss of potential income.
Increase in conversion. Using this test, you will find the most clickable options that will lead your customers to purchase faster.
Testing different ideas. A marketing team can argue while developing a new design. Whose idea is better? Split testing will answer this question. It will test hypotheses and resolve the conflict peacefully.
Increase in traffic. It will happen if you reduce the bounce rate, which search engines take into account when ranking a website. A convenient landing page keeps users on the site longer, which is a good sign for search engines: if users are interested, they stay on your website, and the page can be shown more often.
Resource-saving. Testing several options at once is faster than arguing and testing one version of the content at a time.
New ideas for future campaigns. The behavior of users with different design options is a source of fresh, sometimes unexpected insights.
A large amount of traffic. For regular A/B testing, the traffic is divided into two parts. For A/B/n testing, it is divided into several equal parts. Therefore, the results will be valid only if there is enough traffic for each option.
Testing only one element. The test will show which option is better (e.g., the dark button is clicked more often than the light one). Other factors that have influenced the result will remain unknown.
Search engine sanctions. Search engines may suspect you of cloaking (a spam technique in which the page shown to users differs from the one shown to the search engine) when you test different versions of a website's design.
How to conduct A/B/n testing
- Identify the problem
We are looking for elements that work inefficiently or do not bring any results.
For example, users rarely click the "additional goods" button on the website or do not go to the order page from the email.
Google Analytics tools, WebVisors, and analysis of users’ requests to technical support will show the “weak spots.”
- Form a hypothesis
The hypothesis reflects which action will improve the performance.
For example, if you change the color of the "additional goods" button to a brighter one, the number of clicks will double.
- Create options for the test
You can test one hypothesis at a time.
For example, there are several color options for the "additional goods" button: red, blue, and yellow. You cannot change the font size and the button position at the same time.
However, you can test the whole design at once.
- Check the metrics and elements for the test
The test results cannot be evaluated without reference to metrics — the number of impressions, clicks, and other indicators you plan to improve. The tracking tools should work before the test starts. Do not forget to check the buttons, forms, and display of the elements you will test.
- Determine the sample size
For a representative result, you need a certain number of visitors who will see your page. How do you calculate it? Use dedicated online calculators such as Optimizely, AB Testguide, and others.
A/B/n testing example
Let’s see how it works:
- We indicate the current conversion.
- Then we enter, as a percentage, how much we want to increase the rate (the minimum visible effect).
For example, the conversion rate is currently 5%, and we want to improve it by 10% (i.e., to 5.5%). We enter these details into the calculator and get the number of unique views needed for each variation — 31,000.
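Such calculators typically apply standard power-analysis formulas. As an illustration (assuming a 5% significance level and 80% power, common calculator defaults), the sketch below uses the normal-approximation formula for comparing two proportions and lands near the figure above:

```python
from math import sqrt
from statistics import NormalDist

def sample_size(p_base: float, relative_lift: float,
                alpha: float = 0.05, power: float = 0.8) -> float:
    """Per-variant sample size for a two-proportion test (normal approximation)."""
    p_var = p_base * (1 + relative_lift)          # conversion rate we hope to reach
    z_a = NormalDist().inv_cdf(1 - alpha / 2)     # two-sided significance quantile
    z_b = NormalDist().inv_cdf(power)             # power quantile
    p_bar = (p_base + p_var) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
    return num / (p_base - p_var) ** 2

# Baseline 5%, desired relative lift 10% -> roughly 31,000 views per variation
print(round(sample_size(0.05, 0.10)))
```

Note how quickly the requirement drops as the expected lift grows: chasing small improvements is what makes these tests traffic-hungry.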
- Start testing
All options must be tested simultaneously. The outcome depends on the time of day, day of the week, season, and other factors. Stop the test when each option gets the required number of views.
- Calculate the result and make a decision
Then we return to the online calculators, this time to calculate the statistical significance of the results. To do this, we specify the number of conversions and the sample size for each option. The calculator will show whether the results truly differ or whether there is no significant difference.
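If you prefer to verify significance yourself, one common approach such calculators use is a two-proportion z-test. The figures below are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical result: 5.0% vs ~5.6% conversion on 31,000 views each
z, p = z_test(1550, 31000, 1750, 31000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (conventionally 0.05) means the difference is unlikely to be chance; otherwise, keep the current version or keep testing.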
We decide according to the results: to implement a new design/a design element/change the content entirely or to start further testing.
What is Multivariate Testing (MVT)
MVT, or multivariate testing, simultaneously tests the effectiveness of several combinations of different elements on the website pages and other Internet resources. For example, MVT testing will show which combination gives more conversions.
The same elements as in A/B/n tests are tested here. However, MVT testing goes deeper: the elements are tested in combination with one another. Unlike A/B/n testing, MVT does not compare complete designs.
For example, we want to test the effectiveness of the CTA button. There are two text options, "order" and "purchase", each of which can be red or yellow, in semibold or normal font. There are also two caption options: "now" or "five people are viewing the product." Thus, we get 2 × 2 × 2 × 2 = 16 combinations.
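The combination count above is just the Cartesian product of the element options, which can be enumerated in a couple of lines:

```python
from itertools import product

texts    = ["order", "purchase"]
colors   = ["red", "yellow"]
weights  = ["semibold", "normal"]
captions = ["now", "five people are viewing the product"]

# Every combination of the four two-option elements: 2**4 = 16 variants
combinations = list(product(texts, colors, weights, captions))
print(len(combinations))
```

Each added element doubles the count, which is why MVT traffic requirements grow so quickly.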
MVT testing: advantages and disadvantages
Increase in conversion. Multivariate testing will demonstrate which combination of elements your audience responds to best — i.e., people click on your products and make purchases, or complete another conversion such as a subscription.
Optimization without global changes. The test improves the performance without a complete redesign.
Time-saving. One MVT test covers several variables and their interactions at once, instead of a series of separate tests.
This process takes a large amount of traffic. The greater the number of combinations, the bigger the audience needs to see them. Therefore, it is possible to conduct MVT testing only when:
- you have a sufficient contact database (e.g., for testing an email newsletter);
- you have a large amount of traffic on the website when you test the effectiveness of its pages.
It is a time-consuming process. You will have to wait until each combination collects the required number of views.
There is a possibility of false positives. The more options you have, the higher the risk of accidental clicks.
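One common guard against such false positives (an illustration of a standard statistical practice, not a prescription) is to tighten the significance threshold when many variants are compared at once, e.g., with a Bonferroni correction:

```python
def significant(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Flag p-values that stay significant under a Bonferroni correction:
    each comparison must clear alpha divided by the number of comparisons."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Four variants compared against a control: only the strongest results survive
p_values = [0.001, 0.020, 0.004, 0.300]
print(significant(p_values))   # 0.020 no longer clears 0.05 / 4 = 0.0125
```

The more combinations you run, the stricter each individual comparison must be — another reason to keep the number of tested elements modest.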
How to conduct MVT testing
The most common methods of MVT testing are the full factorial and the fractional factorial methods.
Full factorial multivariate testing
This method tests all combinations with the same amount of traffic. It is statistically accurate but requires a large amount of traffic. Let’s see how it works. Three factors are tested — A, B, and C — where A is the CTA button; B is the inscription on the button; and C is the background of the section where the button is positioned.
In this example, each factor has two levels (let’s denote them as +1 and -1).
For example, the CTA button is yellow for factor A (+1), and the orange one will be -1 in this case.
Then all the variants of the factors are combined with each other. For three two-level factors, the full factorial design contains 2³ = 8 combinations:
|Run|A|B|C|
|---|---|---|---|
|1|+1|+1|+1|
|2|+1|+1|-1|
|3|+1|-1|+1|
|4|+1|-1|-1|
|5|-1|+1|+1|
|6|-1|+1|-1|
|7|-1|-1|+1|
|8|-1|-1|-1|
Fractional factorial multivariate testing
Factors and their variants are divided according to the same principle as the full factorial method. However, only some variants of combinations are tested. As a result, the accuracy of this method is lower, but less traffic is needed.
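Both designs can be sketched in a few lines. For the half-fraction, one standard choice of generator is C = A × B, which keeps half the runs at the cost of confounding C with the A-B interaction:

```python
from itertools import product

# Full factorial: every combination of three two-level factors A, B, C
full = list(product([+1, -1], repeat=3))              # 2**3 = 8 runs

# Half-fraction (2^(3-1)): keep only runs where C = A * B,
# trading some resolution for half the required traffic
fractional = [(a, b, a * b) for a, b in product([+1, -1], repeat=2)]

print(len(full), len(fractional))   # 8 runs vs 4 runs
```

Every run of the fraction is also a run of the full design; you simply observe fewer of them.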
Stages of MVT testing
The sequence of actions for MVT testing coincides with that of A/B/n testing (see above for more details):
- Identify a problem. What difficulties does the user have on your website? Which section of the website is inefficient?
- Formulate a hypothesis. Will a change of color, size, and font of the button increase your conversion rate?
- Create options for testing.
- Check the metrics and operability of the elements for the test. Are all the necessary metrics tracked? Are all the elements of your website/email and other objects (buttons, forms) working correctly?
- Determine the sample size. Use one of the online calculators—for example, VWO.
- Start testing. Conduct the test until the moment when the necessary traffic is collected.
- Analyze the results. First, check their validity using online calculators. Then draw conclusions: change the website or run a new test.
Both types of testing have common features:
- you can test 2+ variants of elements on the website using traffic distribution;
- you can test all options simultaneously;
- these tests help to increase your conversion rate;
- their testing stages coincide;
- both tests require a sufficient amount of traffic.
Despite all these similarities, A/B/n and MVT testing have significant differences:
| |A/B/n testing|MVT testing|
|---|---|---|
|Test object|One element or a whole form. The difference may be, for instance, only in the background color: red, blue, or white. Or completely different page designs are tested.|Combinations of different elements.|
|Purpose|Testing ideas for radical changes.|Testing ideas for optimization after these changes.|
|Time investment|You can quickly get the result.|Long cycle due to the large number of combinations.|
- A/B/n and MVT tests are aimed at the same thing: increasing the number of conversions. They differ in the object of testing. A/B/n tests several variants of one content element, while MVT tests the effectiveness of combinations of different elements.
- It is reasonable to conduct MVT tests as an extension of A/B/n tests. A/B/n testing is conducted for global changes, while MVT testing is conducted for optimization.
- Both types of testing require a large amount of traffic, so testing can take longer. However, the result is worth it: all decisions on changes are made based on data, not the opinions of marketers. This approach saves the team time in searching for compelling content and reduces the risk of errors.