Split testing with results

A good marketing campaign is not just about the content itself; it is often about the way that content is presented. So how do you know what works? The bad news is that there is no single answer. The good news is that you can test your way to the best setup for your brand.

Testing different types of layout or content goes by many names: split testing, A/B testing, multivariate testing and so on. The overall idea is the same: test your content and layout across your users and monitor a measure of success, such as clicks or sales. At the end of the test, the most successful variant is picked as the preferred setup for future campaigns.

A scientific approach

Preparing, executing and reporting on a split test requires a very structured way of thinking about your content. Try to keep your end-users in mind when designing a split test. A clear, scientific approach should also help you evaluate the success of the split test itself, in addition to identifying the most successful variant.
First off, try to have reference data. This could be the first variant in your test, or reporting from a recent, comparable period. Was there a rise in sales? What was the click rate of the previous newsletter? How many visits does the website usually get in a week? This reference data is important for judging whether any of your variants makes a difference at all. Who knows, maybe you had the right setup all along.

What do you expect?

Before setting up the details of your split test, make it clear what you expect from the test. Do you expect a specific variant to outperform the others? By how much? Making your expectations explicit is an important part of verifying the split test itself. Also, decide in advance what your actions will be if the split test does not live up to those expectations. What do you do if your new variant is expected to perform 25% better but only performs 15% better? You could settle for the 15% boost. You could run further tests. You could change the variations and try again until you hit at least 25%. Or you could scrap this version altogether and stay with the current setup.
If the cost of implementing a new variant in existing and new campaigns outweighs the performance boost, it may not make financial sense to implement it at all.

The difference of a single click

When planning your split test process, the variants are not the only important factor. You also have to consider the users who will be included in the test. Remember to consider both quality and quantity.

In this case, quality means how engaged the users are. How often do they open, click, visit your website and make purchases? Selecting users of higher-than-average quality for your test can skew the result and make your variants look better than they actually are.

Let’s assume I want to test a few new variations of my regular newsletter. The primary goal of my email is a higher click rate compared to my current newsletter. In my current newsletter to all subscribers, my click rate is around 15-16%. But when I set up my split test, I only test on users who have clicked an email within the last three months, rather than on all my newsletter subscribers. By deliberately selecting higher-quality users, the click rate of all the variants will be a lot higher than normal. This may be fine if I only compare the different variants within the split test, but compared to my current newsletter format, it would look like an amazing improvement, even though it might not be. Remember to keep your tests as true to the real thing as possible!
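To make that selection bias concrete, here is a minimal sketch in Python. The segment sizes and click rates are invented for illustration (only the roughly 15% overall click rate mirrors the scenario above), so treat the numbers as hypothetical, not as real campaign data.

```python
# Hypothetical illustration of selection bias in a split test audience.
# Segment shares and click rates below are made up for this example;
# only the ~15% overall click rate mirrors the scenario in the text.

segments = {
    # name: (share of subscriber list, expected click rate)
    "clicked in last 3 months": (0.30, 0.40),
    "less engaged subscribers": (0.70, 0.05),
}

# Click rate across the full list (what the current newsletter sees).
full_list_rate = sum(share * rate for share, rate in segments.values())

# Click rate if the test only goes to the recently engaged segment.
test_audience_rate = segments["clicked in last 3 months"][1]

print(f"Full list click rate:     {full_list_rate:.1%}")      # ~15.5%
print(f"Test audience click rate: {test_audience_rate:.1%}")  # 40.0%
```

Any variant tested only on the engaged segment will look far better than the current newsletter, even if the content itself changed nothing.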

The quantity of users also matters a lot. In some cases it’s tempting to test your variants on a fraction of your total target group and roll out the winner to the remaining users. While this can be a quick way of putting the winning variant to use, it also means you are testing on a smaller number of users. The smaller the number of users, the more important a single user becomes.

Let’s look at the numbers …

We’ll assume that I want to test three variants of a newsletter. Normally I’m sending to 100,000 users, with a 15% open rate and a 10% click rate on those opens. This means a normal newsletter would show 15,000 unique opens and 1,500 unique clicks. With an even split, we would expect around 500 clicks per variant. But if I only perform my split test on 20% of my total target group, I suddenly expect 3,000 unique opens and 300 unique clicks, with 100 clicks for each variant. A single click is then 1% of each variant’s entire result. With such a high percentage per click, each user’s decision has a great impact on the final result, and minor random actions could appear as big differences between the three variants.
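If you want to play with these figures yourself, here is a minimal sketch of the same arithmetic in Python. The audience size, open rate and click rate are the ones used in the example above; the variable names are just for illustration.

```python
# Sketch of the arithmetic above: how much a single click weighs when
# the split test only reaches a fraction of the target group.

TOTAL_USERS = 100_000
OPEN_RATE = 0.15            # unique opens per recipient
CLICK_TO_OPEN_RATE = 0.10   # unique clicks per unique open
VARIANTS = 3

for test_fraction in (1.0, 0.2):
    recipients = int(TOTAL_USERS * test_fraction)
    opens = int(recipients * OPEN_RATE)
    clicks = int(opens * CLICK_TO_OPEN_RATE)
    clicks_per_variant = clicks // VARIANTS
    weight_of_one_click = 1 / clicks_per_variant  # share of one variant's result
    print(f"{test_fraction:.0%} of list: {clicks_per_variant} clicks per variant, "
          f"one click = {weight_of_one_click:.1%} of that variant's result")
```

Sending to the full list, one click is 0.2% of a variant’s result; sending to 20% of the list, one click is a full 1%.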

The human factor

Always consider the sheer randomness of your end-users. Think of your own private inbox: how often do you open a marketing email simply because you have some time to kill? Random actions by individual users can affect whether your email is opened and clicked. This randomness is hard to quantify but should be considered when designing a split test. Is a 5% improvement for a specific variant really due to the variant, or just a random distribution of clicks?
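One common way to sanity-check this is a standard two-proportion z-test. The sketch below uses only Python’s standard library; the click counts and sample sizes are hypothetical, chosen to mirror a roughly 5% relative improvement.

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference in click rates (normal approximation)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical test: 1,000 opens per variant, variant B looks ~5% better than A.
p_a, p_b, p_value = two_proportion_z_test(clicks_a=100, n_a=1_000, clicks_b=105, n_b=1_000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p_value:.2f}")  # p ≈ 0.71: easily just chance
```

With samples this small, a 5% relative lift is nowhere near statistically convincing; a much larger audience, or a much bigger lift, would be needed before the difference can be trusted.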

Should you even test?

Most split tests are run to see whether the content or layout of a campaign can be improved. But the actual improvement could be something other than clicks and sales. Maybe it is a better workflow for your team. Or a better implementation of your brand. Or simply the introduction of your brand’s new logo and colors. Remember, not all changes to the content of your campaigns need to be tested if there is a clear reason for changing the content in the first place.

Want to get started?

If you want to make smarter campaigns and test your way to the best content for your brand, contact Agillic today. With the Agillic platform, you can run split tests and subject line tests, and easily implement different types of content in your personalized campaigns.