Now that we've defined the variations, let's look at how they differ in implementation: the advantages and shortcomings of each, and how best to use them.
Key considerations to identify the right test for you:

A/B testing:
- The best-performing version gets sent to the majority of contacts
- Can test multiple changes in one test
- Requires a large-volume segment to be accurate
- Needs a wait time between the initial test and sending the winning option
- Can be more complex to set up and report on

Split testing:
- Viable for smaller segments
- No winner wait-step
- Usually limited to two variations
- Half the audience receives the less compelling version
- Easier to both set up and report on
These are the key differences; however, there are always more factors to account for before proceeding with either test.
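The trade-offs above can be sketched as a simple decision heuristic. This is only an illustration of the reasoning in this article, not a formal rule – the `MIN_AB_SEGMENT` threshold is an assumption drawn from the sample-size guidance below, and real decisions should weigh all the factors listed.

```python
# Illustrative heuristic only -- the threshold is an assumption, not a rule.
def recommend_test(segment_size: int, time_sensitive: bool) -> str:
    """Suggest a test type based on audience size and deadline pressure."""
    MIN_AB_SEGMENT = 10_000  # rough minimum list size for a reliable A/B test
    if time_sensitive or segment_size < MIN_AB_SEGMENT:
        return "split test"  # no winner wait-step, works on smaller lists
    return "A/B test"        # test on a sample, send the winner to the rest

print(recommend_test(50_000, time_sensitive=False))  # A/B test
print(recommend_test(3_000, time_sensitive=True))    # split test
```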
Remember, A/B tests require a larger data segment to test against accurately. Aim to use around 10% of your data segment in the test stage; this should be a minimum of 1,000 contacts (from a total list of at least 10,000), and the test should run for at least 24 hours.
Having more contacts and leaving the test running for a longer period will always add to the value of an A/B test – one of the very few times in marketing that bigger is better!
Let’s look at a few examples. In the first, the main metric for an email is ‘opens’ – so the two main elements to test are the subject line and preview text. Let’s test two subject lines: one with a professional tone, and one with a more informal, friendly tone, e.g. “Increase ROI with A/B Tests” versus “Let’s talk about A/B tests and boost your ROI!”
In this instance, either test would work. However, with a large enough data segment, an A/B test would likely increase the open rate and give you more insight into your customer base. It’s not a time-sensitive email, and a rolling test plan would continue to increase returns – i.e. keep A/B testing over the next six months of emails and really drill down on what works with your customer base.
Another example is a landing page with a registration form for an upcoming webinar. We want to test the page layout, with the CTA at the top versus the bottom of the page. Note that this is a time-sensitive page (for this example, we’re not taking on-demand viewing into account).
For the landing page, we would use a split test. Given the timeframe and the fact that a webinar usually draws a smaller data segment, there would be no benefit to an A/B test. You can still carry the findings from the split test forward to improve future pages.
Remember – using A/B and split testing does not mean immediate growth and increased ROI. Rather, you must spend time assessing the results of each test and making real, actionable changes to your content to drive the most visible increases.
So, does testing work?
In most instances, introducing A/B and split testing correctly creates an invaluable source of information and can positively impact conversion and growth. For example, HubSpot gathered 131 additional leads thanks to a small 0.53% increase in open rate, achieved through an A/B experiment. While the percentage increase may sound insignificant, 131 new leads from one test is nothing to be sniffed at!
Examples show that successful A/B testing can bring a 50% increase in average revenue per unique visitor for ecommerce sites, and Bing improved its annual revenue per search by 10–25% through A/B testing. Increasing revenue through well-conducted A/B testing is a definite win for marketers!
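Before celebrating a small lift like the 0.53% above, it is worth checking whether it could be noise. A minimal sketch of a standard two-proportion z-test, using only the standard library – the open counts below are invented for illustration and are not HubSpot's actual figures:

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is variant B's open rate significantly
    higher than A's? Returns (z score, one-sided p-value)."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)  # pooled open rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 0.5 * (1 - math.erf(z / math.sqrt(2)))  # upper-tail probability
    return z, p_value

# Hypothetical numbers: a 0.53-point lift (20.00% -> 20.53%) on 10,000 sends each
z, p = two_proportion_z(opens_a=2000, sent_a=10_000, opens_b=2053, sent_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the lift is not statistically significant at conventional thresholds – a concrete reminder of why the sample-size guidance earlier in the article matters.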
Start your testing journey!
Hopefully, you now feel confident to start designing your own A/B and split tests – and to start delivering improved conversions and revenue. Remember, testing means your contacts are effectively filling out a questionnaire telling you what they like, without you having to write one. Once you start implementing tests, make sure you constantly review the results, as different audience groups may behave differently – and we hope you see a discernible increase in your conversions.
Josh Tyler is a Marketing Technology Specialist at WoolfHodson. You can learn more about him here. If you would like to find out more about how our team can enhance your marketing performance and improve business operations, contact us today at firstname.lastname@example.org.