Testing – The marketer’s holy grail.
Regardless of the industry we operate in, the one thing common to every business venture is this – testing. Businesses live and die on the decisions they make, and the most informed decisions are made using what we learn through testing. If you have read books like The Lean Startup, the importance of testing should come as no surprise.
No one truly understands their customers or visitors completely. It doesn't matter how smart or how great a marketer you think you are – we are all figuring it out one bit at a time, refining our processes and systems as we go along. Testing gives us concrete insights into what content and marketing appeal to our target audience.
In the past, this may have been an arduous process spanning generations – with one craftsman or business owner passing the baton on to the next generation to continue the process of improvement. But now, with the aid of technology and new methods, we are able to draw upon massive amounts of data quickly to make the right business decisions. We have the platforms and tools to accelerate the testing and improvement process. This helps us iterate through the creative process faster and in greater detail than any other period in human history.
What is a Split Test?
Say we have an experiment we want to conduct on a marketing asset. It could be a new product, a new funnel, or a new pricing strategy. But what exactly should we change? And would we want to risk it all by exposing everyone to this new, untested change?
When marketers and businesses design their campaigns and marketing assets, some end up making decisions based on gut feel and instinct. It can work, but it is risky and inconsistent. This is where a Split Test or A/B Test comes in.
These 2 terms are thrown around in the industry quite interchangeably, but Split Testing and A/B Testing are different tests that utilize similar concepts.
Split Testing is the process of comparing different versions of your marketing asset, and measuring which one performs better. Usually, we split test campaigns or websites with distinctly different designs or features.
A/B Testing is the process of comparing 2 different versions, where only 1 variable is changed.
- Red button vs Green Button
- Dark color scheme vs light color scheme
- 30% OFF vs 50% OFF.
In both Split Testing and A/B Testing, the traffic reaching your pages is randomly split over the few versions that you are testing. We let one group of visitors see one version and the other group see the other version. Our split-testing tools then collect data, which we then analyze to identify which one does better – the old or the new.
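Under the hood, the random split can be as simple as hashing each visitor's ID into a bucket. Here is a minimal sketch in Python – the function name and 50/50 bucketing are illustrative assumptions, not any particular tool's implementation:

```python
import hashlib

def assign_version(visitor_id: str, versions=("control", "variant")) -> str:
    # Hash the visitor ID so the assignment is random-looking but stable:
    # a returning visitor always lands in the same bucket.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(versions)
    return versions[bucket]

# Every call with the same ID returns the same version.
print(assign_version("visitor-42"))
```

Hashing instead of rolling a fresh random number on every visit is a common design choice: it keeps the experience consistent for returning visitors without storing any state.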
After measuring which page does better (in conversions, clicks, shares, etc.), we can dig into why it succeeded. This helps you understand what works with your audience.
When it comes to A/B Testing, you can drill down to see which exact changes contributed to the difference in performance. This gives you granularity and the insights you gain from this will help immensely when making future decisions.
But when you want to test something massively different, A/B Testing one improvement at a time can be rather slow. So some marketers run Split Tests to figure out which design works better in general. After deciding on a certain design, they would then use A/B Testing to fine-tune the details – copywriting, Call To Action, images etc. In other words, Split testing is the blunt tool, and A/B Testing helps you make smaller adjustments.
To avoid splitting hairs over the terminology and naming differences, we will refer to both tests under the umbrella term “Split Tests” moving forward. The important concept is running the same traffic to different versions to compare them.
In any test, we have a Control and a Variant.
The Control is the page, email, design, ad creative etc. that you are currently using. This is what you are comparing your new creation against.
The Variant, also called the Challenger, is what you are introducing to your system. You do not yet have enough data on how it will perform – that's why you are conducting the test.
The most basic form of testing is this – the 50/50 test. And it is as simple as you think.
In a 50/50 split test, you run 50% of your traffic to the Control and 50% to your Variant.
Running half of your traffic to each side makes comparing things easier. We can just compare the results directly, regardless of the success metric being conversions, engagements etc.
Most tools also give you the option to run different percentages of your traffic to each version, e.g. 20% to the Variant and 80% to your Control. Sending less traffic to the untested Variant lowers the risk you expose your business to.
Some tools also allow splits among 1 Control and multiple Variants. With these setups you may have to run the test for longer before seeing conclusive results.
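An uneven split just means the assignment is weighted. A quick sketch using Python's standard library – the 80/20 weights and version names are illustrative:

```python
import random

def assign_weighted(weights):
    """Pick a version according to its traffic share, e.g. 80/20."""
    versions = list(weights)
    return random.choices(versions, weights=list(weights.values()))[0]

split = {"control": 0.8, "variant": 0.2}
print(assign_weighted(split))
```

Over many visitors, roughly 80% will land on the Control and 20% on the Variant.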
What to Test?
There are countless things you can test, but a few elements affect conversions more than others. So here's a list of important changes you can test to see significant improvements. While we list several things you can change, we recommend changing just one element at a time – running too many tests at once can muddy the data you receive.
Headlines and Subject Lines
When it comes to marketing, the subject line or headline of your marketing material is the first thing people see. If it’s a bad subject line, people won’t even give you the time of day. They’ll just scroll on, delete your email, or even report you for spam.
Be impactful with your first impression.
Spark curiosity with your introduction and try your best to understand what pain points your audience has. Your headline should provide an answer or trigger your audience to want to learn more. When it comes to emails, your open rate is directly influenced by your subject line, which is why we can see a whole spectrum of open rates throughout the industry, with some brands seeing over 30% open rates, and poorly performing Solo Ads vendors seeing under 1% open rates.
Images and Ad Creatives
Ad creatives make a massive difference in your conversions. They determine the type and quality of the traffic reaching your offers. Even if you are using stock images from Unsplash or Shutterstock, test your image selection. Use them purposefully to draw attention to important buttons or a Call-To-Action. If your creatives are not in line with your product or offer, conversion rates will suffer.
That’s why it is important to test your images, videos, infographics, and everything else you put on your ads or landing pages. Facebook supports A/B testing natively, so do make use of this feature when running Facebook ads.
Call-To-Action (CTA)
Your CTA can be that one last push a customer needs to buy, or to sign up for a test drive. It directs your visitors or email readers towards what you want to achieve – a sale, a subscription, etc.
Your marketing material needs a clear CTA with powerful anchor text, and a great amount of value that your visitors can access if the desired action is taken. Every aspect of your button, from its color and anchor text to its size and shape, makes a difference. If there is one thing you must test, it should be your CTA.
Copywriting and Content
Copywriting is another major influence on conversion rates. People want to know what they are paying for. And different audiences respond to different depths of content and levels of persuasion. It depends on the industry that you are in and the audience that you are appealing to.
For example, B2B website visitors specifically seek informative content, as business decisions need to be made with enough information about your services. Still, some business pages perform better with clear, concise overviews, while others see higher conversions when they establish themselves as an authority with deep, detailed writeups.
E-commerce sites, on the other hand, can have short product descriptions for bite-sized and interesting content that hypes up the product. These product pages can benefit from more images, stylized fonts and more paragraph spacing to make descriptions easier to read. But other products like health supplements may require more detail to allay customers’ doubts and answer the questions they may have about the supplement facts.
At the end of the day, this is dependent on your target audience, and everything should be backed by test data – be it long-form articles or short-form paragraphs, or the use of bullet points and emojis. Do not underestimate the impact simple things can have on your conversion rates.
Reviews and Social Proof
“87% of buying decisions begin with research conducted online before the purchase is made.” – CXL
Social proof can make your conversion rates soar. It not only feeds the Fear Of Missing Out (FOMO), it also builds trust in your brand. But it must be done right.
If your reviews have no photos, are all 5-star with repetitive text, and look fake in general, they can instead hurt your credibility. So test your testimonials. Run tests to see whether video testimonials, photo proof, or quotes work better. For B2B services, showing that you have been featured by media partners, or displaying your partners’ logos, can also benefit you.
Loox allows businesses to receive reviews automatically and it also helps to display them on the product page. This really helps to increase conversion rates, and we have personally seen conversion rates double from the simple addition of Loox.
To sign up for Loox and enjoy an extended 30-day free trial, click the link below!
Landing Page Design
Your landing pages are the first pages your visitors see after clicking on an ad or searching for you on Google. This means they have to retain these visitors and pass them along to your conversion step. If your landing page cannot convert, or takes too many clicks for a visitor to finally reach checkout, you will lose potential sales.
There are a few tools out there that provide heat maps and visitor recordings. Heat maps show hot-spots where your visitors tend to click on your landing pages. You can also see where your visitors scroll down to before closing the window, or even watch recordings of your visitors in real time.
Creepy? I know. But this is incredibly useful when deciding which button or banner to change for your test. You can pair this with the data that you get from Google Analytics like page time and where your drop-offs occur. This will help you narrow down which page to test and which elements to change.
Don’t test what you want to test. Test what people are already looking at.
Your A/B Test
Now that we know the common elements to test, we can move on to how to actually start testing. These tests aren’t too complicated. You just need a plan, and all your tools handy to gather meaningful data.
So let’s start with your strategy.
Step 1: Decide on Your Hypothesis and Goals
Every experiment needs a hypothesis – the one change you are making to your webpage to see how it performs. It is advisable to start with the elephant in the room: test the landing page that has the greatest impact on your business. Use Google Analytics or other tracking software to see what contributes the most to your revenue. Then decide on changing the element that draws the most attention. This is where heatmaps come in again.
After that, it’s important to conduct the test with specific success metrics in mind. For example, in email marketing, your hypothesis may be about how a new subject line affects your open rates. In Search Engine Optimization, you could be testing how different keywords, or long-form versus short-form articles, affect your click-through rates. If you are in e-commerce or affiliate marketing, you are most likely on a Conversion Rate Optimization (CRO) journey, testing changes to your Calls-To-Action or ads.
Focus on one major success metric that you are testing for. Keep tabs on other sources of information to get additional insights, but compartmentalize your data analysis and measure your major success metric by itself. It makes for cleaner analysis after the test, or when deciding whether you should continue the test.
Step 2: Duplicate Your Control and Create Your Variant
Now that you have planned out your test, it is time to execute. Start with duplicating your control and then make the singular change to make that duplicate your variant. If it’s the button, only change the button. Nothing else should be different. This helps you drill down to what exactly works for you.
Step 3: Run Your Test and Analyze The Data
There is an abundance of tools out there that help you run your split tests. Crazy Egg and Visual Website Optimizer (VWO) are great tools that give you clarity on your visitors through heat maps, scroll maps, conversion goal tracking and much more.
But if you already are using ClickFunnels, or a Content Management System like HubSpot or GetResponse’s page builder, then you already have access to in-built split testing features.
Just need bare-bones features without all the special tools?
Then Google Optimize is freely available to help you run things lean. Google Optimize is Google’s free A/B testing tool. It doesn’t offer as much as the other tools listed above, but if you only wish to run a simple test, you can just link your Google Analytics account to Google Optimize and create an experiment following the instructions Google provides.
Let your test run for a while, and revisit the data regularly to see if you can draw any conclusions. Some guides prescribe a specific number of days to run your tests, but we recommend letting your tests run until you reach statistical significance – where the evidence shows a clear winner. That is just a fancy way of saying you should let it run until you can tell that one version is consistently performing better.
If your tests are too short, there will not be enough data to prove your hypothesis right or wrong.
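"Statistical significance" sounds intimidating, but for conversion rates it boils down to a standard two-proportion z-test, which you can compute with nothing but Python's math module. This is a generic sketch, not any specific tool's method, and the 0.05 cutoff is just the common convention:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion counts; return (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 80 vs 120 conversions on 2,000 recipients each
z, p = two_proportion_z_test(80, 2000, 120, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 would suggest a real winner
```

The more data you collect, the smaller a difference the test can confirm – which is exactly why tests that are cut short rarely reach significance.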
Step 4: Rinse and Repeat
What we learn from our split tests sets the baseline for further improvements. As you keep learning what your audience responds best to, you will keep spotting improvements that can further optimize things. Then you test those improvements and learn more. This is an ongoing process of growth that helps businesses keep scaling 10X or even 100X in the long run.
Advanced Split Testing
But what if we have multiple variants?
Can we target different visitors?
Let’s get into the fun stuff!
Multivariate testing is similar to the usual Split Test, but it tests more than 2 things concurrently. Since more than 1 variable is changing now, you need more than just 1 Control and 1 Variant.
We usually recommend running 1 test on 1 variable because it keeps things cleaner and makes insights easier to draw out. But when it comes to Facebook ads, we often see multivariate testing on multiple levels. Typically, different audiences and interests are tested at the ad set level, and different ad creatives and the like are tested at the ad level.
When running multivariate tests, you have to create variants for every possible combination of the changed variables. To illustrate, if you are testing 2 variables, A and B, each with 2 variations, you would have to create versions for:
- A1 + B1
- A1 + B2
- A2 + B1
- A2 + B2
That is 4 different versions from 2 variables with 2 variations each. A quick formula you can use is this:
Number of versions = [variations of variable 1] x [variations of variable 2] x … x [variations of variable n]
If you are running 3 different types of Facebook ads to 2 different interest groups, you would have a combination of 2 x 3 = 6 versions.
If you are testing 2 email subject lines, 3 images and 2 Call-To-Action buttons, you would have 2 x 3 x 2 = 12 versions.
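The combinations from the formula above can be enumerated directly. A small sketch using the email example – the subject lines, images, and button labels are placeholders:

```python
from itertools import product

subject_lines = ["Subject A", "Subject B"]
images = ["Image 1", "Image 2", "Image 3"]
buttons = ["Buy Now", "Get 30% OFF"]

# Every version is one combination of the three variables.
versions = list(product(subject_lines, images, buttons))
print(len(versions))  # 2 x 3 x 2 = 12 versions
```

Listing the combinations explicitly like this is also a handy sanity check before you build a dozen email variants by hand.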
Segmentation and Targeting
A further development of this is to segment your audience and track performance based on each segment’s response. Each audience segment can have its own winning version: while there may be an overall winning variant, another variant may perform better within a smaller segment. So instead of having just 1 winner as in traditional testing, you can have multiple winning variants.
For example, your entire audience may have higher response rates to Email A. But after running a test that tracks specific segments in your audience, you may find that Email A performed especially well for women, but when it comes to men, Email B out-performed Email A. The strategy now shifts from sending just Email A to your entire list to sending 2 emails. Email A will go to women and Email B will be sent to men.
Do note that if you are conducting a segmented test, the test has to be designed to split and track the traffic to each segment from the start. It is more complex as you are running a multivariate test with the potential of multiple winners. But if done correctly, you can optimize your marketing and run targeted campaigns that convert well.
If you are confident in segmented tests, you can even run tests with different combinations of customer attributes like gender and interests at the same time. It can reveal more, and this is dependent on how confident you are with categorizing and drawing conclusive insights from data that may not be as clean as simple A/B tests.
Example time. Let’s run through what we have learnt with an example, adding a small tweak to the scenario near the end. This tweak will illustrate how measuring different metrics when running tests can teach you a lot about your audience.
A business wants to run a promotion and send an email campaign to its list of 20,000 subscribers. To test things out, it first sends 2 versions of the email to 4,000 subscribers. The 2 versions have different buttons; everything else, like the design, sending time, and subject line, stays the same to keep to our rule of only 1 change for A/B Testing.
Now if the business runs a 50/50 test, 2,000 of the subscribers involved in the test will receive the Control version, while the other 2,000 will receive the Variant.
- Control – Get 30% OFF now!
- Variant – Hurry! Offer ends in 3 days!
At the end of the test, they see that the Control gets 75 sales, while the Variant gets 100 sales.
At first glance, the Variant is the winner – its conversion rate was higher at 5% compared to the Control’s 3.75%. Maybe it was due to the element of urgency that was used in the Variant Call-To-Action.
But if we take a closer look and measure the Click-Through-Rates of the buttons, the Control may have performed better on that metric. More people clicked the button because they saw the massive discount – but perhaps they didn’t buy because they lacked that sense of urgency.
If the business only looked at conversion rates, it might have missed the chance to increase the number of clicks its emails get. Here, it could use the Control button but add an urgency countdown on the offer page. After running a confirmation test, it could see even higher sales.
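To make the numbers concrete, here is the arithmetic from the example. The sales figures come from the scenario above, but the click counts are invented purely for illustration – the example itself only gives sales:

```python
def rate(events, recipients):
    """Rate as a percentage of recipients."""
    return events / recipients * 100

# Sales figures from the example above
control_cr = rate(75, 2000)    # 3.75% conversion rate
variant_cr = rate(100, 2000)   # 5.0% conversion rate

# Hypothetical click counts, to show the CTR picture flipping
control_ctr = rate(400, 2000)  # 20% click-through rate
variant_ctr = rate(260, 2000)  # 13% click-through rate

print(control_cr, variant_cr, control_ctr, variant_ctr)
```

In this (partly hypothetical) picture the Variant wins on conversion rate while the Control wins on click-through rate – two metrics, two different stories about the same emails.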
This is why, while it’s good to define the one success metric you are testing for, you should still keep tabs on other important metrics. Each of your controls, variants, and marketing assets feeds you new data about your audience, and tracking data points beyond your primary goal will expose fresh insights into your customers’ journey through your platforms.
Lastly, never forget to keep testing. Continue running tests and reviewing the results of each test. The journey of optimizing your systems and designs is one of continuous refinement, so keep learning more about your audiences and keep iterating.
So should we keep testing?
Red Pill | Blue Pill
Test and find out what works for you!