Guest Post from Raphael Paulin-Daigle. Raphael is a full stack marketer specializing in conversion optimization. Today, he focuses on increasing the conversions and online sales of e-commerce stores and SaaS companies through his conversion rate optimization services.
A/B testing is used more than ever before. From blogs to startups to corporations, a large portion of online businesses perform split tests to experiment with website changes, hoping to increase conversions.
But what’s one thing that’s as important as improving a website but tested much less frequently?
Email marketing campaigns.
With an average ROI of $38 for every $1 spent, email marketing is one of the most effective channels for businesses to warm up and convert leads.
Does that mean your email marketing campaigns should be tested as much as your website?
You bet!
Email A/B testing is no more complicated than split testing your website.
In this post, I’m going to break down 3 key things you need to know before you take the leap into email split testing.
1. Some things are more important to test than others
Conversion optimization experts have said over and over that elements to A/B test should not be picked at random.
Button colors, fonts, and other minor changes are often easy picks, unfortunately inspired by the countless case studies found on almost any marketing blog. Ill-advised marketers are frequently over-reliant on case studies to decide what they should test, resulting in ineffective tests and mediocre optimization results: something you should avoid.
Whether you’re testing a website or an email campaign, the concepts and processes used to identify elements worth testing remain similar.
In this case, what should you test in your emails?
First, let’s take a look at a concept called the “Pareto Principle”. The Pareto Principle, widely used in A/B testing, consists of finding the 20% of inputs that produce 80% of your results.
That leads to the question: “What are the few elements within your email campaigns that produce the majority of the results?”
Depending on how your email campaigns are set up and executed, your answers will vary, but generally, there are a few things that stay the same:
Deliverability
This one is quite obvious but often overlooked. If your emails have a low deliverability rate, nothing else matters.
I’ve noticed that many people think deliverability is entirely in the hands of the email marketing service provider. That is true to some extent, as some providers will have lower deliverability than others, but there are other factors that are completely in your control, as shown by the elements in green in the following graphic:
Subject line
In April 2015, in order to find out whether clickbait email subjects really produced higher open rates, Return Path analyzed over 9 million messages sent to more than 2 million consumers.
What they found was the opposite of what you’d probably have expected: clickbait subject lines aren’t effective. In fact, subject lines containing words like “secret” or “shocking” had much lower open rates than variations without the clickbait style!
Studies like these, in addition to the fact that subject lines are the first thing a recipient sees of your email, prove that testing your subject lines should be high on your list of priorities.
Call to action
Humans in general need to be given directions. If you’re sending an email to your list promoting a webinar you’re hosting soon, you can’t just tell them about the webinar. If you want them to register, ask them to, and provide a clear path to the registration page.
Too few calls to action can lead to too few of your recipients doing what you want them to do, and the opposite is also true: too many different calls to action will lead to analysis paralysis.
Offer
Let’s say you’re selling online courses: you have one on SEO, another on brochure design, and another on book writing.
These are three totally different offers catering to three different audiences (hence why segmentation, which will be discussed later in this post, is extremely important).
If you send an email about your brochure design course to a novelist interested in book writing, chances are your conversion rate will be very low, as the offer won’t be compelling to the person on the receiving end.
Here’s another example: you’re launching a course about a particular aspect of SEO, such as on-page optimization. If you didn’t validate beforehand that your list was interested in that particular topic, your launch emails might flop. After all, maybe your subscribers were interested in a completely different aspect of SEO, such as link building, or maybe your price point wasn’t right for them.
Testing your email offers will give you greater clarity on what your audience really wants, along with insights into how to effectively sell your products or services to them; insights you can leverage to make your future emails much more effective.
Body copy
Your copy is everything. Without it, you have no email.
There are many different types of copywriting, and some styles will undoubtedly be more popular with certain audiences than others.
Cheesy copy or serious copy?
Long copy or short copy?
Definitely worth testing.
Personalization
A study by Experian Marketing found that personalized emails generate an average of 6x higher conversion rates; however, only 30% of brands leverage personalization. Are you one of them?
Personalization can manifest itself in many ways. You could try a personalized subject line, open the email with the recipient’s name, or go even further with completely personalized content based on the recipient’s interests, previous purchases, or actions.
Don’t just limit yourself to mentioning the person or company’s name. Constantly test different possibilities and you might find some golden nuggets.
Layout, Format & Images
In a recent study by HubSpot on the performance of HTML vs. plain-text emails, it was found that in nearly every A/B test conducted, simpler plain-text emails won with statistical significance.
In addition, emails with even as few as one image saw reduced click-through and open rates.
Does that mean you should stop adding images to your email or turn your pretty HTML templates into plain text?
Not necessarily. Although the HubSpot study found that a simple email with good copy typically performs better than the pretty, visual kind, it’s safe to say the finding is a generalization and that there are always exceptions.
The only way to know whether your audience and emails are an exception to these findings is to test them, so what are you waiting for?
Do you know where to start?
Now that you know which elements you should focus on and why, how do you know which one to start with?
Honestly, there’s no absolute answer to that question. If your deliverability rate is below average, you should probably look into what’s causing the problem and experiment with a few changes.
If, on the other hand, the majority of your emails are getting delivered and opened but no one ever clicks your links, then the call to action, copy, or offer should be higher on your list of things to test.
But here’s the deal:
Before even reaching that stage, you should know your WHY. Why are you testing? What are you trying to achieve? Of course, we can all say we’re testing because we want better, more effective email marketing campaigns, but that won’t get us anywhere.
A better WHY would be something like: “I’m testing my email campaigns because I have noticed a certain segment of my list places orders of higher value than the majority of other subscribers. I want to know what in my emails is making that segment behave that way in order to double down on it, or to learn from it and be able to adapt my emails to my other segments to increase their order value too.”
You see that? This “why” statement not only focuses on making emails better, but it aims to improve a KPI (in this case, order value) important to the business. It also comes from a previous observation that a certain segment placed higher value orders.
The reasoning behind that WHY is founded on the company’s KPIs, goals, success metrics, and previous analysis.
If you know what you’re really trying to achieve with your email marketing campaigns, and have clearly defined the metrics that are important to your company, then choosing something to test within that 20% should be much easier.
2. Segmentation will give you better results
There are two ways to A/B test your email campaigns: One is to test using your whole list, the other is to test segments.
Testing emails sent to your whole list requires much less effort, as you only have one audience to focus on. It will also give you a broad view of what works best across your entire audience.
But let’s face it, if you have a large list and want to go further than just the broad improvements, segmentation is probably your best bet.
Not sure what segmentation is? Here’s Wikipedia’s excellent definition of the term:
“Market segmentation is a marketing strategy which involves dividing a broad target market into subsets of consumers, businesses, or countries who have, or are perceived to have, common needs, interests, and priorities, and then designing and implementing strategies to target them.”
Now, segmentation is not strictly a way to make your tests better; it’s an email marketing best practice that will increase the effectiveness of your campaigns in general. It’s when combined with testing that you might see unprecedented improvements. The reason segmentation works is far from witchcraft: it’s simply that the people on your email list aren’t all the same.
Some people will be on your list because of a free ebook download you offered, others because they purchased a product or service from you, and maybe some are just interested in receiving blog updates.
On top of that, they are probably located all around the world…
So what does this mean?
It means that in your bucket of email addresses, you have many people with different goals, behaviors, purchase intents, and expectations, causing your messages to perform differently for each subset of subscribers.
Remember earlier in the post when I said personalization in email marketing was found to be responsible for 6x higher conversion rates? That’s why you segment: to provide content that is more personalized than what you’d send to everyone.
Now, back to A/B testing, there are many reasons why paying attention to segmentation is important for your experiments.
First, let’s say you decide to treat your whole list as one big group and test the email campaigns you send them. Because their goals and demographics differ, a few things could slow down your optimization efforts and minimize their potential impact.
For example, let’s assume you sell electronics and send a promotional email to your list. The email you send has the following subject line: “PC accessories on sale today”. However, 60% of your audience use Macs…
This could lead to your open rate being much lower among that 60% of Mac users, compared to the 40% of your list that uses a PC.
If you’re A/B testing your subject line, your data could even be skewed, i.e., not showing the full potential of the campaign. Here’s why: the email really only applies to one segment of your audience (PC users), but since you also sent it to a large percentage who aren’t PC users, your open rate is averaged across everyone.
In other words, had you sent the email to a segment of PC users only, your open rate would have been much higher because the email would have been relevant to everyone who received it.
Now think about that on an even broader level. If different segments of your audience respond differently to a subject line, then your calls to action, body copy, and whole funnel, when tested, will also give you averages that hide the true improvements.
An open rate or click-through rate could have decreased for Segment #1 but increased sharply for Segment #2. The improvements in Segment #2 might even have brought you more sales and increased your revenue. Yet, with no segments, your email marketing tool will calculate an average and might tell you no improvement has taken place.
Don’t just take my word for it: as shown in the graphic below, research by MailChimp found that segmented email lists increased opens and click-throughs while decreasing unsubscribes in nearly every case.
In short, segmenting your list and performing tests on each segment will reveal the true impact of your changes, in addition to creating a clearer path toward optimizing your funnel.
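To make the averaging problem concrete, here’s a toy calculation with entirely made-up numbers (the segment names and counts are illustrative, not from any study): a variation that dramatically improves opens for one segment can look like no improvement at all when the whole list is pooled.

```python
# Hypothetical list: 400 PC users and 600 Mac users, each shown
# subject line A or B. All numbers are invented for illustration.
segments = {
    "pc_users":  {"size": 400, "opens_a": 120, "opens_b": 200},
    "mac_users": {"size": 600, "opens_a": 180, "opens_b": 100},
}

# Per-segment open rates: B jumps from 30% to 50% for PC users.
for name, s in segments.items():
    rate_a = s["opens_a"] / s["size"]
    rate_b = s["opens_b"] / s["size"]
    print(f"{name}: A={rate_a:.1%}  B={rate_b:.1%}")

# Pooled open rates: both come out to 30%, masking the PC-segment win.
total = sum(s["size"] for s in segments.values())
pooled_a = sum(s["opens_a"] for s in segments.values()) / total
pooled_b = sum(s["opens_b"] for s in segments.values()) / total
print(f"pooled:   A={pooled_a:.1%}  B={pooled_b:.1%}")
```

Run segment by segment, the data shows a clear winner for PC users; run as one big list, the tool would report a tie.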
3. Never Stop Measuring
How can you expect to improve if you don’t know how your marketing efforts are performing and affecting your business?
Before even starting to A/B test, you should have determined your KPIs, success metrics and important goals you’re trying to achieve. These are things you simply cannot forget to keep track of during your testing efforts.
A/B testing as a concept is fairly straightforward, but there are a few things people forget to measure along the way. Here’s a scenario that happens regularly:
A SaaS company decides to redesign its homepage: the control (original) version is tested against the new version, Variation 1. For the test, the company sets the click-through rate of the buttons that bring visitors to the next step as the conversion goal.
When the test is over, they notice the click rate on these buttons is higher, but that the total number of paid signups decreased.
Here’s the problem: the company assumed that an increase in clicks on the buttons leading to the next step in the funnel automatically qualified as an improvement; however, they failed to keep track of how the changes would affect other stages of the funnel. In this case, clicks increased, but the new homepage led to less-qualified visitors clicking through.
The exact same principle applies to email A/B testing. Even though your emails might only serve as the start of your funnel, a change to them can alter how people behave in its other stages; therefore, it is absolutely vital that you track how one change can affect your other goals and KPIs.
Brian Massey of Conversion Sciences recommends testing for total conversions instead of relying on click-through rate or open rate:
My advice for email marketing is, “Test to conversion. Don’t rely on click-through-rate or open rate.”
We did a split test for a firm selling a web-connected thermostat, like the Nest, only not nearly as cool. We were testing subject lines. We sent four identical emails with four identical landing pages. The only thing we changed was the subject line. The two winning subject lines had similar open rates and identical click-through rates. However, one of these two subject lines generated 24% more conversions — purchases of the thermostat — than the other.
Had we relied on the CTR, we would have had a 50/50 shot at picking the right subject line.
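Once you track conversions per variation, you can check whether a difference like the 24% lift described above is statistically meaningful. Here’s a minimal sketch using a standard two-proportion z-test; the subscriber and purchase counts are hypothetical, chosen only to mirror the “identical clicks, different conversions” situation.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates.
    |z| > 1.96 is roughly significant at the 95% level."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: each subject line delivered 5,000 clicks to the same
# landing page; B produced 24% more purchases than A (620 vs 500).
z = two_proportion_z(conv_a=500, n_a=5000, conv_b=620, n_b=5000)
print(f"z = {z:.2f}")
```

With these sample sizes the z-statistic clears the 1.96 threshold comfortably; with only a few hundred clicks per variation, the same 24% lift might not, which is why measuring all the way to conversion (and collecting enough data) matters.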
Don’t forget that when tracking and measuring your A/B tests, your email marketing service provider won’t be able to give you all the numbers you need to determine which variation performed better.
As shown in the ActiveCampaign split-testing dashboard above, basic metrics such as click-through rate, unsubscribes, and open rates are, in most cases, the majority of the metrics you’ll be able to see. For that reason, you’ll need to make sure you integrate your email campaigns with your Google Analytics account.
Having your email campaigns integrated into your analytics will enable you to segment your data and see which email variation leads to the best on-site performance, and which ones actually make you more money.
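The simplest way to make that integration work is to tag every link in each variation with Google Analytics UTM campaign parameters, so on-site conversions can be attributed back to the exact email variation. A rough sketch (the parameter values and naming scheme here are illustrative; use your own):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_link(url, campaign, variation):
    """Append UTM parameters to a link so Google Analytics can
    attribute on-site behavior to a specific email variation."""
    parts = urlsplit(url)
    utm = urlencode({
        "utm_source": "newsletter",      # where the traffic comes from
        "utm_medium": "email",           # the channel
        "utm_campaign": campaign,        # which campaign/send
        "utm_content": variation,        # distinguishes A/B variations
    })
    query = f"{parts.query}&{utm}" if parts.query else utm
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       query, parts.fragment))

link = tag_link("https://example.com/webinar", "spring-launch", "subject-b")
print(link)
```

Give each variation its own `utm_content` value and Google Analytics will let you compare goal completions, revenue, and behavior per variation, not just clicks.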
From knowing the metrics that matter to your business to keeping a close eye on your tests and numbers before, during, and after your optimization efforts, it might sound like a lot of work…
The truth is that yes, it’s indeed a lot of work and you’ll make mistakes along the way. But now that you have a better idea of how to measure your experiments, your next tests already have a much higher chance of success.
The Bottom Line
Email marketing is without a doubt one of marketers’ preferred channels for getting messages across, but with recipients becoming used to being bombarded with promotions and, let’s admit it, spam, more work than ever is required on our side to achieve the best results.
Simply sending an email is not enough. Relevance and engagement are key for emails to be read. Segmentation for greater personalization is now a need, not a “nice to have”.
For your emails to keep being effective and get good returns on your marketing dollars, A/B testing your email marketing campaigns is a skill to master.
About the Author
Raphael Paulin-Daigle is a full stack marketer specializing in conversion optimization. Today, he focuses on increasing the conversions and online sales of e-commerce stores and SaaS companies through his conversion rate optimization services.
Entrepreneurs and marketers looking to learn more about how conversion optimization works and how it can be applied to their business can also grab his free mini-course here.
The post A/B Testing Your Email Marketing appeared first on Sujan Patel.