Great email marketing is not about having the BEST idea, it’s about the rapid & consistent execution of good ideas. World-class email programs are not built overnight; they require a commitment to constant innovation by experimenting, learning & adjusting. Testing ensures we take a data-led approach to optimization: the fastest route to increased engagement and conversions.
This post provides a basic overview of testing methodologies, best practices and tips for email campaign optimization. It should help you get started running effective campaign tests, including what to take into consideration along the way.
Below are 5 easy steps to execute an effective test. Have a test in mind already & just want to make sure it will be effective? Skip to the “Is This a Good Test?” checklist below for some general questions to ask yourself.
What are you hoping to learn from this experiment? This should go beyond “I want to learn which version will win,” and instead focus on what does or does not resonate with your subscribers.
If you are testing only to see which of 2 versions of a send will outperform the other, you’re not considering the longevity of your learnings & will likely be asking yourself “now what?” after your test has concluded.
When you are specific with exactly what you hope to learn from your test, you’re able to use your learnings to apply updates to future emails or existing automations to gain quick wins with data-driven optimization.
For general testing purposes, your tests should change a single element. This way, when there is (hopefully) a winner, you can attribute the win to exactly what drove that winning metric.
Many marketers step into the pitfall of testing multiple variables at once, but then are not able to attribute the results to anything actionable. This does not mean that multivariate testing is not possible; it’s just more complex to set up correctly & requires high audience volumes to gain learnings. A great alternative is to set up a series of single-variable tests, continuing to iterate on the winning version each time.
How are you going to decide your winner? The KPIs you choose to measure your test’s success will depend on what is being tested.
Although your primary KPI will determine your test’s success, it’s important to look at secondary metrics as well in order to understand the full story.
For example, when testing a pre-open element, such as subject-line, you’ll want to look at open rate as the main testing KPI, but also look to clickthrough rate & conversion rate in case those who opened the test version were much less likely to take another action.
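To make the primary-vs-secondary comparison concrete, here is a minimal sketch of comparing a subject-line test across all three metrics at once. The function name and all the numbers are hypothetical, chosen to show the exact scenario described above: the test version wins on opens but loses downstream.

```python
# Hypothetical sketch: comparing a primary KPI (open rate) alongside
# secondary metrics for a subject-line test. All numbers are illustrative.

def rates(delivered, opens, clicks, conversions):
    """Return open, clickthrough & conversion rates as fractions of delivered."""
    return {
        "open_rate": opens / delivered,
        "clickthrough_rate": clicks / delivered,
        "conversion_rate": conversions / delivered,
    }

control = rates(delivered=10_000, opens=2_000, clicks=500, conversions=100)
test = rates(delivered=10_000, opens=2_400, clicks=480, conversions=90)

for kpi in control:
    print(f"{kpi}: control {control[kpi]:.1%} vs test {test[kpi]:.1%}")
```

Here the test subject line lifts opens from 20% to 24%, but clickthrough and conversion both dip slightly, which is exactly the full-story check the secondary metrics exist for.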
Before you start running a test, creating your hypothesis really just means clearly stating what change you want to make, why you want to make it, and its expected impact. Defining these details will provide a clear direction on how to gain meaningful insights from your results, hopefully getting you closer to your desired destination: a better understanding of your subscriber.
Your hypothesis should be made up of 3 important components: defining your goal, the expected outcome, and a way to measure success:
“We believe (defined goal) will result in (expected outcome). We will know we have succeeded when (success metric) is achieved.”
Although it’s tempting to check your tests soon after the campaign is deployed, your metrics should accumulate for at least 24 hours prior to calling a winner. You have to give subscribers enough time to interact with your email – not everyone is staring at their inbox 24/7.
Observe how your test version performed in your primary KPI metric compared to the control: Was it a large variance making the results very clear or was the difference minimal? Did enough people engage with the email to be helpful?
Put simply, statistical significance lets you know whether you can trust your test results. If your results don’t reach significance, it can be because your test audience was too small or because the test version simply didn’t make a big enough change in subscriber engagement.
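If you want to check significance yourself rather than rely on your platform’s readout, a standard approach is a two-proportion z-test. The sketch below uses only the Python standard library; the function name and the sample numbers are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    Returns the p-value; a value below 0.05 is commonly treated as
    statistically significant at the 95% confidence level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 5.0% vs 5.8% clickthrough on 8,000 recipients per version:
print(two_proportion_z_test(400, 8000, 464, 8000))
```

With identical rates the p-value is 1.0 (no evidence of a difference); as the gap between the two versions widens, the p-value shrinks toward zero.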
How big does my audience need to be?
Always consider how big your audience sample size needs to be. In order to reach significance, we recommend 400 “conversions” (which could mean opens or clicks, depending on the KPI) per version. This lets you do some quick calculations based on your historical engagement rate average to estimate whether your audience is large enough to produce meaningful results.
For example, if you have a campaign that generally receives around a 5% clickthrough rate & you are testing two different button colors, your audience should include at least 8,000 recipients per version (8,000 x 5% CTR = 400 “conversions”).
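The arithmetic above generalizes to any historical rate. A minimal sketch, assuming the “400 conversions per version” rule of thumb from this post (the function name is hypothetical):

```python
# Quick audience-size estimate from a historical engagement rate,
# following the 400-conversions-per-version rule of thumb described above.

TARGET_CONVERSIONS = 400

def required_audience(historical_rate, target=TARGET_CONVERSIONS):
    """Recipients needed per version for expected conversions to hit the target."""
    return int(round(target / historical_rate))

print(required_audience(0.05))  # 5% CTR -> 8,000 recipients per version
print(required_audience(0.20))  # 20% open rate -> 2,000 recipients per version
```

Run it against whatever KPI you chose in step 2: a click-based test needs the clickthrough average, an open-based test needs the open-rate average.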
Another general rule of thumb: the smaller the audience, the bigger the treatment difference needs to be. If you know you’re going to send to a small audience, then the difference between your two versions should be pretty significant. A tiny tweak in the subject line between two small groups of subscribers will likely never reach significance. If you have a large audience, then you can be a bit more conservative with your treatment variation.
An iterative testing strategy will ensure a much greater likelihood of an impact on your email performance. Test sequentially within one email program or one email type (with a similar audience), allowing the results of each test to drive the next iteration.
Did your test not reach significance? You have a couple of options: either retest with a larger sample size (for automated campaigns, this just means letting the test run for a longer period of time), or move on. Not reaching significance is sometimes a result on its own; it simply means your test version wasn’t a big enough change to shift subscribers’ behavior.
Did you have a winner reach significance? Your test’s winner becomes your next control. When sending your next email of a similar type & audience, apply the winning treatment (or if automated, simply update your control), and prepare for your next test. Keep testing each element with a different treatment until you can no longer beat your control, & then move on to the next test type.
This kind of ongoing testing should result in a perfect–but temporary–formula for every element of your similar campaigns moving forward. Plan to test every program no less than annually to see what has changed & how you can continue to optimize.