
Two Views That Undermine a Healthy Email Testing Strategy

Andrew Kordek

I’ve been working in email marketing for many years now, on both the client and agency sides. In both roles, I’ve gone head-to-head with two attitudes that can be destructive to a healthy email testing philosophy:

  1. “We’ve tried that before and it didn’t work.”
  2. “We’ve always done it that way.”

Both reflect a mindset that can undermine the growth and optimization of any email program.

“We’ve tried that before and it didn’t work.”

Yes, I’m an advocate for learning from your mistakes and not repeating them. That said, times change, and strategies and tactics that didn’t work in the past may prove fruitful in the present.

When faced with this attitude, start by clarifying what was done in the past and how it was measured:

  • How do you quantify/qualify what “it didn’t work” means?
  • What did you test?
  • What was your goal?
  • What did you measure and how?
  • Did you ever retest?
  • Where is the data that shows all of this?

Sometimes a test doesn’t work because we don’t define our terms properly or optimize toward our ultimate goal (e.g., engagement or revenue). Sometimes the failure lies in the gap between what we want to measure and our ability to measure it at the time; greater sophistication in technology and tools over time merits a mulligan. All of these factors, along with perpetual shifts in audience behaviors and lifecycles, can contribute to the failure of a test that, in a different context, might yield new, more profitable findings.
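
If the raw numbers from that old test still exist, you can revisit them with basic statistics. Below is a minimal sketch in plain Python (the send counts and click rates are made up for illustration) of a standard two-proportion z-test. Note how the same 0.6-point lift in click rate looks like noise at 1,000 recipients per variant but is unmistakable at 20,000: one very concrete way a past “failure” can simply be an underpowered test.

    import math

    def click_rate_p_value(clicks_a, sends_a, clicks_b, sends_b):
        # Two-sided p-value for the gap between two click rates
        # (a standard two-proportion z-test).
        rate_a = clicks_a / sends_a
        rate_b = clicks_b / sends_b
        pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        z = (rate_b - rate_a) / se
        return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - normal CDF of |z|)

    # Made-up numbers: a 2.0% vs. 2.6% click rate, at two list sizes
    print(click_rate_p_value(20, 1000, 26, 1000))      # ~0.37 -- inconclusive
    print(click_rate_p_value(400, 20000, 520, 20000))  # ~0.00006 -- a real lift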

Just because you tried something 12 to 18 months ago doesn’t mean it wouldn’t work now. Unless you have conclusive data showing whether it worked, keep testing different email strategies. If this “tried it before” attitude persists, the window of things you can test and learn from keeps narrowing.

“We’ve always done it that way.”

People love to talk about change, and even recognize that it’s necessary to break a program’s stagnation. Yet when it comes to actually making changes, the very people who challenge you to improve the numbers are sometimes the most hesitant to change.

Also, when you inherit or take over a program, certain “insights” get passed down along with it (e.g., always send on Thursday at 6 a.m., or hide the unsubscribe link at the bottom). Amid the chaos of everything else that needs their attention, program directors often don’t take the time to test those legacy findings. This is when you need to rely on data-centric marketing.

Demonstrate the need for change and present a proposed test based on what the data shows. Then be prepared to defend the test using the data, no matter the results. You don’t need to build a grandiose plan (e.g., a full template overhaul); start small when you’re proposing a break from tradition. Identify one important aspect that could improve your results and develop a testable hypothesis. Think in terms of optimization, not change.
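
To put a number on “start small”: before you run the test, a quick power calculation tells you whether your list is even big enough to answer the question. Here is a rough sketch, again in plain Python with made-up baseline rates, using the standard two-proportion sample-size formula at a 5% significance level and 80% power. An underpowered test today just becomes tomorrow’s “we tried that before.”

    import math

    def sends_needed_per_variant(rate_now, rate_hoped):
        # Recipients per variant needed to reliably detect a lift from
        # rate_now to rate_hoped (two-sided test, alpha = 0.05, 80% power).
        z_alpha = 1.96  # two-sided 5% significance
        z_beta = 0.84   # 80% power
        variance = rate_now * (1 - rate_now) + rate_hoped * (1 - rate_hoped)
        return math.ceil((z_alpha + z_beta) ** 2 * variance
                         / (rate_now - rate_hoped) ** 2)

    # Made-up goal: lift the click rate from 2.0% to 2.6%
    print(sends_needed_per_variant(0.020, 0.026))  # ~9,800 per variant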

About the Author

Andrew Kordek

Andrew Kordek is a Co-Founder of Trendline Interactive.
