9 Testing Mistakes Direct Mail Marketers Make — and How to Avoid Them

Who says direct mail is old news?
Done well, direct mail has always been an effective and profitable way to sell. And this classic channel continues to prove its worth today, even with the growing popularity of digital marketing.
Maybe you’ve already tried direct mail and you’re still not convinced. In that case, we have but one question:
Have you done any testing to compare the response rates of your package using different mailing lists, offers, or creative executions?
Or maybe you’re new to direct mail and you’re still determining the best program strategy. In that case, we have but one statement:
A strategic testing approach is crucial to establishing an optimized direct mail campaign.
In either situation, smart testing is key to direct mail success. Testing is what tells you whether Audience A responds better to your marketing efforts than Audience B; whether Offer X outpulls Offer Y; and which creative elements — layouts, headlines and graphic designs — move the needle the most.
In fact, testing helped a leading consumer brand build a direct mail program from scratch into its most profitable acquisition channel, boosting response rates by 270% and reducing cost per acquisition by 50%.
Test, and you shall receive. It’s true! We’ve laid it all out for you in our free e-book, “9 testing mistakes direct mail marketers make.” Keep reading for a preview of the first eight pages and learn more about nine typical reasons – and remedies – for stagnant direct mail program performance.
———
9 testing mistakes direct mail marketers make—and how to avoid them
“If you’re not getting better, you’re getting worse.” —Pat Riley, Five-Time NBA Champion Head Coach
If your direct mail campaign were failing to reach hundreds of thousands of would-be new customers every year, how soon would you want to know? Yesterday, right?
The reality is that without a firm grasp of sophisticated testing strategies, you could spend years investing in an unoptimized control that leaves millions of dollars on the table annually. So whether you’re a DM pro trying to unseat an existing control, or your company is new to mail and looking to structure a launch test, the insights in this report will put you on the fast track to maximizing ROI in the direct mail channel.
Mistake #1: Not testing
“In any moment of decision, the best thing you can do is the right thing, the next best thing you can do is the wrong thing, and the worst thing you can do is nothing.” —Theodore Roosevelt, 26th U.S. President
Seems like an obvious mistake, but you’d be surprised. When our marketing strategists begin a conversation with a prospective client, we walk in the door knowing there’s a 10% chance that the company is not performing any testing whatsoever in the direct mail channel.
There could be a number of reasons for this. Perhaps they don’t have the creative resources. Maybe they don’t know how to structure a test matrix. It could be that they don’t have enough room in the budget. But most of the time, if they’re not testing, it’s because inefficient testing methodologies have created a risk-averse culture within the organization.
A toxic testing culture
“If your regular promotion produces a response rate of 2 percent, and your test produces only 1 percent, you will be accused of wasting the company’s money by doing the test. If your test produces 3 percent, you will be accused of wasting money by mailing your regular 2-percent promotion. If you do no testing at all, no one will complain.” —Arthur Middleton Hughes, Database Marketing Author
Based on Hughes’ observations, it’s easy for us to understand why it can be difficult for you to get tests approved, especially if control fatigue hasn’t yet eroded performance enough to raise eyebrows. But every wave eventually comes to shore. Fatigue will set in; it’s just a matter of when. And by the time you notice it and start feeling its effects, you could be three to six months away from a fix.
The irony of mistake #1 is that if you’re making mistakes #2 through #9, you’ll likely wind up making this one, too. Testing is risky, and after a few failed attempts, you’ll assume you can’t do any better than this list or that creative. But in reality, your problem doesn’t always lie in what you’re testing, but in how you’re testing.
Mistake #2: Having the wrong definition of success
“I have not failed. I’ve just found 10,000 ways that won’t work.” —Thomas Edison, Inventor
The end goal of a test is knowledge—not profit. But sadly, many direct marketers judge the performance of a test based on the aggregate response rate or cost per acquisition of the entire testing effort.
The only way to maintain sanity when testing is to look mainly at the best-performing test cell and compare its performance with that of your existing control. If that promising new combination of creative/offer/list proves itself in a backtest, imagine the impact that one cell could have on the overall profitability of your next rollout. Plus, your cost per piece will never be as high at rollout scale as it is at test volumes.
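To make that concrete, here is a minimal Python sketch of cell-level scoring. Every figure, cell name, and threshold below is a hypothetical placeholder rather than data from the e-book; the point is simply that each cell gets judged against the control on its own merits, not by the blended results of the whole test.

```python
# Minimal sketch: score each test cell against the control on its own merits.
# All figures below are hypothetical placeholders, not real campaign data.

control = {"mailed": 50_000, "responses": 1_000, "cost": 25_000}

test_cells = {
    "cell A (new offer)":    {"mailed": 10_000, "responses": 150, "cost": 5_500},
    "cell B (new creative)": {"mailed": 10_000, "responses": 260, "cost": 5_500},
    "cell C (new list)":     {"mailed": 10_000, "responses": 90,  "cost": 5_000},
}

def response_rate(cell):
    return cell["responses"] / cell["mailed"]

def cost_per_acquisition(cell):
    return cell["cost"] / cell["responses"]

control_rr = response_rate(control)
control_cpa = cost_per_acquisition(control)
print(f"control: RR {control_rr:.2%}, CPA ${control_cpa:,.2f}")

# Judge each cell individually, not by the blended results of the whole test.
for name, cell in test_cells.items():
    rr, cpa = response_rate(cell), cost_per_acquisition(cell)
    verdict = "candidate for backtest" if rr > control_rr and cpa < control_cpa else "learning, not a loss"
    print(f"{name}: RR {rr:.2%}, CPA ${cpa:,.2f} -> {verdict}")
```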
No wasted test cells
It’s worth acknowledging that, yes, to find that winning cell (or, ideally, those winning cells), you’ve spent most of your testing dollars on combinations that didn’t match the ROI of your old control or your new winners. But guess what? That money wasn’t a waste at all!
In that process, you’ve learned precisely what doesn’t work. If you’ve structured the test properly, you’ll know the exact concepts, offers and lists that had a negative relative impact on performance—and you never have to “waste” money on those data sets again.
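As an illustration of what a properly structured test can look like, here is a rough Python sketch of a full-factorial test matrix in which every list, offer, and creative combination becomes its own cell. The factor names and counts are invented for the example, not taken from the e-book; rolling cell results up by factor is one simple way to see which individual elements dragged performance down.

```python
# Minimal sketch of a structured test matrix: every list x offer x creative
# combination becomes one cell, so each factor's impact can be read cleanly.
# The factors and results below are hypothetical placeholders.
from collections import defaultdict
from itertools import product

lists = ["rental list 1", "rental list 2"]
offers = ["free trial", "20% off"]
creatives = ["letter package", "postcard"]

# One entry per cell: (list, offer, creative) -> (pieces mailed, responses)
results = {cell: (5_000, 0) for cell in product(lists, offers, creatives)}
results[("rental list 1", "free trial", "letter package")] = (5_000, 120)
results[("rental list 1", "20% off", "letter package")] = (5_000, 60)
# ...remaining cells would be filled in from actual campaign reporting...

# Roll results up by factor to see which lists, offers, or creatives lag.
by_factor = defaultdict(lambda: [0, 0])
for (lst, offer, creative), (mailed, resp) in results.items():
    for factor in (("list", lst), ("offer", offer), ("creative", creative)):
        by_factor[factor][0] += mailed
        by_factor[factor][1] += resp

for (kind, name), (mailed, resp) in sorted(by_factor.items()):
    print(f"{kind:<10}{name:<18}response rate {resp / mailed:.2%}")
```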
Now, if only you could figure out a way to weed out those losers before spending any money testing them… (We’ll answer that riddle in the sections ahead.)
SR, CPA, CPL, LTV… OMG
Once you have your expectations in line, it’s time to figure out which key performance indicators you’ll use to measure success. Many direct marketers obsess over response metrics, when in reality, measures such as cost per acquisition or lifetime value might be much better indicators of victory.
The lesson there is to always look for test variables that attract the best possible class of customer—don’t rule out an element simply because it lost the sales-rate game at face value.
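To put numbers to that idea, here is one more rough Python sketch. The cell names, volumes, costs, and lifetime values are hypothetical placeholders chosen only to show how a cell that loses on response rate can still win on the metrics that matter over the long haul.

```python
# Minimal sketch: ranking cells by lifetime value returned per dollar spent,
# not by raw response rate. All numbers are hypothetical placeholders.

cells = {
    "control":        {"mailed": 20_000, "responses": 400, "cost": 10_000, "avg_ltv": 150},
    "upscale list":   {"mailed": 20_000, "responses": 300, "cost": 10_000, "avg_ltv": 260},
    "discount offer": {"mailed": 20_000, "responses": 520, "cost": 10_000, "avg_ltv": 90},
}

for name, c in cells.items():
    rr = c["responses"] / c["mailed"]                # response ("sales") rate
    cpa = c["cost"] / c["responses"]                 # cost per acquisition
    ltv_per_dollar = c["responses"] * c["avg_ltv"] / c["cost"]
    print(f"{name:<15}RR {rr:.2%}  CPA ${cpa:,.2f}  LTV per $ spent {ltv_per_dollar:.2f}")

# Here the discount offer wins the response-rate race, but the upscale list
# returns the most lifetime value per dollar spent.
```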
———
Don’t stop here! Keep reading to discover seven more mistakes direct marketers make—so you will know how to avoid them.