How a Modern Direct Mail Testing Strategy Can Save Your ROI
David Ogilvy, the Father of Advertising, once said, “Never stop testing, and your advertising will never stop improving.”
Your current direct mail campaign may be profitable, but how do you know a different format, offer, or list wouldn’t perform even better? The only surefire way to answer that question is to validate with a regular direct mail testing strategy and program cadence.
The problem is that testing has a bad rap. It can be an overwhelming and expensive process, with countless attributes to be tested, tracked, and analyzed at any given time. This can lead to frustration, causing marketers to abandon testing – or even direct mail – altogether. Plus, knowing where to begin can be a brainteaser.
With proper guidance, you can build a sustainable testing program to discover your best audience, learn what motivates them, and refine your campaigns. The right direct mail testing strategy will allow you to maximize your testing budget and your program profitability. Here is how to get started.
Direct Mail Testing Mistakes to Avoid
Are you making the most common direct mail testing mistakes?
- Not testing
- Having the wrong definition of success
- Relying on A/B or multivariate testing
The list goes on. Whether you’re a seasoned pro or just starting out, our free e-book, 9 Testing Mistakes Direct Mail Marketers Make, will put you on the fast track to maximizing your direct mail ROI. Inside, you’ll learn how to identify and crush the most overlooked testing fails and how the channel’s top brands achieve scale in the mail channel.
Determining What to Test
Direct mail testing does not have to be complicated to be effective. However, it should be calculated and systematic. This takes planning and commitment.
Start by devising your testing ground rules and deciding how you will balance the risk and reward of campaign testing. How often will you test? At what volumes? What is an appropriate test vs. control ratio? How often will you test incrementally vs. swinging for the fences? (Hint: you should do both.) Each of these depends on the maturity of the channel and how well your control is achieving its KPIs. Set these parameters and stick to them.
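The volume question above can be made concrete with a standard sample-size estimate. The sketch below is illustrative only: it uses a textbook normal-approximation formula for comparing two response rates, and the control rate and expected lift are hypothetical numbers, not benchmarks.

```python
from math import ceil, sqrt

def panel_size(p_control, lift, alpha_z=1.96, power_z=0.84):
    """Approximate per-panel mail quantity needed to detect a relative
    lift in response rate at ~95% confidence and ~80% power, using the
    standard two-proportion normal approximation."""
    p_test = p_control * (1 + lift)
    p_bar = (p_control + p_test) / 2
    num = (alpha_z * sqrt(2 * p_bar * (1 - p_bar))
           + power_z * sqrt(p_control * (1 - p_control)
                            + p_test * (1 - p_test))) ** 2
    return ceil(num / (p_control - p_test) ** 2)

# Hypothetical scenario: a 1.0% control response rate and a
# hoped-for 20% relative lift (i.e., 1.0% -> 1.2%)
n = panel_size(0.010, 0.20)  # roughly 42-43k pieces per panel
```

The takeaway is that small lifts on low response rates require surprisingly large test panels, which is why setting volume and ratio rules up front matters.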
From there, identify the order of importance (potential impact) of your testing categories: list, offer, creative, and digital integration, which are the primary variables in direct mail and are typically tested in that order. When asked about new ideas and strategies for future campaigns, at least 79% of marketers indicated they are likely to test one or more (if not all) of these elements.
Take inventory of what has worked in the past and what failed. Review competitive samples through a tool like Competiscan or Mintel’s Comperemedia. Then, use this information to develop a testing roadmap to outline and prioritize test opportunities within each category, complete with a test rationale, hypothesis, and performance projection.
Generally speaking, your mailing list has the biggest impact on campaign performance (optimization and scale). Therefore, it should be prioritized and account for most of your testing. Testing new data sources, new modeling approaches, different customer segmentation, and more are surefire ways to move the needle.
Also, examine how you can integrate additional direct marketing channels, such as digital or CTV, with your direct mail to boost campaign performance. Scrutinize the creative package for enhancement opportunities, paying close attention to the offer, CTA, format, images, colors, and fonts. It is also important to consider the timing and cadence of your prospecting and retargeting campaigns, such as frequency, seasonality, and delivery day. All these variables can impact response rate, conversion rate, purchase value, customer lifetime value, and overall program ROI.
Testing Strategies for Direct Mail
With your testing roadmap in place, it is time to devise your direct mail testing strategy (design). The two most common designs are A/B and multivariate.
In A/B testing, you compare two versions of one element (e.g., creative) with a single variation between them, such as the offer or personalization. Although an affordable option, A/B testing can be a time-consuming way to determine the best-performing combination of list, offer, and creative, since you isolate only one variable per test. This testing strategy works best for mature campaigns with acceptable performance.
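Reading an A/B result comes down to asking whether the difference between the two panels is larger than chance. The sketch below is a minimal two-proportion z-test using only the standard library; the panel volumes and response counts are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Two-sided z-test comparing the response rates of two mail panels.
    Returns the z statistic and an approximate p-value."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 1 - erf(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical panels: control mailed 25,000 pieces with 300 responses;
# the test offer mailed 25,000 pieces with 360 responses.
z, p = two_proportion_z(300, 25_000, 360, 25_000)
```

Here the 1.2% vs. 1.44% difference clears a conventional 5% significance threshold, but only because both panels were mailed at meaningful volume.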
Multivariate direct mail testing allows you to test multiple program components simultaneously. This strategy accelerates the process but may also dilute the benefit, as tests are often conducted in small sets due to available budgets. New mailers working to quickly identify a control or those looking to achieve a step-change improvement benefit most from this strategy — provided they can afford the lofty price tag.
Recognizing a gap in the two approaches, some agencies offer proprietary direct mail testing strategies that aim to deliver multivariate test results but at a fraction of the cost. By isolating test elements and leveraging indexing, direct marketers can identify the variables that have the greatest relative impact on campaign performance to accurately identify a control combination — whether or not that combination was tested. This strategy is attractive to brands new to the channel and those looking to improve the performance of existing programs – even if they have more modest budgets.
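To make the indexing idea concrete, here is a toy main-effects sketch in the spirit of fractional factorial testing. It is not any agency's actual model: the four tested combinations and their response rates are invented, and a multiplicative main-effects assumption (no interactions) is used purely for illustration.

```python
# Observed response rates for a handful of tested combinations
# (list, offer, creative). All figures are hypothetical.
tested = {
    ("L1", "O1", "C1"): 0.012,
    ("L1", "O2", "C2"): 0.016,
    ("L2", "O1", "C2"): 0.010,
    ("L2", "O2", "C1"): 0.013,
}

grand_mean = sum(tested.values()) / len(tested)

def index_for(position, level):
    """Average response of cells containing `level`, indexed to the grand mean."""
    rates = [r for combo, r in tested.items() if combo[position] == level]
    return (sum(rates) / len(rates)) / grand_mean

def predict(lst, offer, creative):
    """Multiplicative main-effects estimate for any combination,
    including combinations that were never actually mailed."""
    return (grand_mean
            * index_for(0, lst)
            * index_for(1, offer)
            * index_for(2, creative))

# Estimate a combination that was never tested in market:
rate = predict("L1", "O2", "C1")  # ~1.56%, near the best tested cell
```

The point of the exercise is that once each element's relative impact is indexed, every combination of tested levels can be ranked, not just the handful that received mail.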
Measuring Your Direct Mail Test Results
The end goal of a direct mail test is knowledge, not profit. Regardless of the outcome, there are no wasted results. You will learn what works and what does not — and you should never have to spend precious budget to test non-performing data sets or combinations again.
Your overall test objective (e.g., lower cost-per-acquisition or improved ROAS) should not only drive which tests you prioritize but also determine how you will measure and analyze your results. Rather than obsess over response or sales rates, we recommend you focus on striking the right balance between cost-per-acquisition and lifetime value. After all, one should inform the other.
To easily track your investment, include directly attributable elements on your mail piece, such as a unique URL or promo code. But don’t stop there; be sure you pair those results with a matchback analysis comparing your sales file to the mail file for a comprehensive view of campaign performance and attribution reporting.
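A matchback is conceptually just a join between the sales file and the mail file. The sketch below is a minimal illustration with hypothetical household keys and field names; real matchbacks typically use fuzzy name-and-address matching rather than exact keys.

```python
# Hypothetical mail file keyed by household ID, and a sales file
# where only some buyers used the directly attributable promo code.
mail_file = {
    "hh_001": {"panel": "test_offer"},
    "hh_002": {"panel": "control"},
    "hh_003": {"panel": "test_offer"},
}
sales_file = [
    {"household": "hh_001", "amount": 120.0, "promo_code": "DM24"},
    {"household": "hh_003", "amount": 85.0,  "promo_code": None},
    {"household": "hh_999", "amount": 60.0,  "promo_code": None},  # never mailed
]

direct, matchback_only = 0, 0
for sale in sales_file:
    if sale["household"] in mail_file:
        if sale["promo_code"]:
            direct += 1          # captured by the promo code alone
        else:
            matchback_only += 1  # recovered only by matching files

print(direct, matchback_only)
```

Here half of the attributable sales would have been invisible to promo-code tracking alone, which is why the article recommends pairing direct attribution with a matchback.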
Take caution against making snap judgments based on directly attributable performance alone. Allow ample time to perform your matchback analysis; because mail pieces have a long shelf life, a direct mail read period often lasts 60 to 90 days. Performing a backtest to validate initial test winners can help bring statistical validity to your results. Finally, let the data inform your future campaign strategy. Log the results onto your test roadmap – and keep testing!
Adopting a Results Mindset
Consumer behavior and preferences are ever-evolving, and the direct mail industry is no different as marketers eagerly search for new ways to increase engagement and loyalty. The key to a sustainable and profitable direct mail program is a commitment to a regular testing schedule and adopting a results mindset. Even if your current direct mail campaigns are successful, remember the words of David Ogilvy: continue to refine the elements that make your campaigns great and always watch for opportunities to improve.
Direct Mail Testing Fallacies
Fallacy #1: Direct mail is dead
Let’s debunk the most crucial myth: Despite living in a world ruled primarily by digital content, direct mail is still very much alive and thriving.
Your potential customers cannot absentmindedly scroll past a physical piece of paper like they might do with promotional emails. Direct mail works on the subconscious, grabbing consumer attention from the get-go, so that when your corresponding email offer pops up in their inbox, they are less likely to ignore the message. It resonates with those who are not digitally active and consumers who don't own smartphones, and there is also enough evidence to suggest that even millennials and Gen Z respond well to direct mail. With such positive responses, many marketers still rely on direct mail to achieve their top marketing objectives, including customer acquisition and overall growth.
Fallacy #2: Testing is not necessary
Without testing, you could unknowingly repeat the same failures. The reports may show solid performance, but you cannot be sure that you are actually maximizing your return on investment without challenging the results. It is inevitable that direct mail content — or any marketing collateral for that matter — will eventually experience fatigue, and testing errors can lead brands to either continue using stale content or fail to choose the best replacement. With this in mind, it is better to avoid assumptions, protect your ROI, and invest in testing efforts that can bring such factors to light.
Fallacy #3: The risk is not worth the long-awaited reward
We agree that testing can be risky, and it is easy to become indifferent when the first few attempts reap little reward. Plus, getting approval from company decision-makers can be challenging when the ROI is not clear or immediate. Why deal with the backlash if you don’t have to? That is a fair point, but the real solutions lie in performing better testing and making a compelling case for the efforts.
Remember that learning what does not work is just as valuable as learning what does. When you know what is not working, you will never waste your funds or efforts on combinations that are not yielding positive results. Testing is an effective way to gain this impactful insight.
Fallacy #4: Creative is the most important element
Of course, you want your direct mail to be thoughtfully written and beautifully designed, but the appearance is trivial if the content does not reach and resonate with the right audience to encourage the desired action.
The 40-40-20 rule suggests that 40 percent of a test’s success is in the quality of the list, 40 percent is in the offer, and 20 percent is in the creative. While there is some truth to this, our data scientists see it from a 60-30-10 perspective. Since your creative is built on your audience insights and offer strategy, the ultimate value lies in testing multiple lists and offers.
Fallacy #5: A/B or “split” testing is the only way
Many companies use the A/B testing method, mainly because it is straightforward and inexpensive. However, it can take a long time to yield results — and because you test one element at a time, those outcomes often lead to only minor improvements in the direct mail strategy. Pricey multivariate testing is a plausible alternative, but why limit yourself to these two options when a third method has proven to be both effective and affordable?
The SeQuel FaQtor Test
SeQuel’s proprietary FaQtor Test methodology is a hybrid testing strategy. It provides direct marketers with the success rate of a full-scale multivariate test at near the investment level of an A/B test. We do that by strategically choosing a few creative/offer/list combinations to test at sample volumes, measuring which variables impact performance most, and extrapolating those findings to predict the performance of combinations our clients never had to pay to test. It is rooted in a renowned statistical experimentation practice called ‘fractional factorial design.’ View our FaQtor Test video for a closer look at its unique ability to lower your cost per acquisition (or launch the direct mail channel!) faster than any other testing method in the direct marketing industry.
“Through the process of indexing and handicapping, we can show how various combinations of lists, offers, and creatives would perform based on the relative performance of a specific list, specific offer, and specific creative tested in isolation,” explains SeQuel’s President, Erik Koenig. “Indexing what those combinations would look like, and handicapping it based on actual and projected volumes, leads us to the ultimate control combination of list/offer/creative, which may not have been tested initially at all.”
Are you tired of inefficient direct mail testing methods and missed opportunities? Most new controls SeQuel has built were developed via an index proven in a backtest, rolled out, and then proven again — let us show you how quickly your direct response program can win.