So, Uh, Are We Doing This Right?
For a number of years now, there has been a trend in the advertising and marketing industries toward greater accountability for dollars spent. The recent recession has accelerated this trend. Thus, what used to be the darling unit of an advertising agency — general television and display advertising — is now being asked to “prove” that expenditures produce a return on those investments.
Meanwhile, those who have toiled in the fields of direct response, where everything was and remains measurable, are being asked to measure more complex marketing scenarios involving multiple channels.
This is no less true in fundraising. Not so long ago, in many nonprofits the “communications” team could do “brand building” without the discipline of measured ROI. Not a single piece of direct mail could be sent without a budgeted return on investment, but four-color, bound annual reports could be mailed at a cost of $6 each without any measure of their return.
Now, with tighter budgets and greater demands for accountability, every activity is measured for its productivity, either on behalf of the mission of the organization or its fundraising. So, direct-response fundraisers are being asked to measure more carefully in an increasingly complex environment.
If the integration of multichannel fundraising efforts is the holy grail of direct-response fundraising, then one of the biggest challenges to achieving this pinnacle of success is figuring out what combination of channels and what tests within any channel “worked” or produced the best ROI. This actually poses two challenges in today’s environment: The first is to capture the data in a single location so you can design tests and conduct measurements, and the second is to design tests that meaningfully measure what you are trying to understand.
An ‘Integrated’ Database
The desire for an “integrated” database is as long-standing as multichannel marketing. Managers want to know when a donor makes a donation via mail, Internet or telephone; buys a ticket for an event; and volunteers for a “walk.” They don’t want five databases; they want the information in one location. For many, this is an objective that has not yet been reached.
However, more and more organizations have managed to create single, shared databases that capture information concerning all solicitations and all donor transactions in one location. These solutions are mostly offered by independent direct-mail database service bureaus that have expanded beyond direct mail and linked to vendors working in other channels. Online vendors and those that track direct-response television (DRTV) generally don’t have the capabilities to manage direct-mail programs.
Very often these solutions require periodic data syncing or downloading of data from a single-channel vendor (e.g., online) to the main database (data processing service bureau). There are very few truly integrated and automated database solutions, and they are generally quite expensive and often require you to work only with that vendor’s channel marketing partners. The best solutions are either commonly used direct-mail database solutions that can be expanded or linked using “open source” software with which you or any consultant or vendor you hire can plug in the data from a new channel without massive fees, programming costs or database design limitations.
Designing Meaningful Tests
If you are fortunate enough to have created a single, integrated database, your next challenge is to design meaningful tests. The old, single-channel, single-variable “split tests” may not be sufficient.
For example, if you are wondering if direct-mail copy A is better than direct-mail copy B, you could simply run an A vs. B split test to a random sample of the universe of names being mailed. Assuming you designed the test correctly and only changed one variable at a time (here it would be the copy), you could easily measure which copy “won” or did better according to appropriate measures like percent response, average gift, income per 1,000 letters mailed or long-term value. Thus, judging the results is straightforward.
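The metrics named above are simple to compute once the split-test counts are in hand. Here is a minimal sketch in Python; the package names and figures are illustrative assumptions, not results from any actual test.

```python
# Hypothetical results of an A vs. B copy split test.
# All counts and dollar figures below are invented for illustration.
results = {
    "copy_A": {"mailed": 10_000, "gifts": 120, "revenue": 4_200.00},
    "copy_B": {"mailed": 10_000, "gifts": 95,  "revenue": 3_990.00},
}

for name, r in results.items():
    pct_response = 100 * r["gifts"] / r["mailed"]       # percent response
    avg_gift = r["revenue"] / r["gifts"]                # average gift
    income_per_m = 1_000 * r["revenue"] / r["mailed"]   # income per 1,000 mailed
    print(f"{name}: {pct_response:.2f}% response, "
          f"${avg_gift:.2f} avg gift, ${income_per_m:.2f} per 1,000")
```

With a clean single-variable split, whichever copy leads on the measure you care about is the winner; the complications described next arise only when other channels are in play at the same time.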
But how do you do that same copy test when you are simultaneously engaged in: 1) telemarketing to some of those same donors, 2) running DRTV ads to recruit monthly donors, and 3) e-mailing some of those same donors?
For example, if you experience an increase in direct-mail response rate when you mail package A (compared to package B), how do you know that the increased performance of the package wasn’t a result of the DRTV messaging, the e-mail messaging or the telemarketing messaging as opposed to what you would have concluded from a pure A vs. B split test? Or, for that matter, how do you know if some donors gave via Internet because of your direct-mail letter, which contained the organization’s URL?
As you can see, in a multichannel world, the process of designing and executing tests and reading results becomes more complicated. What are some things you can do to more accurately measure test results under these circumstances?
1. Develop precise solicitation and donor coding and transaction measures for each single channel, and maintain those data in your integrated database. For example:
- You should know if Mrs. Smith received a direct-mail solicitation from you. You also should know when she was solicited; if she replied or not; and if she did, when and in what amount.
- You should know whether the same Mrs. Smith received an e-mail solicitation from you. You also should know when she was solicited; if the e-mail bounced; if it was opened; if there were any clickthroughs; if she donated; and if so, when and in what amount.
- You should know whether the same Mrs. Smith was solicited by telephone. You also should know when she was solicited; if the call was completed; if there was a pledge or a donation by credit card; and if so, when and in what amount.
- You should know whether the same Mrs. Smith responded to a DRTV advertisement by calling your inbound call center. You also should know which ad she responded to (i.e., which toll-free number or in what time frame); if there was a pledge or donation made by credit card; and if so, when and in what amount. Or, did Mrs. Smith go to a special website landing page and donate in response to a DRTV advertisement — if so, when and in what amount?
Or, did Mrs. Smith go to the charity’s general donation page and donate within a few minutes of the running of a DRTV advertisement that was seen in the geographic area in which she lives — if so, when and in what amount?
In other words, to do multichannel testing, you must first establish methods to capture data within each channel, and you must think, in advance, of all the ways in which donors might respond to your solicitations — not just using the channel in which they are solicited, but using whatever channels the donors prefer.
In this process, you must try to determine how you could separate responses by source of solicitation including:
- distinct toll-free numbers for each direct-response solicitation;
- distinct landing page URLs for each direct-response solicitation;
- coded reply devices for direct-mail solicitations;
- coded reply devices for special event solicitations;
- coded reply devices for public speaking engagements; and/or
- business rules regarding time frames of responses and ZIP or geographic codes of donors for public relations activities, local events, DRTV, display advertising.
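To make the coding concrete, here is a minimal sketch of the kind of transaction record an integrated database might hold, with one donor ID shared across channels and separate codes for the solicitation and the channel the donor actually used to respond. All field names and code values are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# A minimal sketch of a coded transaction record for an integrated
# database. Field names and code formats are assumptions, not a
# description of any particular vendor's system.
@dataclass
class Transaction:
    donor_id: str          # one ID for "Mrs. Smith" across all channels
    solicit_code: str      # e.g. "DM-2011-04-A" for a specific mail package
    channel: str           # channel of solicitation: "mail", "email", "phone", "drtv"
    response_channel: str  # channel the donor actually used to respond
    solicited_at: datetime
    responded_at: Optional[datetime] = None
    amount: float = 0.0

# Example: solicited by mail, but the donor gave online via the
# letter's coded landing-page URL.
t = Transaction(
    donor_id="D-00042",
    solicit_code="DM-2011-04-A",
    channel="mail",
    response_channel="web",
    solicited_at=datetime(2011, 4, 1),
    responded_at=datetime(2011, 4, 9),
    amount=25.0,
)
print(t.channel, "->", t.response_channel, t.amount)
```

Keeping the solicitation channel and the response channel as separate fields is what lets you later answer questions such as whether a direct-mail letter drove gifts made via the Internet.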
2. Once you have your coding and database, you can develop test scenarios. Here are some examples:
Testing outbound e-mails sent before and after the postal mailing, saying, “Watch for our letter in your mail!” or “Did you receive and respond to our recent letter?”
You will, of course, split the outbound mail file into those who will and those who will not be receiving e-mails. But be sure you are splitting only the names that have e-mail addresses on file, not the entire mail file. There is a strong argument that the mere presence of an e-mail address on your file is an indicator of a more dedicated donor.
- Thus, your split test is: Of those who have an e-mail address on record, do they respond better when receiving only a direct-mail piece or when receiving an e-mail reminder? You would further split this test based on testing e-mails before, e-mails after, and e-mails both before and after. Thus, counting the control group you would have a four-way split.
- You then need to attribute revenues from those who receive the letter and an e-mail reminder but then donate via e-mail rather than via mail. So each outbound e-mail link needs to be coded to a different donation page or otherwise tracked so you can determine that the donation was in conjunction with the e-mail reminder and postal mail package. Here, you really don’t have a “control” (the control panel receives no e-mail), but you do have the three-way before/after/both split.
- When doing your final analysis, you need to amalgamate the income or calculate overall response rates for each scenario (control, before, after, both) across both channels in order to measure the effect of using two channels instead of one.
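The final step described above — amalgamating gifts from both channels for each panel — can be sketched as follows. The panel sizes and gift counts are illustrative assumptions.

```python
# A sketch of amalgamating results across mail and e-mail for the
# four panels of the e-mail reminder test. All counts are invented
# for illustration.
panels = {
    # panel: (names in panel, gifts via mail, gifts via coded e-mail link)
    "control":      (5_000, 60, 0),   # letter only, no e-mail
    "email_before": (5_000, 68, 9),
    "email_after":  (5_000, 64, 12),
    "email_both":   (5_000, 71, 15),
}

for panel, (n, mail_gifts, email_gifts) in panels.items():
    combined = mail_gifts + email_gifts   # gifts from either channel
    rate = 100 * combined / n             # overall percent response
    print(f"{panel}: {combined} gifts, {rate:.2f}% combined response")
```

Comparing each test panel's combined rate against the control's mail-only rate is what isolates the effect of adding the second channel, rather than crediting the mail package alone.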
Testing outbound telephone calls designed to alert the prospective donor to the postal mail letter.
This is similar to the previous example, but it uses a combination of the Intelligent Mail barcode (IMb) on the postal mail and the telephone rather than e-mail.
- Just as noted above, you need to split the postal mail group into the test and control panels, but you will want to split only those who have telephone numbers as part of their records. This is because the presence or absence of a telephone number may indicate a different level of involvement of that donor when compared to a donor with no telephone number.
- Then you would use the IMb to track the test group so you would know, to within 24–48 hours, when the letter will be delivered to the prospective donor. You would then telephone the test group to encourage those donors to read and respond to the letter, but you would also measure: who and how many could not be reached, who and how many answered but declined further communication, who and how many pledged or donated via telephone, and who was reached and then later did or did not donate.
You would then compare the data on all of these “segments” with the control. Perhaps those who were reached but declined further communication performed significantly worse than those in the control. Does that mean a phone call damaged the relationship between the charity and that donor? Or perhaps those who were reached and spoke via telephone to a charity solicitor but didn’t donate by phone responded significantly better to the direct-mail package than those who were never called.
Testing additional channels or media in a national DRTV environment.
DRTV, when conducted nationally, is a wide-ranging medium targeted at demographic groups (who watch a particular network at that time of day). Unlike direct mail, e-mail or telemarketing, it is not targeted to individual donors. As a result, understanding whether DRTV lifted response rates to direct mail or whether direct mail lifted response rates to DRTV is difficult but not impossible.
- Conduct an analysis of the primary medium on a city-by-city basis. For example, if you are testing to see the effect on DRTV response with the addition of direct mail, analyze your DRTV responses on a city-by-city basis to try to find two cities (“matched city pairs”) that have historically performed very similarly. Having identified some matched city pairs, try introducing one new channel at a time in two or three matched pairs.
- Thus, if Omaha and Syracuse perform similarly before the test (i.e., have the same baseline), try introducing direct mail only in one of these two cities and then measure the overall results (combined effect of DRTV and direct mail in the test city vs. DRTV only in the “control” city). This assumes there were no external effects (like a front-page story in The Omaha World-Herald about your charity) that would skew results. When you then measure the differences between the cities (if any exist), you are theoretically measuring the effect of introduction of the new channel.
3. Measuring results: When you are interpreting multichannel test results, you will have more “noise in the system” or random and unattributed transactions than you would normally have in a single-channel test.
For example, there will be more “white mail” or unattributed responses, simply because exposing donors and prospective donors to multiple touchpoints increases the likelihood that a donor will reach out to you on his or her own rather than respond through the channel used in a specific solicitation.
Another source of unattributed responses is an increased number of hits on your main website and donations or registrations for e-newsletters.
Sometimes, one can put time frames around these as one does with DRTV.
A national PR event or announcement and accompanying news coverage can cause a temporary spike in website hits, newsletter registrations and unsolicited gifts. One can establish a business rule that the increase above the norm in such traffic, occurring within 48 hours of the event or news coverage, is attributable to that PR. Thus you can monetize the value of that PR activity.
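The 48-hour business rule described above can be sketched as a simple window filter. The event time, baseline revenue figure, and gift data below are illustrative assumptions.

```python
from datetime import datetime, timedelta

# A sketch of the 48-hour attribution business rule: unsolicited web
# revenue above the normal baseline, arriving within 48 hours of a PR
# event, is credited to that event. All figures are invented.
BASELINE_PER_DAY = 30.0                      # normal unsolicited web revenue/day
event_time = datetime(2011, 6, 1, 18, 0)     # when the news coverage airs
window = timedelta(hours=48)

# (timestamp, amount) of unsolicited web gifts around the event
gifts = [
    (datetime(2011, 6, 1, 20, 0), 25.0),
    (datetime(2011, 6, 2, 9, 30), 50.0),
    (datetime(2011, 6, 5, 11, 0), 40.0),     # outside the 48-hour window
]

in_window = [amt for ts, amt in gifts if event_time <= ts <= event_time + window]
excess = sum(in_window) - 2 * BASELINE_PER_DAY   # revenue above two days' norm
print(f"${max(excess, 0):.2f} attributed to the PR event")
```

The rule is a convention, not a proof of causation, but it gives the PR activity a consistent, defensible monetized value.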
Multichannel fundraising is more effective than single-channel fundraising because:
- It broadens the demographic reach of your message: Mail reaches an older audience; Internet reaches a younger audience and broadens the age range; telephone reaches a middle-aged to older audience; DRTV reaches a younger audience than mail or phone.
- It creates multiple touchpoints with the prospective donor, reinforcing the message.
- It allows the donor to choose the channel through which to respond.
It’s difficult to attribute the general growth in visibility of the organization’s brand as a cumulative result of multichannel marketing, yet there is little doubt that increased visibility results in greater numbers of donations. If you have established a baseline of giving via each channel and overall, you can measure those trends in general.
However, it is still important even in a successful fundraising program to measure, as specifically as possible, what is “working” and what is not. Therefore, measuring results, despite the complexity of doing so, is an important part of accountability in fundraising. FS