NTEN Webinar Roundup: Stepping Up Your E-mail Marketing, Part 4
Part 4 of the NTEN webinar series, Evaluating Your Email Marketing Efforts for Success!, featured Lauren Miller, senior strategist and director of e-mail programs, and David Leichtman, director of analytics and technology, both of Blue State Digital, the firm widely recognized for its work on President Obama's campaign. The two shared tips on evaluating e-mail marketing efforts.
Four years ago, e-mail was a one-way communication vehicle, and the only thing you could do with an e-mail list was send out a "blast." The measure of success was the size of your list and how many people were opening the e-mail.
Today, it's all about action-oriented, advocacy-driven e-mailing, where the purpose is getting a two-way dialogue going. Organizations' e-mail efforts should have two broad goals:
- To build online communities that empower members and get them to take action. "Nonprofits are in it for the long haul," Leichtman said. "So you're trying to build an ongoing conversation that can last."
- To make online organizing accessible, measurable and repeatable.
Four ways organizations can accomplish those goals are:
- Create an ongoing conversation with members
- Use voices and personalities in e-mail
- Be timely, i.e., don't send e-mail just for the sake of sending e-mail
- Include an action so you can test
There are three steps to e-mail evaluation:
1. Set goals and metrics
According to the presenters, segmenting your e-mail list is one of the most important aspects of creating an e-mail program. This involves breaking down groups by characteristics, e.g., activity level, high-dollar donors and geography. You might want to time e-mails so that recipients in every time zone get them at the same local time. Segmenting allows you to target your ask so people feel a personal connection.
Always have a metric to measure. What is the purpose of the e-mail? If it's signing a petition, then the number of signups is your metric. If it's donations, the donation dollars you pull in are your metric.
Set goals based on past performance and what has worked before. What are you trying to accomplish? Know in advance if there are budget restrictions and if your organization needs to hit certain milestones.
Important stats to pay attention to are:
- Open rate — A good open rate varies from organization to organization. The presenters recommended doing a comparison over time with your own e-mail list to determine who your audience is and what they're interested in.
Measurement of open rates is never precise, they noted. For example, according to Leichtman, many people have automatic loading of graphics turned off, so their opens won't register, while someone merely previewing an e-mail in Microsoft Outlook will register as an open. Compare open rates of your e-mail campaigns against each other rather than against an outside benchmark.
- Clickthrough rates — The presenters said this metric is more important than open rate, as it shows, of those who opened the e-mail, who clicked through.
- Action conversions — Who clicked through and actually took action, e.g., signing a petition or making a donation? You want the conversion rate to be high. If it's not, the presenters said, that could indicate the text of your e-mail wasn't clear enough about what you wanted people to do, or that you had a bad landing page with the action bogged down in too much text.
- Your main metric
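The stats above are just ratios of raw campaign counts. A minimal Python sketch, using made-up numbers purely for illustration:

```python
# Core e-mail metrics computed from raw campaign counts.
# All counts here are hypothetical, for illustration only.
delivered = 10_000   # e-mails that reached inboxes
opens = 1_800        # recorded opens (imprecise, as noted above)
clicks = 450         # unique clickthroughs
actions = 90         # petition signatures, donations, etc.

open_rate = opens / delivered        # share of recipients who opened
clickthrough_rate = clicks / opens   # of those who opened, who clicked
conversion_rate = actions / clicks   # of those who clicked, who acted

print(f"Open rate:    {open_rate:.1%}")
print(f"Clickthrough: {clickthrough_rate:.1%}")
print(f"Conversion:   {conversion_rate:.1%}")
```

Note that the clickthrough rate here follows the presenters' definition (clicks among openers); some tools divide clicks by delivered e-mails instead, so check which one your platform reports.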
2. Target, segment and test
Targeting ensures that your e-mails go to the right list. Targeting should be based on data (e.g., demographics, dollars). For example, if you're organizing an event, make sure your e-mail goes to people who can actually attend.
Testing is the bread and butter of evaluation. Break e-mail factors into test groups and send e-mails to small but significant segments. The presenters recommended testing as much as you can, but not overtesting.
Come up with four subject lines and send an e-mail with each to four different segments of your list, and see which performs the best. Miller and Leichtman suggested throwing things at the wall to see what sticks.
Things you can test are:
- Sender name. Edward or Eddie?
- Subject line
- Content length. Maybe you need extra length to explain what you're doing at your organization, or maybe your readers just want something short and quick that they can just take action on. Test this.
- Images. They might compel people to take action because they explain what's going on. Or they might not help at all or, worse yet, hurt.
- Image color. A big, red donation button might work well, or maybe not. Maybe a blue donate button works better. Or green.
- Placement of images and links. Which performs better, links within a paragraph or on their own?
- URL vs. text links. Do you show the exposed URL or a hypertext "click here" link? Leichtman said he's seen varying success with each.
- Formatting options. Should you use a template? HTML e-mail vs. plain text or both?
- Phrasing. Should your ask be aggressive, e.g., "Make a donation now," or a milder ask, e.g., "Will you chip in"?
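The four-subject-line experiment described earlier can be sketched in a few lines of Python. Everything here is hypothetical: the subject lines, the list size and the cell size are placeholders, not recommendations:

```python
import random

# Sketch of a four-subject-line test: carve four equal random
# test cells out of the list, hold the rest back for the winner.
subject_lines = [
    "Help us reach our goal",
    "Will you chip in?",
    "A message from Edward",
    "We can't do this without you",
]

subscribers = [f"user{i}@example.org" for i in range(10_000)]
random.shuffle(subscribers)  # random assignment avoids biased cells

cell_size = 500  # small, but large enough to detect a real difference
cells = {
    subject: subscribers[i * cell_size:(i + 1) * cell_size]
    for i, subject in enumerate(subject_lines)
}
holdout = subscribers[len(subject_lines) * cell_size:]

for subject, cell in cells.items():
    print(f"{subject!r}: {len(cell)} recipients")
print(f"Held back for the winning subject: {len(holdout)}")
```

Because the cells are drawn from a shuffled list, each one is a random sample, so a performance gap between cells can be attributed to the subject line rather than to who happened to be in each group.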
3. Evaluating for success
When evaluating your e-mail program's success, the presenters advised using real statistical significance, comparing results and retesting. If you're not sure, test again.
But they warned not to overevaluate your program. If you already know something works, don't test it. Test in a limited sense and only if you know that you stand to benefit from it. Keep your eyes on the prize, they said.
"Don't test all of this at once. Only test one thing at once so you know if it's statistically significant," Leichtman said.
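Leichtman's point about statistical significance can be made concrete with a standard two-proportion z-test. This is only a sketch with made-up click counts, using a normal approximation from the standard library rather than any particular analytics tool:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is variant B's rate really different
    from variant A's, or is the gap just noise?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical test: subject line A got 45/500 clicks, B got 70/500.
z, p = two_proportion_z(45, 500, 70, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Difference is statistically significant; go with B.")
else:
    print("Could be noise; keep testing.")
```

This is also why the quote above says to test one thing at a time: if the two cells differ in both subject line and content length, a significant z-score can't tell you which change caused the lift.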
Landing page optimization is important to do as well, as getting people to take action is very dependent on what your Web page looks like. Test your landing page format, e.g., a one-column format vs. a two-column format, and video vs. no video.
Also consider how recent and how active your list members are (and the correlation between the two). People tend to be more likely to donate if they took action more recently.
And at the end of the day, if you don't have time to do A/B testing and can't gather feedback in real time, the presenters advised cutting your list in half, sending one e-mail to one group and another to the other. You can always look back at your data after the fact and make an experiment out of it later on.