Who Watches the Charity Watchdogs?
It is Friday, Feb. 19, and Michael Thatcher sits before hundreds of nonprofit leaders, fundraisers and marketers at the DMANF Washington Nonprofit Conference, ready for questioning. To say that the former Microsoft official and current president and CEO of Charity Navigator has entered the lion’s den would be inaccurate. He has thrown himself in—willingly, as the centerpiece of the event’s networking luncheon Q&A—walked up to the biggest lion he could find, and jumped into its open jaws. The head of the nonprofit sector’s most controversial organization facing his harshest critics, on their turf. A lion tamer without the whip and chair.
Thatcher is here to discuss Charity Navigator’s plans for the future: an overhauled ratings system, continued focus on how charities report joint cost allocation, a rebranded watch list, greater emphasis on donor privacy. The moderator, Shannon McCracken, chair of the DMANF Advisory Council and vice president of donor development at Special Olympics International, asks him a question about list-sharing—specifically, Charity Navigator’s push for donor opt-in.
“At this point, the position stands as-is. And we really want it to be an opt-in, primarily focused on donor advocacy from that perspective,” Thatcher says. “I understand that this is increasingly where we’re heading, and that you’re no longer communicating with a qualified list of potential donors, and I understand that. But I also understand that, in what we stand for—which is promoting intelligence in giving—we don’t know that mass mailings necessarily go in stride with that. This may be an area where we have to agree to disagree.”
The crowd gasps.
“Or maybe not agree to disagree. Just disagree.”
This is Charity Navigator’s relationship with the nonprofit sector, distilled. Both sides agree that accountability and transparency are necessary. They disagree on just about everything else.
Chief among these grievances is that Charity Navigator is overstepping its bounds as a watchdog, pushing a narrow view of what it believes philanthropy “should be,” rather than enforcing the well-worn and long-accepted standards already in place. That’s why Thatcher’s remarks at the DMANF luncheon caused such a stir—if Charity Navigator’s vision for “intelligent giving” runs counter to everything nonprofits are currently doing, then what does that say about the sector as a whole? What does it say about Charity Navigator?
“I can’t think of any credible nonprofit that would argue against accountability, defined as good governance and ethical best practices, and reasonable transparency in making critical information about the organization easily accessible to donors,” McCracken told us. “I’m all for a system that helps donors make good, educated choices. But the system shouldn’t feel arbitrary or biased in the way it defines good versus bad. There’s a fine line between wanting what’s best for donors and deciding what’s best for them.”
Charity Navigator wants to be at the forefront of that discussion. It has a vision for the nonprofit sector. But is it the right one?
Is it even possible?
There are a number of watchdog organizations that monitor nonprofits—too many to list if you include the smaller and newer groups, like GiveWell and GreatNonprofits, or those that cover specific verticals within the sector. But the big ones are as follows: CharityWatch, Better Business Bureau (BBB) Wise Giving Alliance, GuideStar and Charity Navigator.
None of these organizations is perfect. CharityWatch, which bills itself as “America’s most independent, assertive charity watchdog,” analyzes a variety of financial statements and assigns a letter-grade efficiency rating based primarily on program percentage and cost to raise $100. But the organization does not include joint-cost solicitation expenses as a program expense, as most charities do, instead treating those costs as a fundraising expense in its grading formula. It has earned its self-designated “assertive” label, drawing fire from nonprofits for its at-times overly aggressive reporting. (One website that rates the charity watchdogs called CharityWatch a “one-man crusade.”) And despite support from individual donations, the site charges a fee for access to most of its ratings, and rates considerably fewer charities than its peers.
BBB Wise Giving Alliance uses a 20-point checklist, divided into four areas: governance, effectiveness, finance and fundraising. Charities must meet all 20 standards to achieve accreditation. The organization doesn’t consider itself as a “ratings” site, but the pass-fail nature of its accreditation policy essentially makes it one. And while it is free to use and rates a healthy number of nonprofits—about 1,300 national nonprofits at its main site and some 10,000 smaller charities on its state-specific sites—it has a few issues. It grades heavily on processes and documentation, making accreditation more difficult for smaller, less-conventional organizations. And it charges a healthy fee for nonprofits that want to use its seal—up to $20,000.
And then there is GuideStar. The venerable ratings site is not actually a ratings site at all—which explains its high standing in the nonprofit sector—positioning itself instead as a neutral observer. (From its website: “Many people think that we are a charity evaluator or a watchdog. We aren’t.”) The organization has information on file for about 1.8 million nonprofits, and recently completed a redesign that made its already robust nonprofit profiles easier to read and more visually appealing. It’s also free; a one-time registration grants users access to almost all of the site’s vast resources. GuideStar’s biggest problem? It’s not Charity Navigator.
None of the watchdogs are. Charity Navigator’s web traffic reached 9.1 million visitors in 2015, up 21 percent from the prior year. On Alexa Internet, a web traffic analytics company, Charity Navigator ranks 4,822 in traffic among all U.S. websites. CharityWatch ranks 24,565, BBB Wise Giving Alliance 39,469. SimilarWeb, another web traffic and analytics site, ranks Charity Navigator sixth in traffic among all U.S. philanthropy-related websites. GuideStar is the only ratings site that rivals Charity Navigator in web traffic, ranking higher on Alexa and close to or even with Charity Navigator elsewhere.
But Charity Navigator dominates in influence. It is the media’s go-to watchdog, its ratings—and its name—inevitably popping up whenever there’s a story on Charities Gone Wild. (Since 2005, The Washington Post has referenced “Charity Navigator” in 342 different articles; it has referenced “GuideStar” in 52.) It is the first Google search result for “charity ratings,” “best charities” and “where should I donate?” It is the closest thing the sector has to a household name, with charities routinely touting favorable ratings in press releases and elsewhere. Consider this anecdote from Marc Gunther’s profile of Michael Thatcher for Nonprofit Chronicles:
Not long ago, he was on his way to a meeting with a disgruntled charity in New York when he was stopped by a "chugger"—a street fundraiser, or charity mugger—and asked to donate to a well-known nonprofit. He explained that he likes to research his charitable giving. In response, the young woman opened a folder to show him that the group has a four-star rating from Charity Navigator.
In other words: Charity Navigator is huge.
Like the other watchdogs, Charity Navigator is not perfect. But its size and influence magnify its flaws. Critics believe the organization—which is uniquely positioned to champion charities and donors alike—has misused its platform, focusing on the wrong metrics and attempting to oversimplify the sector’s complicated inner workings, with disastrous results.
“There are two fatal flaws in the Charity Navigator approach: One, it misleads the public even when purportedly accurate, and two, it is often inaccurate,” said Geoffrey W. Peters, pro bono general counsel, American Charities for Reasonable Fundraising. “The public, press and politicians are encouraged to believe that Charity Navigator is an ‘impartial evaluator of publicly reported financial, accountability/transparency and results reporting’ that exists to ‘guide intelligent giving’ and ‘advance a more efficient and responsive philanthropic marketplace.’ Yet its choices of ratings criteria are instead—whether intentionally or inadvertently—designed to mask rather than reveal what should be intelligent giving choices. And its public behavior is anything but impartial, intelligent or promoting efficiency.”
Peters, a respected fundraiser and industry veteran, stressed that his comments were his own and do not reflect the views of any organization with which he is involved. But his feelings on Charity Navigator typify those of the fundraising community at large.
The biggest issue is Charity Navigator’s notorious use of overhead and fundraising ratio as primary factors in its ratings formula. This information is easy to obtain, but tells little about how effective a charity truly is and favors organizations that spend little on fundraising or administrative costs. It’s also easy to manipulate.
“This has led to Charity Navigator rating organizations that are all but overt scams with four stars and others that are leading the sector in implementing impact measurement with lesser ratings—as if Charity Navigator could really distinguish between a three-star level of effectiveness and a two- or four-star level,” said Peters. “Thus, the public is misled into believing these ratings have meaning, utility and accuracy, when the truth is the measures chosen are nothing more than conveniently and easily obtained from the Form 990.”
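The arithmetic behind these ratio critiques is simple enough to sketch. The figures below are hypothetical, and the functions are a simplified illustration of the two efficiency metrics described earlier (program percentage and cost to raise $100), not any watchdog’s actual formula; the point is only how much the numbers swing when joint solicitation costs are reclassified from program spending to fundraising, as CharityWatch does.

```python
# Illustrative sketch with hypothetical figures: how two common efficiency
# metrics shift when joint-cost solicitation expenses are treated as
# fundraising rather than program spending.

def program_percentage(program_expenses, total_expenses):
    """Share of total spending that goes to programs, as a percentage."""
    return round(100 * program_expenses / total_expenses, 1)

def cost_to_raise_100(fundraising_expenses, contributions_raised):
    """Dollars spent on fundraising per $100 of contributions raised."""
    return round(100 * fundraising_expenses / contributions_raised, 2)

# Hypothetical charity: $10M total expenses, $8M reported as program
# spending (of which $1.5M is joint-cost solicitation), $1.5M reported
# fundraising, $12M in contributions raised.
total, program, joint, fundraising, raised = (
    10_000_000, 8_000_000, 1_500_000, 1_500_000, 12_000_000
)

# As most charities report it (joint costs counted as a program expense):
as_reported = (
    program_percentage(program, total),
    cost_to_raise_100(fundraising, raised),
)

# With joint costs reclassified as a fundraising expense:
reclassified = (
    program_percentage(program - joint, total),
    cost_to_raise_100(fundraising + joint, raised),
)

print(as_reported)    # (80.0, 12.5)
print(reclassified)   # (65.0, 25.0)
```

The same charity, with identical books, looks like an 80-percent-program organization under one convention and a 65-percent one under the other, which is why critics say the ratios are both easy to compute and easy to manipulate.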
Other criticisms leveled at the site:
• It does a poor job of actually identifying when a nonprofit is crooked. Writes Jan Masaoka for Blue Avocado, an online nonprofit magazine: “When a nonprofit is extremely badly managed or is run by crooks, the charity raters are typically the last to know. Paradoxically, the raters wait for The New York Times to identify bad apples, and then they jump in to call the apple rotten. The Central Asia Institute (Three Cups of Tea) was found to be crooked by ‘60 Minutes’ at a time when it boasted four-star ratings (the top available) from the raters.”
• Despite its status as a nonprofit, Charity Navigator does not rate itself. While it plans to in the future—and freely offers its financial and results data on its website—it’s a bad look for an organization that advocates for transparency.
• It does little to defend charities from inaccurate reports or wrongful criticism. “What our sector needs is more courageous leaders to help shape public discourse and respond to false and misleading media headlines, not to wrongly follow the herd in order to gain favor with the media,” said Peters. “Charity Navigator, once again, missed that opportunity when a truly worthwhile charity recently was attacked in the national media, and instead of using its supposed expertise in reading the Form 990 to clearly refute the allegations, it instead placed the charity on its Watch List despite the correct information being readily ascertainable from [the organization’s] Form 990 and website.”
On this last point, Peters is referring to Wounded Warrior Project, the subject of investigations by CBS News and The New York Times alleging that the veterans charity spent $26 million on opulent staff parties and events. The investigations leaned heavily on financial data obtained from Charity Navigator, and while much of that information appears to have been misinterpreted—a closer reading of Wounded Warrior Project’s Form 990 reveals that $24.4 million of its events spending was a program expense—the damage was already done.
Three days after the reports were published, Charity Navigator added Wounded Warrior Project to its Watch List and Donor Advisory. At the time, Wounded Warrior Project had a four-star accountability and transparency score—Charity Navigator's highest.
Peters argues that Charity Navigator should have stepped in or spoken up to address the issue. But this goes against the watchdog’s stated policy. From Charity Navigator’s Donor Advisory methodology page:
The Donor Advisory Issuance Committee does not have the capability to independently assess the veracity or accuracy of the information, nor does it attempt to do so. The committee views its role solely as determining whether a donor might find such information relevant in considering whether to make a contribution to the organization. Charity Navigator makes no representation nor takes any position regarding the accuracy or completeness of the reports referred to in the Donor Advisory or contained in the external links to which donors are directed.
Charity Navigator views itself as a neutral observer, passing no judgment and adding charities to its Watch List and Donor Advisory only as a means of alerting donors to potential issues. And to its credit, it has been consistent in this policy and transparent in its methodology. But that hasn’t stopped the public—or the media—from viewing the lists as a condemnation, exemplified by this CBS News headline in the days following its initial Wounded Warrior Project report: “Wounded Warrior Project on Charity Navigator’s Watch List.” The post has 3,000 Facebook shares. A month later, Wounded Warrior Project was already losing major donors.
“When is the last time you recall Charity Navigator defending a misconceived attack by the media or a politician?” said Peters. “Perhaps that tells you something about its leadership in the sector.”
But if nonprofits view Charity Navigator as an adversary, the feeling isn’t mutual. The watchdog has been vocal in its opposition to overhead as the definitive ratings metric, partnering with BBB Wise Giving Alliance and GuideStar on “The Overhead Myth,” a website aimed at debunking the notion that financial ratios are the definitive measure of nonprofit performance. Launched in 2013, the site includes various resources and prominently features a pair of open letters—one addressed to donors, the other to nonprofits—urging a greater focus on impact and more attention to “transparency, governance, leadership and results.” Art Taylor, president of BBB Wise Giving Alliance, Jacob Harold, president and CEO of GuideStar, and Ken Berger, then president and CEO of Charity Navigator, signed off on each letter. All three organizations posted it to their websites.
Thatcher, too, has been vocal. Since taking over as president and CEO of Charity Navigator in mid-2015, he has pushed hard for a revised ratings system that focuses less on overhead and more on results. He has encouraged charities to do the same, calling for a unified effort to change the way the sector measures impact.
“We’ve always told donors not to judge charities solely based on their financial efficiency, which is reflected in our methodology with the financial health of a charity accounting for just 50 percent of its rating,” Thatcher told us. “We are working toward including measurements around results-reporting, so financial metrics will play an even smaller role in each charity’s rating. As you know, rating impact is a challenge and the sector needs to work collaboratively to get this right and to scale. Despite industry talk about the importance of judging charities on outcomes, our research into 3,000 charities with all types of missions revealed that few charities are actually measuring and reporting publicly on their results.”
He also seems acutely aware of Charity Navigator’s reputation among nonprofits, and has worked tirelessly to address it. He has routinely made himself available for interviews in industry publications, and has faced criticism head on—most notably at the DMANF luncheon—in an effort to keep an open dialogue. He takes industry feedback seriously. Thatcher says Charity Navigator:
• Plans to explore a vertical-based approach to ratings, in an attempt to better account for nuances in each nonprofit sub-sector. “Our first set of discussions will be taking place in April of this year, with humanitarian relief-related charities,” he said.
• Is open to working with sector-specific accreditation groups on ways to better report outcomes. Thatcher noted that a member of InterAction—the global NGO collective whose work with the International Aid Transparency Initiative is highly regarded—serves on Charity Navigator’s advisory panel. “I just met with [InterAction] last week in D.C. to explore additional ways in which we may be able to collaborate,” said Thatcher.
• Wants to revise its Watch List and Donor Advisory policies. “One element we are considering, based on the concerns raised at DMANF, is giving an organization more time to address a ‘red flag’ issue before posting it on the Donor Advisory and Watch List,” Thatcher explained. Currently, charities have just two days to respond to a potential issue before Charity Navigator adds them to one of its lists. (Note: After this article was originally published, Charity Navigator extended its Watch List response time.) There’s also some confusion over what each list means. At the DMANF luncheon, Thatcher said that the Watch List is viewed as “more toxic,” but the Donor Advisory is actually a more damaging claim. Charity Navigator plans to address this, though Thatcher didn’t say how.
More than anything, Thatcher genuinely seems to believe in the cause. He wants what’s best for donors. And he wants charities to be better. The question, for nonprofits, is whether Charity Navigator’s vision for philanthropy is realistic—and if Charity Navigator is even qualified or capable enough to deliver on it.
“The challenge is that one size doesn’t fit all when it comes to measuring impact,” said Shannon McCracken. “Charity Navigator has a rather small staff and it wants to significantly expand the number of organizations it is rating. There isn’t enough horsepower there. It seems like there will have to be reliance on the nonprofits to self-report their own impact—at which point Charity Navigator’s role becomes less clear. Are they grading our report or grading our impact? Are they attempting in any way to validate the accuracy of our information, and should that even be their role?”
The Impact Revolution
The bigger question might be whether anyone is capable of delivering on Charity Navigator’s vision. Measuring impact is difficult even for the largest nonprofits. It is an exhausting, expensive undertaking. And it is largely ambiguous. How do you truly quantify impact, anyway? A charity might be able to report exactly how many hot meals it provides for homeless veterans. And it might be able to show how many of those homeless veterans go on to get jobs after receiving the charity’s services. Maybe there’s a strong correlation between those two factors.
But how many of those homeless veterans also received mental health services from another nonprofit? How many were in a transitional housing program? Now imagine the original charity also runs an extensive awareness campaign for homelessness issues. How much does that factor in? Charting impact, here, would be messy. “The question for Charity Navigator is whether the immediate limitations on an organization’s ability to fully measure its positive impact should or would be treated as a delinquency in an impact-based ratings system,” said McCracken.
Practicality aside, not everyone agrees that measuring impact is the answer for the nonprofit sector. On March 1, Caroline Fiennes, founder and director of Giving Evidence, and Ken Berger, the former Charity Navigator head, published an article in Alliance Magazine titled “Oops: We Made the Nonprofit Impact Revolution Go Wrong.” It is a mea culpa of sorts. Fiennes and Berger have been staunch advocates for impact, the former serving on the boards of both Center for Effective Philanthropy and Charity Navigator, the latter spearheading Charity Navigator’s initial push for better impact measurement. In the article, the pair argues that the entire idea of measuring impact is built on a shaky foundation.
“Nonprofits and their interventions vary in how good they are,” they write. “The revolution was based on the premise that it would be a great idea to identify the good ones and get people to fund or implement those at the expense of the weaker ones. In other words, we would create a more rational nonprofit sector in which funds are allocated based on impact. But the ‘whole impact thing’ went wrong because we asked the nonprofits themselves to assess their own impact.”
This creates two problems. One, it encourages publishing only reports that show a positive result, and two, it requires nonprofits to invest in research they might not be able to execute effectively, if they can even afford it. This last point is illustrated in the homeless-veterans example earlier, but it is the first point that is most problematic. If an organization undertakes an impact study that shows a negative or inconclusive result, what incentive does it have to publish it?
“The dangers of having protagonists evaluate themselves are clear from other fields,” Fiennes and Berger write. “Drug companies—who make billions if their products look good—publish only half the clinical trials they run. The trials they do publish are four times more likely to show their products well than badly. And in the overwhelming majority of industry-sponsored trials that compare two drugs, both drugs are made by the sponsoring company—so the company wins either way, and the trial investigates a choice few clinicians ever actually make.”
If impact isn’t the answer, what is? And what does it mean for the ratings system—and the nonprofits that live and die by it? For better or for worse, Charity Navigator will try to find out.
“Part of what I think needs to happen is to create a desire for impact,” said Thatcher to the hushed DMANF luncheon crowd. “We’re looking at creating a means to rate that and we want that to be a key part of the ratings system. Because the most important part is how you articulate the results.”
He adds: “It’s going to be essential to moving forward.”