Why don’t foundations talk more about failure?
It’s not me asking that question, but Michael Remaley, a frequent contributor to this blog, in a post on the Foundation Center’s Transparency Talk. While his post is directed, more or less, at foundation leadership, the points he raises make it must reading for communications staff as well.
“Failure,” he writes, “is part of the equation of philanthropic experimentation.” And, he adds, “The hard sciences learned the importance of sharing candid assessments of ‘failed’ experiments centuries ago. In fact, scientists seem to treasure results that do not meet expected outcomes even more highly than those that confirm what is already believed to be true.” But with foundations, “you almost never hear them talking about outcomes that failed to meet expectations, and even more rarely, those that call their basic strategies into question.”
Remaley does more than restate what others have said before about why talking about failure makes foundations so uncomfortable, a subject Pittsburgh Foundation President Grant Oliphant explored in a video conversation with the Communications Network here.
Instead, he goes the extra step to show that despite the commitment of foundations to openly discuss what they are learning from their work, it’s still virtually impossible to find more than a handful that make discussion of failure routine.
As he explains: “To substantiate my assertion, I decided to do a little systematic poking around.

“I figured the 21 largest supporters of the Center for Effective Philanthropy (most of which are also supporters of Grantmakers for Effective Organizations) would be the foundations most attuned to the value of self-reflection, evaluation, and sharing results that defy expectations, and also those with budgets big enough to support substantial evaluation efforts. I spent many hours exploring the nooks and crannies of these foundations’ websites. I looked at numerous publications and the evaluation sections of the sites, and I searched each site on the terms failure, failed, unmet expectations, unmet objective, unmet goal, experimentation, mistake, lessons learned, and assessment.

“What I found was that few foundations make it easy to learn from projects that didn’t go as spectacularly as planned, let alone talk frankly about what has been learned from the shortcomings of foundation strategy or execution. Many of the 21 foundations I examined made no mention at all of evaluation criteria and organizational outcomes, even though their association with CEP and GEO implies that they demand that kind of forthrightness from grantees. The majority of the foundation sites I examined had a few project evaluation reports scattered among other foundation-supported research, and many of those evaluation reports were laudatory, with pablum like ‘real collaboration is a challenge’ tacked on at the end.”
Remaley did find some notable exceptions, including the Robert Wood Johnson, William and Flora Hewlett, and Wallace Foundations. These three, he notes, “not only make it easy to find many project evaluations that are balanced in presenting positive and negative outcomes along with what was learned through the process, but also present self-critical examinations of foundation strategy and progress as a whole.”
Remaley, though, gives “gold star” status to the James Irvine Foundation.
The evaluation section of its site describes its approach to evaluating grantee success and links to all of its individual evaluations of initiatives. It also links to a Foundation Assessment section with annual progress reports for the last four years. These progress reports are exceptionally detailed and well documented, as well as frank about successes and failures. Irvine has also produced “Insights: Lessons Learned” publications with candid assessments of its experiences with collaborations and other grantmaking practices. A search of the Irvine site on “lessons learned” turns up a wealth of useful evaluative information and insightful critical analysis.
In the end, he says (and it’s something communicators need to own, too) that more good is likely to come if more foundations talk “humbly about their shortcomings,” an activity that can serve the twin goals of advancing a foundation’s mission and social progress while improving its organizational reputation. What clinches it for me, though, is his concluding thought: “We’ve seen no evidence that talking forthrightly about the real-world circumstances leading to failure damages nonprofits or the foundations involved.” Read the full post here, and then answer the question:
What more can foundations do to talk about failure?