Apr 15, 2010
How Not to React to Rigorous Evaluation
The past few weeks have provided some insight into the impact of rigorous evaluation of philanthropic programs on charities, donors and policymakers. Unfortunately, those insights show that we’ve still got a long way to go if the goal is evidence-based philanthropy and policy.
First, as we’ve covered extensively, the first two highly rigorous evaluations of the impact of microcredit have been published in the last six months. Both studies (one in India, led by Esther Duflo and Abhijit Banerjee of J-PAL; the other in the Philippines, led by Dean Karlan and Jonathan Zinman of IPA) found some positive, but quite small, effects. They did not find the “escape from poverty, empower yourself, send all your children to school, invest in your business, create jobs, access quality healthcare”-type benefits that are the staples of microfinance charity marketing. This week, six major microfinance charities issued a response to the studies. That response has been thoroughly parsed, and found wanting, by David Roodman of CGD, Rich Rosenberg of CGAP and Sushmita Meka of IFMR.
I’ll join them in noting that the response is incredibly disappointing for two reasons. First, it fails to take responsibility for creating the heightened expectations of microfinance that have led many casual observers to conclude that microcredit is a “failure” because it did not do all the miraculous things claimed on its behalf. In fact, the majority of the response is simply a repetition of the same nice stories that caused the distorted perceptions in the first place. Indeed, my summary of the response would be, “Who are you going to believe, our nice stories or their lying data?” Second, by using an array of passive-aggressive language and phrasings (e.g. welcoming further research by “qualified academics”), the charities essentially said, “We’re not interested in engaging with outside experts except to undermine them. We’ll just keep doing what we’re doing.”
And that’s really the message from the microfinance charities in their statement. They are going to continue telling nice stories and operating exactly as they have been despite the data from rigorous evaluations.
On the other hand, this week also saw the release of early findings from the rigorous evaluation of New York City’s Opportunity NYC program. Modeled on conditional cash transfer programs that have shown success in Mexico, Brazil and other developing countries, Opportunity NYC was a pilot program funded by foundations, not by local government. It aimed to determine whether paying recipients for positive behaviors (school attendance, gaining job skills, making primary care visits) would yield the kind of outcomes that have been so elusive in welfare programs. The results were quite similar to those of the rigorous microcredit evaluations: some interventions yielded meaningful, but small, benefits; others didn’t seem to make a difference at all. In contrast to the microfinance charities, who plan on steaming ahead regardless of the data, the Opportunity NYC program is being canceled. The logic seems to be: “This didn’t entirely change the lives of the poor in 12–18 months. Therefore it is worthless.”
This reaction is every bit as wrong as that of the microfinance charities. But both responses to rigorous evaluation stem from the same source: an illogical belief that large, dramatic changes in short time frames are even possible. In the case of microfinance, the charities seem determined to continue to defend the illusion of such change. In New York, the authorities apparently think they can still find something that will yield miraculous changes. As I’ve written before, this is the reason that Patient Optimists are so necessary. We need a cadre of charities, donors and policymakers who will look at rigorous evidence that shows small gains and celebrate, not lament, and then look at the data again to see how we can extend or expand those small benefits.