Evaluation professionals will no doubt agree that the evaluation of PROGRESA is the most frequently quoted and discussed reference at the events where this community gathers to exchange practices.
If this evaluation has become a reference, it is because of its methodological rigor, which made it emblematic of rigorous impact evaluations, its conclusive results on social variables, and its use by decision-makers, for whom it is the archetype of an influential evaluation.
A brief look back at the facts: in 1997, the Mexican government launched the first social benefit program in which payments were conditional on social behaviors, such as sending children to school, first and foremost girls, and attending regular preventive medical consultations. The transfers were paid primarily to women in order to enhance their empowerment.
The logic is simple: to make social transfers instruments for fighting poverty in the short term, through the cash itself, but also in the long term, by building up human and social capital.
The Mexican government was aware of the innovative nature of this program and set up a pilot, which it tested in randomly selected villages.
The evaluation was conclusive and the program a resounding success. PROGRESA was continued and extended by the following government, under the name Oportunidades, and conditional transfers were replicated all over Latin America and beyond. The evaluation went on to become the gold standard of the profession and made a major contribution to promoting rigorous impact evaluations.
With some distance from this experience, specialists (Banerjee and Duflo, Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty, 2011) now think it highly likely that the story could have been told very differently. Several studies, using the same methods, have shown that unconditional transfers have the same type of social impacts (Baird, McIntosh and Özler, 2009; Benhassine, Devoto, Duflo, Dupas and Pouliquen, 2010). Consequently, it is highly likely that the conditions attached to the Mexican benefits had only a marginal impact, and that the improvements observed were above all due to the transfers themselves. It will probably never be possible to decide between these two hypotheses, as the PROGRESA evaluation tested “conditional transfers against no transfers”, not “conditional transfers against unconditional transfers”.
This famous and influential example neatly summarizes the way in which the development community learns from its experience. The same sequence of events lends itself to several interpretations.
The first is the optimistic interpretation. There is certainly an accumulation of knowledge over time. With the increasing number of experiments and evaluations, the knowledge of what works is becoming more nuanced, more complex and richer.
The second interpretation is resolutely pessimistic. What was held to be one of the most firmly established truths in the development field, and inspired numerous programs in different countries, has in fact turned out to be a fragile and questionable result. One might feel that the succession of truths sometimes matters more than the accumulation of evidence.
The third interpretation is intermediate. It rests on the vision of a complex “knowledge market”, made up of stakeholders, which cannot be reduced to a cumulative stock of more or less firm “evidence” available to decision-makers.
There are knowledge communities that publicize information and influence decision-making: issue networks, epistemic communities, policy communities and advocacy coalitions.
These interfaces often play a key role in how questions are framed, methods are selected and knowledge crystallizes.
Between the positivist vision of the continuous growth of a vast reservoir of knowledge, a public good available to all, and the cynical vision of a succession of fashions with no memory, there is a realistic vision: a set of stakeholders – scientists, evaluators, decision-makers, think tanks and lobbyists – who exchange evidence and arguments, building experience irregularly, through ups and downs, steps forward and steps back, trial and error. Learning from experience may look unattractive at first sight, but it is nevertheless effective as long as the exchange process remains open to all the relevant stakeholders.