3 July 2018

Evaluating government action in Canada and the Western world

Summary: It was with the rise of the welfare state that evaluation emerged as an academic discipline. Even today, the notion of social impact measurement owes much to the jargon developed in the world of evaluation.

In Quebec, Canada, the United States and the Western world at large, the first initiatives aimed at systematically evaluating the impact of an intervention on society seem to have developed within the framework of the evaluation of government projects and programs. This practice can be traced back to as early as the 18th century and divided into various periods, which vary from one author to another.

  • Hogan (2007, p. 46), drawing on Madaus, Stufflebeam and Scriven (1983), speaks of ages of reform (1792–1900), efficiency and testing (1900–1930), Tyler (1930–1945), innocence (1946–1957), development (1958–1972), professionalization (1973–1983) and finally expansion and integration (1983–present).
  • Vedung (2010) speaks of four waves of evaluation, each linked to a more general political ideology: the scientific, rationalist and experimental wave in the 1960s; the dialogical, more constructivist and deliberative wave in the 1970s; the neoliberal wave from the 1980s; and, finally, the wave of evidence-based judgment since the 1990s.
  • Finally, Guba and Lincoln (1989) speak of four generations, those of measurement (1890–1930), description (1930–1967), judgment (1967–1979) and negotiation (1979–2000). For more details, see the box on this subject.

Generations of evaluation

In a widely cited book on the subject, Guba and Lincoln (1989) identify four generations of evaluation, each corresponding to a distinct approach. We quote Fontan (2013) to summarize them:

  1. According to these authors, the first generation, from 1890 to 1930, is a period known as the measurement period. The function of evaluation is then to measure success by means of various tests. The role of the evaluator is “technical,” and the evaluation “allows a gap to be observed between the objectives sought and the results achieved.”
  2. The second generation, from 1930 to 1970, [seeks to] describe what is being evaluated. […] The function of second-generation evaluation is not only to measure gaps, but also to explain the observed distance between objectives and results. The role of the evaluator expands to allow him or her to take into consideration elements outside of what is measured.
  3. The third generation, from 1970 to 1980, [… seeks] to arrive at a neutral judgment on the object being evaluated. It is then asked to establish criteria for effectiveness [and] to make a judgment as to whether the object under evaluation meets the identified criteria. The evaluator’s role is to judge the value and merits of the evaluated object.
  4. The fourth generation, from 1980 to 2000, “constitutes what is called a negotiated (or constructivist) assessment” (Guba and Lincoln, 2001). Beyond the measurement technique, the description of reality and the judgment, the evaluation of an object involves actors with different interests. Therefore, evaluation, in order to be objective, requires that an agreement be negotiated between the parties so that the interests of each are taken into consideration and respected. We then witness a joint construction, among the stakeholders, of the overall parameters that should govern the purpose and implementation of the evaluation. The function of the evaluation then consists in making a collective judgment on the object evaluated. Assessing the effectiveness of the object evaluated requires that the stakeholders be involved in the evaluation process. The evaluator’s role is one of mediation: he or she must act as much as a negotiator as a researcher.

(Fontan, 2013, pp. 5–7, our translation)

Knowledge of these generations of evaluation is useful insofar as social economy enterprises are increasingly looking to “adapt the evaluation method to the specific characteristics and needs of their organizations,” to “attenuate the problems arising from a contractualization involving unequal actors,” and to “use evaluation to demonstrate the legitimacy, relevance and usefulness of a mode of intervention that, although desired by the population, is partially recognized by the state and poorly taken into account by the market” [1] (Fontan, 2013, p. 3, our translation). Thus, the increased use of a fifth-generation evaluation would make it possible to move beyond spontaneous evaluation and mere assessment, to internalize evaluative practice, and to take political and ethical ownership of evaluation (Fontan, 2013, pp. 20–21).


It was during the 1960s and 1970s that evaluation became truly professionalized (Duclos, 2007, p. 102; Hogan, 2007, p. 6; Rossi, Lipsey and Freeman, 2003, p. 9; Zappalà and Lyons, 2009, p. 6), and it is this period that interests us specifically.

This professionalization is marked in particular by a systematization of the representations and terms used, such as the logic model, which itself gave rise to the theory of change in the 1990s. This modeling approach, which consists of breaking down an organization’s intervention into expected and observed activities and changes, constitutes the skeleton of most methods related to impact evaluation today.
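
To make the idea of a logic model more concrete, here is a minimal, purely illustrative sketch in Python. The category names (inputs, activities, outputs, outcomes, impacts) reflect common evaluation usage rather than a structure prescribed by the sources cited above, and the job-training example is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal illustrative logic model: each list holds short descriptions."""
    inputs: list[str] = field(default_factory=list)      # resources mobilized
    activities: list[str] = field(default_factory=list)  # what the organization does
    outputs: list[str] = field(default_factory=list)     # direct, countable products
    outcomes: list[str] = field(default_factory=list)    # expected changes for participants
    impacts: list[str] = field(default_factory=list)     # longer-term changes attributed to the intervention

# Hypothetical example: a job-training program
model = LogicModel(
    inputs=["funding", "trainers"],
    activities=["weekly workshops"],
    outputs=["120 participants trained"],
    outcomes=["participants find stable employment"],
    impacts=["reduced local unemployment"],
)
```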

It is also in this context that the Quebec input-output model of the ISQ (Institut de la statistique du Québec) was developed in the late 1960s. Furthermore, the increase in government spending and, consequently, the demand for information on the results and desirability of these investments gave rise to cost-benefit analysis (CBA). Although this technique was not designed for this purpose, it is now sometimes applied to the evaluation of the activities and effects of smaller social economy organizations. Indeed, CBA aspires to consider all the costs and benefits of an intervention, whether or not they are traditionally traded on the market (in which case it is easier to establish a price). Trelstad (2014, p. 585) argues that it is the combination of traditional evaluation and CBA that underlies contemporary social impact measurement.
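
The arithmetic behind CBA can be sketched very simply: monetize all benefits and costs (including those without a market price), discount them to the present, and compare. The figures and the 5% discount rate below are assumptions chosen only for illustration, not drawn from the sources cited here.

```python
def net_present_value(benefits, costs, discount_rate):
    """Sum of discounted (benefit - cost) flows; year 0 is undiscounted."""
    return sum(
        (b - c) / (1 + discount_rate) ** year
        for year, (b, c) in enumerate(zip(benefits, costs))
    )

# A fictitious three-year intervention: heavy cost up front, benefits later.
benefits = [0, 50_000, 80_000]
costs = [100_000, 10_000, 10_000]
print(round(net_present_value(benefits, costs, 0.05)))  # a positive value suggests a net social benefit
```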

Today, the field of evaluation has matured in several respects and has its own terminology which, despite some persistent confusion across authors and translations (Marceau and Sylvain, 2014), is tending to stabilize. Several of these definitions can be consulted in our Glossary.

According to Mortier (2014), the “recent emphasis on the importance of social impact evaluation” draws not only on “public policy evaluation” but also, “in parallel, [on the evaluation of] associations mainly financed by public authorities for missions of general interest. This is particularly the case for development cooperation agencies, which have been asked for many years to evaluate their practices, results and impacts” (p. 3, our translation). The next section presents how the evaluation practices of organizations active in the international development and sustainable development communities have evolved.

[1] Original quotations: “adapter la méthode évaluative aux caractéristiques et aux besoins spécifiques de leurs organisations”; “atténuer les problèmes découlant d’une contractualisation mettant en scène des acteurs inégaux”; and “utiliser l’évaluation pour démontrer la légitimité, la pertinence et l’utilité d’un mode d’intervention qui, bien que désiré par la population, est partiellement reconnu par l’État et faiblement pris en compte par le marché” (Fontan, 2013, p. 3).