Note: Where no source is indicated, the definition was developed by TIESS.
Social impact measurement and evaluation
Evaluation: A systematic approach to assessing the value of an intervention.
Impact evaluation: A systematic approach to estimating the consequences attributable to an intervention.
Social impact measurement: Activity of assessing the effects (or outcomes) of an intervention.
– The word measure literally means “to determine, to evaluate according to a standard, using an instrument.” In a figurative sense, it can also be seen as a synonym for evaluation.
– The word “impact” refers to the results, outcomes, effects or consequences of an action.
– The word “social” generally refers to the consideration of various aspects beyond purely economic considerations.
To learn more, see the Definition and main steps section of the web portal and the entry for the word “Impact” below.
The logic model
Activity: A process or operation that produces the outputs of an intervention from its inputs.
The main activities of an organization may be more or less specific and may include several sub-activities that can be organized in a hierarchical structure. Examples of activities include training, research, infrastructure construction, information production and negotiation (Leblanc-Constant & Bard, 2013, p. 1).
Impact: The portion of the total outcome that happened specifically as a result of the activity of the intervention, above and beyond what would have happened anyway (Clark et al. 2004).*
Input: Human, financial, material or informational resources used to carry out production activities.
These include, for example, staff to deliver the services, financial resources, space, vehicles, software, etc. (Leblanc-Constant & Bard, 2013, p. 13).
Intervention: Action taken with the intention of having an effect on society (Leblanc-Constant & Bard, 2013, p. 13). In reference to the logic model, it is the set of chains of action (processes) that make it possible to mobilize resources (inputs) to carry out activities and produce services in order to achieve results (outputs and outcomes) (Champagne, Brousselle, Hartz & Contandriopoulos, 2011, p. 74).
Logic Model: A picture of how your organization does its work, in other words, the theory and assumptions underlying the program. A logic model links outcomes (both short- and long-term) with program activities/processes and the theoretical assumptions/principles of the program. (W.K. Kellogg Foundation, 2004, p. III).
Outcome (effet): Consequence of an intervention (Marceau & Sylvain, 2014, p. 17).
Output: Observable and measurable goods or services (a briefing paper, park development, information, a grant, etc.). Because their production is usually under the exclusive control of the organization, they are usually easier to report on than outcomes (Leblanc-Constant & Bard, 2013, p. 11).
*The French version of this glossary does not provide a separate definition of the term “impact” because impact is subsumed, together with outcome, under the concept of effet (see the French version of the logic model, below). This means that the focus on differentiating between impact and outcome, as in the definition by Clark et al. (2004), is less crucial in the practice of evaluation and measurement in the French-speaking world, which considers all consequences attributed to an intervention (effets).
Moreover, since TIESS began its work on social impact measurement in 2016, the English definition of impact has broadened to include all sorts of consequences of an action, thereby approximating the French notion of effets (see next paragraph). At the same time, the literal translation “impact,” as a synonym of effet, is becoming increasingly popular in French, with the result that what used to be considered an improper anglicism is now quite mainstream.
Impact: “A result or effect that is caused by or attributable to a project or program. Impact is often used to refer to higher level effects of a program that occur in the medium or long term, and can be intended or unintended and positive or negative” (Impact Management Project, 2020).
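The distinction between outcome and impact in the Clark et al. (2004) definition can be sketched as a simple subtraction: impact is the observed outcome minus an estimate of the counterfactual, that is, what would have happened anyway. A minimal illustration, with all figures invented for the sketch:

```python
# Hypothetical figures for a job-training program (invented for illustration).
observed_outcome = 120  # participants employed one year after the program
counterfactual = 45     # estimate of participants who would have found work anyway

# Impact, in the sense of Clark et al. (2004): the portion of the outcome
# attributable to the intervention, above and beyond the counterfactual.
impact = observed_outcome - counterfactual
print(impact)  # 75
```

In practice the counterfactual is never observed directly; it must be estimated, for example by following a comparison group that did not receive the intervention.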
Types of evaluation ‒ approaches
Constructivist evaluation: A form of evaluation that builds on the founding principles of the constructivist paradigm in terms of ontology, epistemology and methodology (Guba & Lincoln, 2001, p. 1). Also referred to as a fourth-generation evaluation by Guba and Lincoln (1989).
Developmental evaluation: An evaluative approach designed to support learning in complex and changing contexts (Gamble, 2008; Meunier, 2013).
Directive evaluation: Evaluation in which the evaluator adopts the role of expert, neutral and distant from the object being evaluated, and makes decisions alone. The role of the other actors is that of a source of information (Ridde & Dagenais, 2009, p. 27).
Emancipatory evaluation: Evaluation focused primarily on fostering empowerment, namely in that the evaluator takes the position of facilitator and the participants make the decisions (Ridde & Dagenais, 2009, p. 28).
External evaluation: Evaluation involving the participation of an individual or team independent of the team carrying out the intervention.
Formative evaluation: Evaluation whose objective is to paint a picture of the operation of activities with a view to improvement during the implementation of an intervention (Rondot & Bouchard, 2003).
Internal evaluation: Evaluation carried out by those in charge of the action and its agents (Eme, Fraisse & Gardin, 2000, p. 22).
Negotiated evaluation: Evaluation in which the organization is a recognized partner of a government funding body (used in the context of the evaluation of community action in Quebec) (Gaudreau & Lacelle, 1999, p. 8).
Participatory evaluation: Evaluation where all stakeholders of the project, from the project team members to the users or beneficiaries, have the opportunity to provide feedback on the project and, if appropriate, to influence its development and/or future projects (ÉvalPop, 2016).
Practical evaluation: A type of participatory evaluation that serves a specific purpose, such as solving a problem, improving a program or informing decision-making (Ridde & Dagenais, 2009, p. 28).
Principles-focused evaluation: Evaluation to determine whether the principles are well defined and applicable, the extent to which they are being followed, and whether they are leading to the desired results (Patton, 2017).
Results-based management: A management approach that gives priority to results and puts this principle into practice in all aspects of management (Gouvernement du Québec & Secrétariat du Conseil du trésor, 2014, p. 12).
Summative evaluation: An evaluation whose objective is to draw conclusions and make judgments about the value of interventions. It is carried out at the end of an intervention, usually with a view to accountability (Rondot & Bouchard, 2003).
In this case, the evaluation supports decision-making, for example, to modify a program or to allocate funding.
Utilization-focused evaluation: An evaluative approach that assumes that the evaluation must be designed to ensure that its intended recipients will use it (Patton, 2008, 2013).
Types of evaluation according to the stages of the intervention
Needs assessment: An assessment to determine needs, understood as the difference between the current situation and the desired situation (Leblanc-Constant & Bard, 2013, p. 8).
Evaluation of relevance: An evaluation of the relevance of the objectives (often related to strategic planning). It involves determining the appropriateness of the link between the explicit objectives of the intervention and the nature of the problem it is intended to solve or address (Champagne et al., 2011).
Implementation evaluation: Also known as process evaluation, implementation evaluation verifies whether the activities were carried out as planned and whether the resources (inputs) and products (outputs) conform to expectations. There is a wide variety of implementation-related evaluations. Some are more specific and focus mainly on activities, while others are broader and aim to examine observed outcomes in a context-sensitive manner.
Effectiveness evaluation: Evaluation comparing the results obtained to the objectives of an action (Marceau & Sylvain, 2014, p. 23).
Efficiency evaluation: Evaluation linking the results obtained with the resources (inputs) of an action.
Results evaluation: An evaluation that looks at both the outputs and outcomes of the intervention. Sometimes also called a performance evaluation.
Impact evaluation: A systematic approach to estimating the consequences attributable to an intervention.
Vocabulary related to indicators
Criterion: Element, character or property on the basis of which an assessment is made or a judgment is formed (Leblanc-Constant & Bard, 2013, p. 5).
Dimension: A significant aspect of something (Larousse, 2017).
Indicator: A measure used to assess or evaluate results, use of resources, status, context, etc. An indicator is used to assess a phenomenon qualitatively or quantitatively using data or information as a benchmark (Leblanc-Constant & Bard, 2013, p. 12).
Intangible: Characteristic of an effect that is difficult to measure objectively with an external and standardized measuring instrument.
Monetization: The act of assigning a monetary value to outcomes that are not traditionally traded on the market.
Proxy: An indirect measure that represents or provides an approximation of a phenomenon or concept that is impossible or difficult to measure directly (Leblanc-Constant & Bard, 2013, p. 13). These indicators are regularly used in SROI or CBA methods to assign a monetary value to an outcome that is not traditionally traded on the market.
Standard: Documented rules intended to guide and harmonize the activity of a sector or to improve the practice of a profession. A standard may also be a technical specification approved by a recognized body or a data item established as a point of comparison (Leblanc-Constant & Bard, 2013, p. 14).
Target: The measurable performance or level of success expected by an organization or expected from a program or initiative over a given period of time. Targets can be quantitative or qualitative and are appropriate for both outputs and outcomes. A target could be, for example, that 70% of households in Canada will own their own home by 2006 (Treasury Board of Canada Secretariat, 2015).
Variable: An observable or measurable characteristic that can take on different values, both quantitative and qualitative. Variables are said to be “dependent” when they are influenced by other variables and “independent” when they influence or explain variations in another variable. Variables can be nominal, ordinal or numerical in nature (Leblanc-Constant & Bard, 2013).
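The entries for “Proxy” and “Monetization” above note that SROI and cost-benefit methods use financial proxies to assign a monetary value to outcomes not traded on the market. A minimal sketch of that arithmetic, using invented figures and a hypothetical proxy:

```python
# Hypothetical monetization of a well-being outcome (all figures invented).
participants_with_improved_wellbeing = 30
proxy_value_per_person = 1200.0  # e.g., annual cost of a comparable counselling service
program_cost = 20_000.0          # total resources (inputs) of the intervention

# Monetized outcome: quantity of outcome times its financial proxy.
monetized_outcome = participants_with_improved_wellbeing * proxy_value_per_person

# SROI-style ratio: monetized value created per dollar invested.
sroi_ratio = monetized_outcome / program_cost
print(sroi_ratio)  # 1.8
```

The choice of proxy drives the result, which is one reason commensuration (see “Vocabulary related to issues”) is debated.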
Vocabulary related to issues
Accountability: “A relationship based on the obligations to demonstrate, review, and take responsibility for performance, both the results achieved in light of agreed expectations and the means used” (Office of the Auditor General of Canada, 2002).
Attribution: Confirming a causal relationship between observed (or expected) changes and a specific action (OECD, 2002).
Business washing: A situation where a non-profit organization wants to appear more professional and comparable to a traditional business, without necessarily being so (Morley, 2017). Used by analogy with greenwashing, where a company seeks to appear more environmentally responsible than it really is.
Causality: The relationship between cause and effect (Hutchinson, 2018).
Collective goods and services: Goods and services that we wish to make accessible to all, notably because their use provides the entire community with benefits that far outweigh their cost. Their consumption or production can be a source of positive externalities (Bouchard et al., 2017, p. ix).
Commensuration: The process of transforming qualities into quantities that can be compared because they share a common unit of measurement (Espeland & Sauder, 2007, p. 16). Assigning a monetary value is the most typical example of commensuration, but any ranking or rating has the same effect.
Consequentialism: A set of moral theories that maintain that it is the consequences of a given action that must form the basis of any moral judgment of that action (Wikipedia, 2020).
Contribution: Establishes that, in light of the multiple factors influencing a result, the intervention made a noticeable contribution to an observed result (Mayne, 2012).
Counterfactual: The situation or condition which hypothetically may prevail for individuals, organizations or groups were there no intervention (OECD, 2002).
Demonstrating strategy: A type of evaluation oriented toward distinction, aimed primarily at differentiation or accountability, and directed mainly at an audience external to the organization.
Deontology: Deontological ethics or deontologism (derived from a Greek word meaning “obligation” or “duty”) is the ethical theory that states that every human action should be judged according to its conformity (or non-conformity) to certain duties (Wikipedia, 2020).
Ethics: From the Greek ethos “character, custom, morals,” ethics is a philosophical discipline dealing with value judgements (Wikipedia, 2020).
Externality: The consequences or effects that an activity has on third parties not directly involved in that activity, without these effects giving rise to a payment or transaction (Bouchard et al., 2017, p. 17). Externalities can be positive or negative.
Goal displacement: The replacement of the goals initially pursued by new objectives directly linked to the indicators put forward.
Information asymmetry: Information asymmetry occurs when some of the participants have relevant information that others do not. Information asymmetries can thus create an imbalance between producer and consumer or between seller and buyer (Bouchard et al., 2017, p. 18).
Learning strategy: A type of evaluation oriented toward learning, aimed at understanding action and improving management, and directed mainly at an audience internal to the organization.
Reactivity: Refers to a situation where organizations or individuals alter their behaviour in reaction to being evaluated, observed or measured (Espeland & Sauder, 2007, p. 6).
Self-fulfilling prophecy: Processes by which reactions to social measures confirm expectations or predictions that are embedded in measures or which increase the validity of the measure by encouraging behavior that conforms to it (Espeland & Sauder, 2007, p. 11). For example, if the media publicize a poll claiming that a candidate doesn’t have a chance of being elected and, as a result, voters decide to strategically vote for someone else, the candidate will indeed not be elected.
Vocabulary related to organizational cultures
Evaluative culture: The habit of seeking evidence about the results the intervention is bringing about with the purpose of deliberately learning from this information (Mayne, 2017, p. 8).
Learning culture: When an organization uses reflection, feedback, and sharing of knowledge as part of its day-to-day operations. It involves continual learning from members’ experiences and applying that learning to improve. Learning cultures take organizations beyond an emphasis on program-focused outcomes to a more systemic and organization-wide focus on sustainability and effectiveness. It is about moving from data to information to knowledge (Center for Nonprofit Excellence, 2016).
Learning organization: An organization where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together (Senge, 1990).
Organizational culture: A shared and learned world of experiences, meanings, values, and understandings that inform people and that are expressed, reproduced, and communicated partly in symbolic form, and also partly in functional and practical actions (Alvesson, 2010).
Organizational evaluation capacities: A concept that includes both individual capacities (i.e., the evaluator’s technical and interpersonal skills as well as the knowledge of managers and other members of the organization) and the organizational systems, structures and tools used to produce and use evaluations (Bourgeois and Valiquette L’Heureux, 2018, p. 131).
Alvesson, M. (2010). Organizational Culture: Meaning, Discourse, and Identity. In N. M. Ashkanasy, C. P. M. Wilderom & M. F. Peterson (Eds.), The Handbook of Organizational Culture and Climate. Sage Publications.
Bouchard, M. J., Leduc Berryman, L., Léonard, M., Matuszewski, J., Rousselière, D., & Tello Rozas, S. (2017). Analyse du rôle du réseau d’investissement social du Québec (RISQ) dans l’écosystème d’économie sociale et estimation des retombées économiques et fiscales de ses investissements—1998-2014. Université du Québec à Montréal / E&B Data.
Bourgeois, I. & Valiquette L’Heureux, A. (2018). Le renforcement des capacités organisationnelles en évaluation : une démarche axée sur les parties prenantes. In M. Hurteau, I. Bourgeois & S. Houle (Eds.), L’évaluation de programme axée sur la rencontre des acteurs. PUQ.
Bureau du vérificateur général du Canada. (2002, December 1). Chapitre 9 — La modernisation de la reddition de comptes dans le secteur public. Retrieved from http://www.oag-bvg.gc.ca/internet/Francais/parl_oag_200212_09_f_12403.html
Center for Nonprofit Excellence. (2016, May 11). What’s a Learning Culture & Why Does It Matter to Your Nonprofit? Center for Nonprofit Excellence in Central New Mexico. https://www.centerfornonprofitexcellence.org/news/whats-learning-culture-why-does-it-matter-your-nonprofit/2016-5-11
Champagne, F., Hartz, Z., Brousselle, A. & Contandriopoulos, A.-P. (2011). L’appréciation normative. In L’évaluation : concepts et méthodes (pp. 87‑104). Montréal, QC: Presses de l’Université de Montréal.
Champagne, F., Brousselle, A., Hartz, Z. & Contandriopoulos, A.-P. (2011). Modéliser les interventions. In L’évaluation : concepts et méthodes (2nd ed., pp. 71‑84). Montréal: Presses de l’Université de Montréal.
Clark, C., Rosenzweig, W., Long, D., & Olsen, S. (2004). Double bottom line project report: Assessing social impact in double bottom line ventures. http://escholarship.org/uc/item/80n4f1mf.pdf
Espeland, W. N. & Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1‑40.
ÉvalPop. (2016). Lexique. Retrieved December 12, 2016, from https://evalpop.com/ressources/lexique/
Gamble, J. A. A. (2008). ABC de l’évaluation évolutive. Fondation de la famille J. W. McConnell.
Gaudreau, L. & Lacelle, N. (1999). S’approcher de l’évaluation. In Manuel d’évaluation participative et négociée. Montréal: Université du Québec à Montréal, Service aux collectivités.
Gouvernement du Québec & Secrétariat du Conseil du trésor. (2014). Guide sur la gestion axée sur les résultats. Retrieved from http://www.tresor.gouv.qc.ca/cadredegestion/fileadmin/documents/publications/sct/GuideGestionAxeeResultat.pdf
Guba, E. G. & Lincoln, Y. S. (1989). Fourth generation evaluation. Sage.
Guba, E. G. & Lincoln, Y. S. (2001). Guidelines and checklist for constructivist (a.k.a. fourth generation) evaluation. Retrieved January 23, 2010.
Hutchinson, K. (2018). Evaluation Glossary. Community Solutions Planning & Evaluation. Retrieved from http://communitysolutions.ca/web/evaluation-glossary-2/
Impact Management Project. (2020). Glossary. Impact Management Project. https://impactmanagementproject.com/glossary/
Larousse. (2017). Définitions : dimension. Dictionnaire de français Larousse. Retrieved from http://www.larousse.fr/dictionnaires/francais/dimension/25585
Leblanc-Constant, M. & Bard, C. (2013). Glossaire des termes usuels en mesure de performance et en évaluation : pour une gestion saine et performante (edited by the Conseil du trésor). Retrieved from http://collections.banq.qc.ca/ark:/52327/2440384
Marceau, R. & Sylvain, F. (2014). Dictionnaire terminologique de l’évaluation : politiques, programmes, interventions : la dimension conceptuelle. Québec: Les Éditions GID.
Mayne, J. (2017). Building evaluative culture in community services: Caring for evidence. Evaluation and Program Planning. https://doi.org/10.1016/j.evalprogplan.2017.05.011
Meunier, A. (2013, November). Qu’est-ce que l’évaluation évolutive? Communagir.
Morley, J. (2017). The impact of “impact”: The effect of social impact reporting on staff identity and motivation at social enterprises and charities in the UK. Working paper.
OECD. (2002). Glossaire des principaux termes relatifs à l’évaluation et la gestion axée sur les résultats.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage Publications.
Patton, M. Q. (2013). Utilization-focused evaluation (U-FE) checklist. Evaluation Checklists Project. Retrieved from http://www.managingforimpact.org/sites/default/files/ufe_checklist_2013.pdf
Patton, M. Q. (2017). Principles-focused evaluation: The guide. Guilford Publications.
Ridde, V. & Dagenais, C. (2009). Approches et pratiques en évaluation de programmes. Presses de l’Université de Montréal.
Rondot, S. & Bouchard, M. (2003). L’évaluation en économie sociale : petit aide-mémoire. Montréal: ARUC-économie sociale.
Secrétariat du Conseil du Trésor du Canada. (2015). Lexique de la gestion axée sur les résultats. Retrieved January 29, 2018, from https://www.canada.ca/fr/secretariat-conseil-tresor/services/verifications-evaluations/centre-excellence-en-evaluation/lexique-gestion-axee-resultats.html
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Currency.