30 July 2020

10. Strategies for demonstrating and learning


Summary: Evaluation, impact measurement and accountability are sometimes used as synonyms, even though they refer to different exercises. Two main strategies emerge in the field: evaluation that demonstrates impact (oriented toward an external audience) and evaluation that enables learning (oriented toward an internal audience). Although it is sometimes possible to combine several objectives in a single approach, organizations engaging in evaluation and their funders should keep these differences in mind and agree on the strategy they wish to pursue.

Two main strategies

The section Why evaluate? states that organizations pursue and value social impact measurement and evaluation in order to advance: understanding, learning, planning, improving, supporting accountability and convincing. According to Agence Phare and Avise, two French think tanks specializing in the social economy, the two main motivations that stand out in the literature are:

  1. Supporting accountability, motivated by a desire to stand out or be accountable to an external audience. This could also be seen as a strategy of differentiation.
  2. Better understanding the action and improving management. This approach is primarily aimed at an internal audience. It is a strategy designed to orient action.

Source: Dahlab, Mounier, & Kemem, 2017, p. 9

The Ontario Nonprofit Network (ONN) defines two similar strategies by differentiating learning-focused evaluation from measurement work for the purpose of accountability (Taylor & Liadsky, 2017, p. 12).

In fact, most approaches fall between these two major strategies, one being an evaluation oriented toward demonstrating (more directed at an audience that is external to the organization) and the other being an evaluation oriented toward learning (more directed at an audience that is internal to the organization).

Choosing a strategy

The ONN observes that evaluation and impact measurement are often associated with the idea of accountability, which explains why some non-profit organizations consider these exercises a cumbersome endeavour that is of little use to their organization and is imposed by a funder (Taylor & Liadsky, 2016, pp. 11‒12).


Definition of accountability

Accountability is “a relationship based on obligations to demonstrate, review, and take responsibility for performance, both the results achieved in light of agreed expectations and the means used” (Office of the Auditor General of Canada, Government of Canada, 2002).



Avise and the ONN, by contrast, believe that evaluations can be designed in partnership with funders, rendering the task much less cumbersome and allowing for a greater orientation toward improvement, understanding and utilization. Two guides designed for grantmakers detail this vision.

These organizations advocate evaluation focused on learning rather than on demonstrating impact. They further explain that this type of evaluation is best developed in partnership with funders, in a collaborative rather than competitive manner.

TIESS agrees with this vision, while recognizing that each organization’s situation is different and hence calls for a different strategy.

We point out that although the funder‒fundee dynamic is more typical of the non-profit sector, the notions of demonstrating and learning strategies can also be quite useful in understanding the issue of measuring the impact of cooperatives and social enterprises. Indeed, even if these organizations are more likely to be financially autonomous, they may nonetheless seek to differentiate themselves by addressing an external public of potential investors and consumers rather than philanthropic or government funders.


Learn More

Consultations conducted by TIESS have revealed a wide range of views on these two strategies and on the links between evaluation, impact measurement and accountability.

Some believe that impact measurement seeking to convince a funder of the effectiveness and legitimacy of an organization’s action (demonstration strategy) does not have the same level of sincerity as an evaluation that focuses on the organization’s own learning needs. Proponents of this view therefore call for the two exercises to be conducted separately, to avoid what Nicholls (2015) calls Campbell’s Law:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor. (Campbell, 1976)

Others, by contrast, believe that an accountability measurement that was previously negotiated between the organization and its funder makes it possible to clearly identify the objectives pursued and the expected outcomes, document the organization’s progress, and readjust the intervention accordingly in a useful and relevant manner. In this ideal scenario, promoted by the Avise and ONN guides, there are no contradictions between the learning needs of the organization and the accountability needs of the funder. Impact measurement is therefore used to satisfy both the organization’s and the funder’s objectives.

Reconciling these two seemingly opposing positions requires considering the context in which the organization operates, and particularly the power dynamics in its relationship with the funder. A table from an article by Nguyen et al. (2015, p. 228) does just that. It illustrates two ideal-types: the symmetrical funding relationship and the asymmetrical funding relationship.

Thus, in a context where the power relationship is as symmetrical as possible, based on reciprocity, trust, collaboration and shared goals, as recommended in the Avise and ONN guides, it is conceivable to negotiate measurement systems geared toward accountability that would also serve to inform a learning-focused evaluation.

In cases where this is not possible, it would be better to keep the two exercises separate.



References

Avise. (2018). Mode d’emploi – Évaluer l’impact social – Un éclairage pour ceux qui financent une activité d’utilité sociale.

Office of the Auditor General of Canada, Government of Canada. (2002, December 1). Chapitre 9 – La modernisation de la reddition de comptes dans le secteur public. Retrieved February 9, 2018, from http://www.oag-bvg.gc.ca/internet/Francais/parl_oag_200212_09_f_12403.html

Campbell, D. (1976). Assessing the Impact of Planned Social Change (No. 8; Occasional Paper Series). The Public Affairs Center, Dartmouth College.

Dahlab, P., Mounier, B., & Kemem, K. (2017). Présentation de l’étude « Expérience de l’évaluation d’impact social ». Paris: Avise.

Nguyen, L., Szkudlarek, B., & Seymour, R. G. (2015). Social impact measurement in social enterprises: An interdependence perspective. Canadian Journal of Administrative Sciences, 32(4), 224‒237. doi:10.1002/CJAS.1359

Nicholls, A. (2015). Synthetic Grid: A critical framework to inform the development of social innovation metrics (CRESSI Working Papers). Creating Economic Space for Social Innovation (CRESSI).

Taylor, A., & Liadsky, B. (2016). Evaluation Literature Review. Ontario Nonprofit Network.

Taylor, A., & Liadsky, B. (2017). Making Evaluation Work in the Nonprofit Sector: A Call for Systemic Change. Toronto: Ontario Nonprofit Network (ONN).

Taylor, A., & Liadsky, B. (2018). Collaborative Evaluation Approaches – A How-to Guide for Grantmakers. Ontario Nonprofit Network.

