Evaluation is not an end in itself. Whether it focuses on the impact of social purpose organizations (SPOs) – social economy, community action, social enterprises – or other aspects of their activities, the main purpose must be, as we have argued here, to learn and draw lessons that will then be transformed into a series of actions aimed at improving activities and outcomes. In this sense, understanding and conducting an evaluation is a way to get closer to the ideal of the learning organization.
To guide you through this process, this section provides some references to determine whether your current practices are conducive to developing a learning culture in your organization. In particular, the questionnaire developed by Taylor Newberry Consulting stands as a concrete starting point for your reflections. The literature available on the concept of learning organization and strategies for developing such a culture, however, goes far beyond the themes of evaluation and impact measurement. In order to stay focused on the subject, this section then introduces content and tools specific to the development of an evaluative culture.
The successful and sustainable integration of evaluation into your organizational processes is not just a matter of culture. It depends on a host of factors involving both individual and organizational capabilities that enable evaluation findings to be acted upon. Thus, people who have experience with evaluation, especially those who provide coaching in this area, should appreciate the Organizational Evaluation Capacity Assessment Instrument developed by Buetti et al. (2019), presented at the end of the section.
Learning organization and evaluative culture: what are we talking about?
As you integrate evaluation into your organization’s activities, you move towards the ideal of the learning organization. A learning organization is an “organization where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together.” (Senge, 1990).
To systematically integrate evaluative processes into one’s daily activities is to develop an evaluative culture. In its simplest form, evaluative culture refers to the habit of seeking evidence about the results the intervention is bringing about with the purpose of deliberately learning from this information (Mayne, 2017, p. 8).
The development of such a culture is not based solely on internal processes within the organization. It is also dependent on the context and expectations of stakeholders outside the organization, beginning with funders, in the case of organizations where a large portion of revenues are tied to a foundation or public administration. To ensure that evaluation is not limited to an accountability tool but effectively used for collective learning, organizations must work closely with their funders to collaboratively build a coherent system of evaluation and foster a culture of learning (Taylor and Liadsky, 2017).
It is with this in mind that the Ontario Nonprofit Network (ONN) conducted an evaluation project from 2016 to 2018. Although this project was aimed at Ontario’s nonprofit sector, its findings are full of useful lessons for Quebec’s social economy.
Highlights from Taylor Newberry Consulting/ONN’s Literature Review (2018):
- For evaluation to be fruitful and mutually beneficial, SPOs and their funders must work together to develop a learning organizational culture.
- Organizational culture is sometimes characterized as “a shared and learned world of experiences, meanings, values, and understandings that inform people and that are expressed, reproduced, and communicated partly in symbolic form, and also partly in functional and practical actions” (Alvesson, 2010).
- A learning culture exists “when an organization uses reflection, feedback, and sharing of knowledge as part of its day-to-day operations. It involves continual learning from members’ experiences and applying that learning to improve. Learning cultures take organizations beyond an emphasis on program-focused outcomes to more systemic and organization wide focus on sustainability and effectiveness. It is about moving from data to information to knowledge” (Center for Nonprofit Excellence, 2016).
A number of good practices observed in learning organizations:
- Learning is a habit. Organizations build a focus on learning into their routine practices. They consciously invite and reward learning.
- Learning goals are clear. Organizations know what they want to learn and why it is important.
- Deep questions get asked. Organizations ask questions about their values and assumptions, not just questions about program tactics.
- The organization is ready to act on what it learns. Organizations are prepared for the implications of what they learn. They are willing and able to alter their practices.
- Learning is inclusive and engages partners. Organizations engage their external partners in the learning process.
- Leadership drives organizational learning. Executive directors, CEOs, presidents and senior managers play an important role in leading by example, creating space for learning and encouraging it in the organization as a whole. (Taylor Newberry Consulting, 2018, p. 14).
Taylor Newberry Consulting (Taylor and Liadsky, 2019) developed a practical self-assessment tool, built on their literature review, that addresses the following themes:
- organizational habits and behaviours
- leadership and strategic directions
- organizational capacity and resources
To use the self-assessment tool, click here.
In 2019, the Montreal-based Centre for Community Organizations (COCo) and the United Way of Greater Montreal teamed up to launch a laboratory on the learning organization: the LabOA project.
This served as an opportunity to conduct a literature review that concluded that “the learning organization integrates the four dimensions presented in this research: an organizational structure that fosters learning; a climate conducive to learning; an integrated vision of learning, innovation and work; and an ability to navigate complexity” (Angers-Trottier, 2018, p. 32).
A complete coaching strategy on this theme is underway. For more information, visit COCo’s website.
Tools to use in developing an evaluative culture
For evaluation to be useful, your organization must seek the best information possible and be open to using it for learning and improvement. If this recommendation seems obvious, all the better. But being aware of the obstacles that might arise is also important. Here are some examples inspired by Mayne (2017, p. 8) and this article from Évalpop.
- If your team is overly concerned about risks and fears of mistakes, they may be discouraged from raising certain issues.
- Your team may argue that evaluation takes time and resources that might otherwise be better spent on day-to-day activities.
- Your team and/or administrators may not have the skills or experience to implement certain evaluative practices, a problem exacerbated by high staff turnover.
- Evaluation may be perceived as a tool of control imposed by funders, sometimes wrongly, but sometimes also rightly, when it is framed purely within an accountability logic that does not meet the needs of the organization.
- Your team or management could agree with evaluation “in theory” without ever becoming seriously involved or spending the time necessary to implement the recommendations, suggesting that they are not entirely convinced of the importance of this approach.
The development of a learning-oriented evaluative culture does not happen by magic; it has to come about through deliberate efforts. Here are some tools to help you reach this goal.
The work of John Mayne
In a summary note, Mayne (2008) presents information that can inspire SPOs in their approach.
The characteristics of an evaluative culture
An organization with a strong evaluative culture:
- engages in self-reflection and self-examination:
- deliberately seeks evidence on what it is achieving, such as through monitoring and evaluation,
- uses results information to challenge and support what it is doing, and
- values candor, challenge and genuine dialogue;
- engages in evidence-based learning:
- makes time to learn in a structured fashion,
- learns from mistakes and weak performance, and
- encourages knowledge sharing;
- encourages experimentation and change:
- supports deliberate risk taking, and
- seeks out new ways of doing business.
Measures to foster an evaluative culture
- Demonstrated senior management leadership and commitment
- Regular informed demand for results information
- Building capacity for results measurement and results management
- Establishing and communicating a clear role and responsibilities for results management
Organizational support structures
- Supportive organizational incentives
- Supportive organizational systems, practices and procedures
- An outcome-oriented and supportive accountability regime
- Learning-focused evaluation and monitoring
A learning focus
- Building in learning
- Tolerating and learning from mistakes
The above information is drawn from Mayne, J. (2008). Building an evaluative culture for effective evaluation and results management. Click here to download the note.
This briefing note is intended for government managers, and some of its language might not fit in with the context of the nonprofit sector (see, for instance, the discussion about the level of evidence expected in the Section Proving Impact: Causality, attribution and contribution). However, we believe the information it contains remains useful for SPOs.
Participatory evaluation, by and for organizations
The best way for an organization to develop an evaluative culture is simply to get started! This is the consensus that has emerged from the experience of the experts consulted while drafting this section. Having concrete and fruitful experiences with evaluation, where participation is key and external consultants play a supporting role rather than posture as independent experts, will allow your administrators, managers and users to experiment with evaluative practices and to more fully appreciate their relevance. With its “by and for” approach, the Centre de formation populaire (CFP) fosters such an environment of growth and learning.
For example, a poster, created as part of an Évalpop workshop, discusses several strategies for fostering an atmosphere of ongoing evaluation. Maintaining practices that encourage a focus on learning from results is essential for a living evaluative culture.
Evaluation strategies for community organizations
(1) Prepare a plan
- Draw up an annual evaluation calendar with key dates, and integrate it into your action plan
- Assign the task to a team member
- Proceed by trial and error
- Take it one step at a time
- Have an evaluation and training budget
(2) Prepare tools
- Practice designing a logic model, beginning with a small event or activity
- Streamline, simplify and prioritize your data collection tools
- Use modern data gathering tools, such as online surveys
(3) Conduct the evaluation
- Set up an evaluation committee
- Explain the process to staff in lay terms
- Dispel misconceptions: you’re not evaluating their work
- Make a clear distinction between reporting and evaluation
- Get input from your team’s critical thinkers
- Include the people addressed by your organization’s activities in the process
- Use a popular education approach (lay language, awareness-raising, questioning, etc.)
- Make the evaluation fun
- Make the results widely available in order to publicize your work
- Discuss the process and the resulting recommendations
Source: Centre de formation populaire 2020
This is also what guides Dynamo’s coaching approach in the context of ÉvalPIC. Five years of experience have shown that certain conditions favor the development of a participatory evaluation culture within neighborhoods, including:

- a posture of openness and experimentation
- clarifying the reasons for evaluating
- simplifying and adapting the language and evaluation processes to local dynamics
- implementing iterative and developmental approaches
- favoring the clarity of evaluation questions over their quantity
- creative, inspiring and unifying moments of collective sharing
This principle of empowering organizations through a “by and for” approach is not based solely on the experience of a few Quebec evaluators (although the concept of participatory and negotiated evaluation is indeed an important part of Quebec’s tradition of evaluation in the community sector); it is also found in the scientific literature under the terms of participatory, utilization-focused and developmental evaluation, the latter two developed by Michael Q. Patton (Patton, 2008; Patton and Labossière, 2009). For further details, you can explore this checklist, which provides an overview of the typical utilization-focused evaluation process (Patton, 2013).
Other useful tools
Canadian consultant Kylie Hutchinson (Community Solutions) has created an infographic that brings together, on a single page, 30 ideas to implement within your organization to strengthen evaluative culture.
The Measuring Up tool, developed by New Philanthropy Capital in the United Kingdom, may be of interest to organizations wishing to establish and maintain “good impact practices” (a concept that relates to evaluation and its integration into a more comprehensive cycle of planning, action and improvement).
Diagnosing organizational capacity in evaluation
“Organizational evaluation capacities include both individual capacities (i.e., the evaluator’s technical and interpersonal skills as well as the knowledge of managers and other members of the organization) and the organizational systems, structures and tools used to produce and use evaluations” (Bourgeois and Valiquette L’Heureux, 2018, p. 131).
Several authors in the field of evaluation have shown that the ability of organizations to conduct and use evaluation on an ongoing basis depends on many characteristics (skills and knowledge, budget, work process, general willingness to use results, etc.). Professor Isabelle Bourgeois’ text (2017) provides an overview of the main models related to evaluation capacities.
In Québec, Buetti et al. (2019), using the model developed by Bourgeois and Cousins (2013), organized the characteristics and components that influence the ability to conduct and use evaluation in community-based organizations. Buetti et al. (2019) work with the following conceptual framework:
Source: Buetti et al., 2020
The conceptual framework developed by Buetti, Bourgeois and Savard (2019) highlights 6 components and 16 sub-components that influence the ability to “do” and “use” evaluation in community-based organizations. This conceptual framework, operationalized as a grid of 33 statements, provides an analytical tool that can serve to diagnose whether an organization has the capacities required for such an evaluation process to be carried out successfully.
The organizational evaluation capacity assessment instrument
This instrument for researchers and practitioners, designed by Buetti, Bourgeois and Savard, was developed as part of a doctoral research project that aimed to identify the characteristics that influence evaluation capacities in Quebec community-based organizations. The grid measures the capabilities of a community-based organization against the six components of the conceptual framework discussed above. However, as the authors point out, it “does not claim to be exhaustive or normative; rather, it is intended to report examples of practices that may demonstrate strong evaluation capacity in community-based organizations. As a result, the statements in the analysis grid can be modified to better reflect the specific realities of an organization or network” (Buetti et al., 2019, p. 40).
Users are encouraged to complete the grid during a workshop, in which a group of participants must reach a consensus on each statement using a four-point Likert scale (1 – totally disagree, 2 – disagree, 3 – agree, 4 – totally agree). This approach is often as valuable and useful as the results of the grid itself, as it encourages conversation and discussion among group members.
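To illustrate how such a consensus grid might be tallied, here is a minimal sketch. The component names, statement groupings and the 2.5 threshold are illustrative assumptions, not the instrument’s actual wording or scoring rules; the idea is simply to average the consensus ratings per component and flag the weakest areas.

```python
# Hypothetical sketch of aggregating consensus Likert ratings (1-4).
# Component names and the threshold are illustrative assumptions,
# not taken from the Buetti et al. instrument itself.

def component_scores(ratings):
    """Average the consensus rating of the statements within each component."""
    return {component: sum(values) / len(values)
            for component, values in ratings.items()}

def priorities(scores, threshold=2.5):
    """Components scoring below the threshold suggest capacity-building needs."""
    return sorted(c for c, s in scores.items() if s < threshold)

# Example consensus ratings reached during a workshop (illustrative only)
ratings = {
    "Human resources": [3, 4, 3],
    "Organizational resources": [2, 2, 3],
    "Evaluation planning": [1, 2, 2],
}

scores = component_scores(ratings)
print(priorities(scores))  # lists the components below the 2.5 threshold
```

A summary like this is only a starting point for discussion: as noted above, the conversation that produces the consensus ratings is often as valuable as the numbers themselves.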
The resulting diagnosis then allows for a better targeting of capacity-building interventions. These activities take many forms (workshops, consulting sessions, training, etc.), and it is up to the various consultants and evaluation coaches to define which interventions are most appropriate. Overall, the literature review conducted by Norton et al. (2016, p. 7) tells us that the success factors of an evaluation capacity-building intervention are:
- An appropriate strategy based on a diagnosis of needs;
- The organization’s sincere desire to strengthen its evaluation capacity;
- Practical teaching, based on experience and the application of practical elements;
- Some form of technical support that directly comes from within the workplace.
For further reading:
Buetti, D., Bourgeois, I., and Savard, S. (2019). Modélisation des capacités organisationnelles en évaluation dans le secteur communautaire et implications pour le contexte québécois.
Buetti, D., Bourgeois, I., and Savard, S. (2019). L’étude des capacités en évaluation des organismes communautaires du Québec : proposition d’un cadre conceptuel et d’une grille d’analyse organisationnelle.
 Original quote: « Les capacités organisationnelles en évaluation comprennent à la fois les capacités individuelles (c’est-à-dire les compétences techniques et interpersonnelles de l’évaluateur ainsi que les connaissances des gestionnaires et autres membres de l’organisation en matière d’évaluation) et les systèmes, structures et outils organisationnels qui servent à la production et à l’utilisation des évaluations » (Bourgeois and Valiquette L’Heureux, 2018, p. 131).
 Original quote: « [La grille] ne prétend pas être exhaustive ni normative ; elle vise plutôt à rapporter des exemples de pratiques qui peuvent témoigner de fortes capacités en évaluation chez les organismes communautaires. De ce fait, les énoncés de la grille d’analyse peuvent être modifiés pour mieux refléter les réalités spécifiques d’un organisme ou d’un regroupement communautaire » (Buetti et al., 2019, p. 40).
Alvesson, M. (2010). Organizational Culture: Meaning, Discourse, and Identity. In N. M. Ashkanasy, C. P. M. Wilderom & M. F. Peterson (Eds.), The Handbook of Organizational Culture and Climate. Sage Publications.
Angers-Trottier, P. (2018). The LabOA Project—Research Report. Centre for Community Organizations (COCo).
Bourgeois, I. (2017). Évaluation : modèles et outils pour une utilisation plus stratégique.
Bourgeois, I. & Cousins, J. B. (2013). Understanding Dimensions of Organizational Evaluation Capacity. American Journal of Evaluation, 34(3), 299‑319. https://doi.org/10.1177/1098214013477235
Bourgeois, I. & Valiquette L’Heureux, A. (2018). Le renforcement des capacités organisationnelles en évaluation : une démarche axée sur les parties prenantes. In M. Hurteau, I. Bourgeois & S. Houle, L’évaluation de programme axée sur la rencontre des acteurs. PUQ.
Buetti, D., Bourgeois, I. & Savard, S. (2019). L’étude des capacités en évaluation des organismes communautaires du Québec : Proposition d’un cadre conceptuel et d’une grille d’analyse organisationnelle. Intervention, 150, 25‑46.
Buetti, D., Bourgeois, I., & Savard, S. (2020). “Predictors of Higher Use of Evaluation for Collective Learning and Action Among Community-Based Organizations: Implications for Building Evaluation Capacity”. Unpublished
Center for Nonprofit Excellence. (2016, May 11). What’s a Learning Culture & Why Does It Matter to Your Nonprofit? [Text]. Center for Nonprofit Excellence in Central New Mexico. https://www.centerfornonprofitexcellence.org/news/whats-learning-culture-why-does-it-matter-your-nonprofit/2016-5-11
Centre de formation populaire. (2020, March 26). L’évaluation dans nos organismes après ÉvalPop : Comment maintenir nos pratiques réflexives ? ÉvalPop. https://evalpop.com/2020/03/26/levaluation-dans-nos-organismes-apres-evalpop-comment-maintenir-nos-pratiques-reflexives/
Mayne, J. (2008). Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative.
Mayne, J. (2017). Building evaluative culture in community services: Caring for evidence. Evaluation and Program Planning. https://doi.org/10.1016/j.evalprogplan.2017.05.011
Norton, S., Milat, A., Edwards, B. & Giffin, M. (2016). Narrative review of strategies by organizations for building evaluation capacity. Evaluation and Program Planning, 58, 1‑19. https://doi.org/10.1016/j.evalprogplan.2016.04.004
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage Publications.
Patton, M. Q. (2013). Utilization-focused evaluation (U-FE) checklist. Evaluation Checklists Project. http://www.managingforimpact.org/sites/default/files/ufe_checklist_2013.pdf
Patton, M. Q. & Labossière, J. (2009). L’évaluation axée sur l’utilisation. In V. Ridde & C. Dagenais (Eds.), Approches et pratiques en évaluation de programme (p. 159‑175). Les Presses de l’Université de Montréal.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Currency.
Taylor, A. & Liadsky, B. (2017). Making Evaluation Work in the Nonprofit Sector—A Call for Systemic Change. Ontario Nonprofit Network (ONN).
Taylor, A. & Liadsky, B. (2019). Organizational Learning Self-Assessment Tool: 18 questions to self-assess your organization’s learning culture and identify steps for action.
Taylor Newberry Consulting. (2018). Achieving Greater Impact by Starting with Learning: How grantmakers can enable learning relationships at the grant application stage.
For further information