17 February 2023

7. Using and communicating evaluation results

 

Summary: When should the question of using and communicating evaluation results be asked? In many cases, it is after the analysis of results has been completed that more specific thought is given to potential audiences and uses. To some extent, the results determine how they will be used. In this section, we will see the benefits of reversing this logic: the intended use should guide the entire evaluation process from the start. This approach offers the best guarantee of optimal use of the results. More importantly, it assumes that it is the use that determines the value of the evaluation process.

Utilization-focused evaluation

Introduced some 30 years ago by Michael Quinn Patton, utilization-focused evaluation is based on the following principle: the value of an evaluation is determined by how it is used.

While the idea may have been controversial at first, it quickly gained many supporters. A survey was conducted in 1996 among 564 evaluators and 68 practitioners, all members of professional evaluation associations in Canada and the United States. Participants were asked to indicate whether they agreed or disagreed with a list of statements. The statement that received the greatest consensus was: “Evaluators should make recommendations based on the study”. The second highest consensus (71%) was generated by the statement, “The primary function of evaluation is to maximize the expected uses of the evaluation data by the intended users” (Patton & LaBossière, 2012).

According to this approach, an evaluation should be designed and conducted with its future use in mind, from start to finish. This point is important. Often, the question of use and communication arises only after the analysis of results has been completed; only then is thought given to potential audiences and uses. This can severely limit the scope of the results. Some of the most common criticisms of evaluation reports include:

  • the report arrives late, after decisions have already been made;
  • the report is too long, and no one will read it;
  • the questions asked are not the right ones;
  • the report does not tell us what we want to know;
  • jargon makes the report boring and difficult to understand (Patton & LaBossière, 2012, p. 146).

To avoid these pitfalls, it is important to understand what those requesting an evaluation want to know and how they intend to use the results. Identifying the actual users in advance keeps the focus on the intended use of the results.

Indeed, every evaluation has multiple possible uses, which implies a diversity of potential users. In this regard, it is generally recommended to identify the “audiences” of an evaluation. However, an audience is a rather vague and anonymous entity: one can only assume that it is interested in, and committed to, making use of the evaluation’s results.

It is therefore important, from the outset, to move beyond the general and vague to focus on specific and concrete targets. The intended users of the evaluation are those individuals or groups who are directly affected by an evaluation. More specifically, they are the people who will be able to actively contribute to the evaluation and who intend to use the results for clearly defined purposes.

It should be kept in mind that, in reality, it is individuals, not organizations, who use the results of evaluations. For this reason, it is not enough to target organizations or categories of individuals broadly as recipients of the evaluation report. It is important to reach out to those evaluation stakeholders who are likely to have a personal interest in the results.

It is important to understand that this approach does not prescribe the type of evaluation, model, method, theory, or even the kind of use that is desired. Its distinguishing feature is simply that it aims to ensure optimal use of the results.

To do this, you need to ask the right questions, know what the real information needs are, match the evaluation results to the decision-making moments, and work closely with the people who want to use them.

Some examples of questions to ask are:

  • Why are we evaluating?
  • What will the evaluation be used for?
  • Do the evaluation questions address what we really want to know?
  • Will the process give us information we can use?
  • How will the results be used in practice?
  • Do we want to be more effective in our actions? Better perceived by the population?
  • Do we want to reach a larger clientele?
  • What aspects of the intervention do we want to improve?

To learn more about the principles, steps and challenges of utilization-focused evaluation:

Some key principles of utilization-focused evaluation

  • The use of results is an ongoing concern from the beginning of the evaluation (not just at the end).
  • Evaluations are designed and conducted to meet the interests and information needs of specific individuals (not vague recipients).
  • The intended users’ strong commitment to using the results guides the entire evaluation process.
  • Information is most powerful in the hands of people who want to use it and know how to do so: the right information must be given to the right people.
  • To be useful, information must be credible, relevant, and presented in a form that is accessible and understandable to users.
  • Use should not be confused with dissemination of results. Dissemination is one way to promote use.

Steps of a utilization-focused evaluation

The five main steps of a utilization-focused evaluation are:

  1. Identify the intended users and determine their appropriate level of involvement;
  2. Clarify the objectives and intended uses of the evaluation with the intended users;
  3. Select the methods: usefulness is the most important criterion. There are many valid ways to proceed; in a utilization-focused evaluation, the best approach is the one that will be most useful and relevant to the people using the evaluation;
  4. Analyze and interpret the data: users are invited to participate actively in interpreting the data and developing recommendations;
  5. Use and disseminate the results.

Two issues specific to utilization-focused evaluation

  • Fear of loss of quality and rigor. Actively involving users in the choice of methods should not adversely affect the quality or rigor of the evaluation. The validity and robustness of the data collected may vary depending on the situation. The aim is not necessarily to achieve an absolute standard of scientific or methodological quality. What is important is to ensure that the methods adopted meet the validity needs of the situation. The appropriate level of validity must be determined, taking into account what the users intend to do with the data collected.
  • Turnover among intended users. Since this approach relies on their participation, the use of the results may suffer when many of these people leave the process along the way. If the process is already well underway, their replacements may not share the same expectations. Working with a diverse group of users reduces the impact of such changes.

It’s not just the results that are useful

So far, we have only referred to the usefulness of evaluation results. In a utilization-focused evaluation context, it is important to note that simply participating in the process can also be useful. The evaluation process provides many learning opportunities for those who are actively involved in it. In particular, it allows them to:

  • develop more analytical thinking;
  • ask themselves new questions about their intervention practices;
  • promote decision-making based on reliable data within their respective organizations.

By becoming familiar with the evaluation process, participants can develop skills that will outlast the results of the evaluation itself.


Using and Disseminating Results: Action or Communication Plan?

As mentioned in the section “Strategies for Demonstrating and Learning,” there are many motivations for evaluating the effects of our interventions. For example, we may want to make the results obtained more tangible or demonstrate the impact of our actions to different audiences: the users of our services, our partners or our funders. We may also question our practices, want to ensure that our actions are in line with the objectives pursued, and verify that the expected effects are indeed perceptible and not simply presumed.

Whether the evaluation is aimed at demonstration or at learning, the communication of results deserves special attention. Depending on the intended use, the timing and manner of communication may vary considerably.

Internal dissemination aims first to draw out the lessons learned and to generate actions that improve the interventions. In this case, once data collection is over and the analysis is underway, it is important to share preliminary results with the stakeholders concerned and to discuss them.

Initial results usually take the form of data that must be interpreted to reveal their broader significance. For example, why do we see the expected changes in one case but not in another? While the data collected provide some answers to the questions posed, in truly useful evaluations they will also raise new questions. And it is these questions, arising from the data, that can fuel very stimulating exchanges with the intended users. The better the quality of the reporting and discussion of the data collected, the better the chances that the evaluation results will be used effectively.

It is a circular movement, from consultation to investigation and back again:

Source: Bonbright, 2012, p. 9

In addition, preliminary discussions allow the views of stakeholders to be incorporated into the final report. This increases the interest of the intended users and strengthens the validation and acceptance of the evaluation results, which in turn promotes the implementation of the recommendations and their integration into the organizations’ action plans.

External dissemination has other purposes. Often, the main goal is to highlight the scope of the organization’s actions and the successes that set it apart. In this case, dissemination strategies will be based on well-targeted communications planning. The essential questions to ask are:

  • What do I want to communicate?
  • To whom do I want to communicate?
  • For what purpose?
  • In what way?

The answers to the first three questions determine the answer to the last one. Convincing a funder to increase its support will not be the same as convincing a potential user of the benefits of a service offered.

Since we are talking about disseminating evaluation results, we might be tempted to answer the first question as follows: I want to demonstrate the impact of my organization’s actions on such and such an issue related to its mission. One way of doing this might be to highlight the importance of the problem while illustrating, with figures, how the organization contributes significantly to mitigating its most harmful effects.

Thoroughness is essential, of course, but it is not enough. For a message to have a lasting impact, it must strike several chords: it must be convincing, but it must also capture the imagination and stir emotions, especially when it comes to highlighting social or humanitarian interventions.

There are a variety of techniques, methods or strategies for communicating effectively, depending on the different audiences you are addressing. Here are a few resources that may be helpful in this regard.

Simple Tips for Communicating Your Impact

This tool created by the Ontario Nonprofit Network (ONN) presents six simple tips for communicating your impact effectively. These are fairly general tips for NPOs: 6 Simple Tips for Communicating About Impact (2015).

Why and How to Share Evaluation Results

The agirtôt.org website, linked to the Lucie and André Chagnon Foundation’s Avenir d’Enfants program, clearly addresses the issue of sharing and using evaluation results.

Creative ways to share evaluation results

Canadian consultant Kylie Hutchinson published A Short Primer on Innovative Evaluation Reporting in 2017. This publication provides format ideas for presenting evaluation results and facilitating meetings on the topic in a fun way.

Data Visualization

This 2018 article by Ben Losman on the TechSoup Canada website discusses how to present data visually so that it tells a story with supporting illustrations: Visualizing Impact: A Picture Is Worth a Thousand Data Sets

Storytelling

Storytelling is very popular, especially in the corporate marketing world. It is a particularly effective communication tool to influence opinion leaders and decision makers. Stories built around the experiences of people who benefit from an organization’s services, for example, can have considerable impact in engaging a broad audience.

There are many sources of information on this topic; this document provides a good summary.

As a complement, the Social Impact Story Map integrates the main principles of storytelling and proposes a four-step process for telling a social impact story.


References

Patton, M. Q., & LaBossière, F. (2012). L’évaluation axée sur l’utilisation. In C. Dagenais & V. Ridde (Eds.), Approches et pratiques en évaluation de programmes (pp. 145-160). Presses de l’Université de Montréal. DOI: 10.4000/books.pum.5983. https://books.openedition.org/pum/5983?lang=fr

Bonbright, D. (2012, November). Utilisation des résultats d’évaluation d’impact. Notes sur l’évaluation d’impact, No. 4. InterAction. https://www.interaction.org/wp-content/uploads/2019/04/2-Use-of-Impact-Evaluation-Results-FRENCH.pdf

