When to evaluate?
Most often, the prospect of conducting an evaluation is not considered until a project is nearing completion. However, there is much to be gained by thinking about the evaluation further upstream.
The most common time constraint is when the evaluator is not called in until the project is already well advanced and the evaluation has to be conducted within a much shorter period of time than the evaluator considers necessary—both in terms of a longitudinal perspective over the life of the project, the time allotted for conducting the end-of-project evaluation, or both. (Bamberger and Rugh, 2009, p. 170)
An evaluation that starts too late, such as at the end of an intervention, precludes the use of certain methodologies. One example is “a pretest-posttest evaluation design with baseline study that can be repeated after the project has been implemented” (Bamberger and Rugh, 2009, p. 170).
When the evaluation is conducted as part of a single, time-limited mandate near the end of a project, those responsible for it will need to find strategies to reconstruct the pre- and post-intervention states, for example, by relying on the records left by the project leaders or by asking participants to recall what the situation was like before the project began. Unfortunately, such methods introduce certain biases, which makes them weaker than other quasi-experimental designs (Bamberger and Rugh, 2009, p. 171).
Conversely, an evaluation that must be submitted as soon as an intervention ends can hardly capture its impact, the latter being understood as the sum of the lasting outcomes, whether expected or not, resulting from an intervention (Cekan, Zivetz and Rogers, 2017). Some long-term impacts become apparent only after several months, if not years.
Finally, insofar as we are interested in outcomes and impacts, an evaluation based on data collected at several points over time is preferable to one that takes a snapshot of a single moment.
Hence, the evaluation should, ideally, be considered as early as the planning phase of the project. Although it may seem more time-consuming at first, using a logic model or theory of change with clear and measurable objectives will make the evaluation much more relevant, for example, by forcing you and your partners to think carefully from the outset about the results you wish to achieve and how to achieve them.
The timing of the evaluation will depend on your objective. The following table offers guidance on this subject.
What to evaluate?
The questions of when and what to assess are inseparable. Therefore, even if the question is already partially addressed in the Why and for whom to evaluate? section, it is worth repeating that we must have realistic goals and expectations of what the evaluation can achieve.
For example, if you carry out workshops in primary schools to encourage young girls to pursue careers in science, it is only 10, 15 or 20 years later that you will know the ultimate impact of your intervention. And even if you could follow a cohort long enough to obtain such information, your intervention would be only one factor among others contributing to the observed results.
Hence, when evaluating outcomes and impacts, it is advisable to confine yourself to what could be called an area of accountability, understood as the outcomes over which you have influence and for which you can reasonably be held responsible.
Coming back to the example of elementary school students, you will never be able to demonstrate that a participant chose a career in science because of one of your workshops. However, you will be able to assess her perception of this field before and after the intervention and assume that a more positive perception, attributable to your intervention, will lead, on average, to more girls choosing this type of career.
In conclusion, focus on what you can demonstrate. Even if reducing a population’s poverty is your ultimate goal, we know that a small organization cannot do it alone. Find intermediate outcomes that you can influence. Document your contribution to achieving these outcomes. Explain why it is reasonable to expect that these intermediate effects will lead to poverty reduction. Rely, if possible, on scientific studies on the subject and provide examples to your audience.
Bamberger, M. and Rugh, J. (2009). Une stratégie pour composer avec les contraintes inhérentes à la pratique. In V. Ridde and C. Dagenais (Eds.), Approches et pratiques en évaluation de programme (p. 159‒175). Montreal, QC: Les Presses de l’Université de Montréal.
Cekan, J., Zivetz, L. and Rogers, P. (2017). Sustained and Emerging Impacts Evaluation (SEIE). Better Evaluation. http://www.betterevaluation.org/en/themes/SEIE