Evaluation and impact measurement pose several practical challenges that anyone who engages in this work is likely to face, so they are important to understand.
For instance, Agence Phare, a consulting firm, identified a number of obstacles to measuring social impact (2017, p. 46). While these include a lack of knowledge and skills, as well as some resistance on the part of staff, most come down to a shortage of two resources, money and time: money to fund the expertise and conduct the work, and time to do things right and to involve all stakeholders.
Costs: the main barrier to evaluation and impact measurement
The costs of impact evaluation (the time spent by management, employees and stakeholders, but also the cost of external expertise) are a significant barrier to its practice. According to KPMG France (2017, pp. 28, 34), this is the main barrier to entry cited by respondents who do not measure social impact (56%) or who have encountered this difficulty (54%).
This is also the conclusion drawn by Seivwright et al. (2016, pp. 3, 6) in the case of the Australian nonprofit sector, where 80% of the organizations surveyed believe that lack of funding is a barrier to measuring outcomes, far ahead of other potential problems such as lack of commitment from employees and users or the lack of standardized tools.
According to BetterEvaluation.org, the costs of an evaluation (impact measurement or other) generally correspond to 5 to 20% of total project costs. The Social Innovation Fund (SIF), whose objectives included promoting the development of evidence-based social interventions through impact measurement, considers that approximately 15% of the total budget should be allocated to evaluation alone. This percentage rises to 25% in the case of an experimental study (RCT) producing strong evidence (Zandniapour and Vicinanza, 2013). However, SIF’s benchmarks may be somewhat high for Quebec and Canada, where donors have generally not dedicated more than 10% of a project’s total costs to evaluation. Further, financial actors in impact investing, who offer loans rather than grants, usually do not cover the evaluation costs of the organizations they fund.
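To make these benchmarks concrete, the sketch below computes the evaluation envelope implied by each of the percentages cited above for a hypothetical project budget. The dollar figure and the helper function are illustrative assumptions, not part of any of the cited sources.

```python
# Illustrative evaluation budget envelopes, using the benchmarks cited above
# (BetterEvaluation: 5-20%; SIF: ~15%, or 25% for an RCT; typical Quebec/Canada
# donor ceiling: ~10%). The project total is a made-up example figure.

def evaluation_budget(total_project_cost: float, share: float) -> float:
    """Return the evaluation envelope for a given share of total project costs."""
    return total_project_cost * share

total = 200_000  # hypothetical total project budget, in dollars

benchmarks = {
    "BetterEvaluation low (5%)": 0.05,
    "BetterEvaluation high (20%)": 0.20,
    "SIF standard (15%)": 0.15,
    "SIF RCT (25%)": 0.25,
    "Typical Quebec/Canada ceiling (10%)": 0.10,
}

for label, share in benchmarks.items():
    print(f"{label}: ${evaluation_budget(total, share):,.0f}")
```

On a $200,000 project, the same exercise can therefore cost anywhere from $10,000 to $50,000 depending on which benchmark (and which level of evidence) applies, which is why the funding question below matters.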
Finally, let us not neglect the time required for data collection, a task that relies on the participation not only of the people responsible for the evaluation but also of all of an organization’s stakeholders. It also involves time for interviews or discussions when analyzing the results.
How can we reduce costs without sacrificing the rigour of the exercise?
Sometimes, the budget allocated for an evaluation is lower than anticipated or cut down along the way. In those cases, evaluators must improvise in order to nonetheless deliver a credible evaluation report (Bamberger & Rugh, 2009, p. 170).
Data collection techniques differ in cost. For example, a series of interviews or a focus group will cost less than setting up an RCT (randomized controlled trial) type of study. Likewise, analyzing existing data will be much less expensive than setting up new data collection systems.
There are also methodological reasons for preferring certain data collection and processing strategies over others. A quantitative approach does not necessarily generate the same type of information as a more qualitative approach. A written questionnaire does not necessarily generate the same level of detail as an individual interview.
Most manuals on qualitative and quantitative methods include information on different data collection tools such as surveys, enumeration, case studies, focus groups, observation and analysis of administrative data. Two relevant online resources are:
- Research Methods Knowledge Base
- What works (a site dedicated to NPOs looking to measure their impact)
Prioritize the data to be collected
Beyond the methodological aspects, we recommend prioritizing early on in the process what you want to evaluate. This will help significantly in reducing costs.
For most social economy organizations, the collection and reconstitution of relevant data is the costliest item of an impact evaluation. Yet gathering these data in an orderly and intelligent way, integrating them into your management and practices as you go along (as a learning organization would do), will go a long way toward helping you to reduce costs.
The importance of prioritization cannot be overemphasized, as it is impossible to assess everything. Therefore, if you are interested in evaluating the impact of your activities, you should develop a logic model or theory of change that illustrates your action and the underlying assumptions. This exercise should allow you to target the information you already know as well as the information you need to know. This will allow you, by considering the data sources at your disposal, to choose the aspects to be evaluated first. In other words, the amount of data to be collected and analyzed must be limited. In corporate jargon, these few prioritized aspects are called key performance indicators (KPIs).
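The prioritization step described above can be sketched very simply: score each candidate indicator by how central it is to your theory of change and how costly its data are to collect, then keep only the top few as KPIs. All the indicators, weights, and scores below are invented for illustration; they are not taken from the sources cited in this document.

```python
# Hypothetical sketch of indicator prioritization. Each candidate indicator
# gets an importance rating and a data-collection cost rating (both 1-5,
# invented for this example); we keep only the best-scoring few as KPIs.

candidates = [
    # (indicator, importance 1-5, collection cost 1-5)
    ("Participants completing the program", 5, 1),
    ("Change in participants' employment status", 5, 3),
    ("Partner organizations' satisfaction", 3, 2),
    ("Long-term community-level effects", 4, 5),
]

# Simple score: favour high importance and low collection cost.
scored = sorted(candidates, key=lambda c: c[1] - c[2], reverse=True)

kpis = scored[:2]  # limit the amount of data to collect and analyze
for name, importance, cost in kpis:
    print(f"KPI: {name} (importance={importance}, cost={cost})")
```

A weighted score like this is only one possible design choice; the essential point from the text is the act of limiting yourself to a small, deliberate set of indicators rather than trying to measure everything.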
Who has to pay for the measurement?
Many social economy organizations obtain funding from major funders, such as foundations and public administrations, in the form of donations or in exchange for services rendered. These funders may then be in a position to ask the social economy enterprise for some form of accountability report, evaluation or impact measurement. Yet who bears the costs incurred? Should funders dedicate a certain amount of their funding specifically to evaluation or impact measurement? Should they themselves hire the people who will conduct or coordinate this evaluation? If so, are they entitled to formulate specific requirements regarding the evaluation process, such as whom, what and how to assess?
The Social Innovation Fund (SIF), in the United States, aims to finance organizations with a social mission in the fields of health, education and poverty. The SIF is only intended to fund interventions for which preliminary evidence of effectiveness (e.g., a survey of participants at the beginning and end of the intervention) already exists, and to support these organizations to generate higher levels of evidence of the effectiveness of their intervention (e.g., by doing RCTs in multiple settings). Knowing that these are very high evaluation requirements, the SIF has set up a centre of expertise, produced tools and allocated specific budgets to finance the required impact studies. The SIF is also involved in the design of the proposed assessment and judges whether or not it is sufficiently rigorous.
In the Magdalen Islands, a roundtable (table de concertation) on student perseverance wanted to encourage its member organizations to become more involved in evaluation activities. The islands’ Service du développement social therefore funded a group training course on evaluation, along with a bank of consultation hours with a professional evaluator for each organization.
The answers to these questions depend on several parameters specific to your situation, such as who initiated the request for evaluation and who has the required resources.
At TIESS, we believe that when funders require evaluations, they should provide a share of the budget to cover the costs. In other words, these costs must not eat into the budget originally allocated to the initiative.
To assist you in negotiating and co-constructing an evaluation that benefits both the enterprise and the funder, two guides are available:
The costs and benefits of measuring
In a context where everyone agrees that evaluation and impact measurement are important, but where few organizations set aside the funds required to make a serious commitment to them, it is worth asking: is it really that important? The truth is that an evaluation process has both costs and benefits. When organizations decide whether or not to embark on such a process, they implicitly weigh this trade-off. When your turn comes, make sure you do so in the most informed and educated way possible.
Agence Phare. (2017, March). L’expérience de l’évaluation d’impact social. Pratiques et représentations dans les structures d’utilité sociale.
Bamberger, M., & Rugh, J. (2009). Une stratégie pour composer avec les contraintes inhérentes à la pratique. In V. Ridde & C. Dagenais (Eds.), Approches et pratiques en évaluation de programme (pp. 159‒175). Montreal, QC: Les Presses de l’Université de Montréal.
BetterEvaluation. (2017). Determine and Secure Resources. http://betterevaluation.org/plan/manage_evaluation/determine_resources
KPMG. (2017). Baromètre de la mesure d’impact social. France: KPMG.
Seivwright, A., Flatau, P., Adams, S., & Stokes, C. (2016). The future of outcomes measurement in the community sector (Bankwest Foundation Social Impact Series No. 6). Sydney, Australia: Centre for Social Impact.