An evaluative approach is always situated, consciously or not, within a given paradigm. The latter, understood as a perspective on the nature of things and the way in which they can be known, necessarily affects the method one will follow to analyze their evolution. Although it is desirable to remain as objective as possible, each evaluation approach involves certain choices and trade-offs that should preferably be made in an informed manner. The questions “Who will do the evaluation?” and “What will be evaluated?” are particularly important. This section helps you to answer them.
The results of the evaluation, particularly when it is summative (see Glossary), have potentially very tangible consequences, such as the non-renewal of a project. For this reason, those who carry out the evaluation may be subject to certain social pressures. This includes the desire of the evaluation sponsor to see results that make him or her look good.
To guard against this, evaluators tend to describe their work as “objective”, with the evaluation describing reality as it is, using proven measurement tools. At TIESS, we do not believe that complete objectivity or neutrality exists, at least not in the social sciences.
A social reality is eminently relative and intersubjective: it is constructed collectively, through the interactions and perceptions of each individual. Grasping that reality requires taking account of the voices of different actors who have partial, sometimes contradictory, visions of the same situation. This reflection on ontology and epistemology is fairly well explained in Guba and Lincoln’s (1989) Fourth Generation Evaluation, which we have summarized in the box below.
Paradigmatic conflicts: Positivism and constructivism in evaluation
Thirty years ago, in 1989, Guba and Lincoln published a book proposing a “fourth generation of evaluation” based on a new paradigm: constructivism. This paradigm contrasts with a conventional view of evaluation, generally rooted in positivism, which presupposes that the evaluator observes an external reality, independent of her or his action.
The following excerpt from their book describes how a constructivist evaluation is based on three assumptions:
The basic ontological assumption of constructivism is relativism, that is, that human (semiotic) sense-making that organizes experience so as to render it into apparently comprehensible, understandable, and explainable form, is an act of construal and is independent of any foundational reality. Under relativism there can be no “objective” truth. This observation should not be taken as an “anything goes” position.
The basic epistemological assumption of constructivism is transactional subjectivism, that is, that assertions about “reality” and “truth” depend solely on the meaning sets (information) and degree of sophistication available to the individuals and audiences engaged in forming those assertions.
The basic methodological assumption of constructivism is hermeneutic-dialecticism, that is, a process by which constructions entertained by the several involved individuals and groups (stakeholders) are first uncovered and plumbed for meaning and then confronted, compared, and contrasted in encounter situations. The first of these processes is the hermeneutic; the second is the dialectic. Note that this methodological assumption is silent on the subject of methods and, in particular, on the subject of “quantitative” vs. “qualitative” methods. Both types of methods may be and often are appropriate in all forms of evaluative inquiries. (Guba and Lincoln, 2001, p. 1)
Some thinkers who adhere to this paradigm, such as Michael Scriven and even more so Michael Patton, have become true evaluation gurus. But although these authors are often cited in the literature on the evaluation of social initiatives, the two worldviews continue to coexist. This gives rise to certain tensions (Guba and Lincoln, 1989, pp. 84, 112‒113), which, while not specific to the measurement of social impact, structure most of the debates in this area, as elsewhere in the social sciences. The authors identify several tensions:
Accuracy vs. data richness
The (often numerical) data valued in conventional evaluations provides precision and conciseness. However, anecdotes and contextual data are also important to make the evaluation richer in terms of learning opportunities.
Rigour vs. relevance
Conventional evaluation can be overly concerned with methodological rigour and internal validity. This may be at the expense of the relevance of the evaluation for its target users or its applicability to future actions (external validity).
Elegance vs. applicability
Conventional evaluation tends to be too preoccupied with the elegance of the initial theories, when it should let them emerge from the local context.
Objectivity vs. subjectivity
The quest for objectivity in conventional evaluation tends to ignore the biases that will inevitably be caused by the interaction between interviewers and respondents. The solution is to be more transparent with regard to these biases, admitting an unavoidable part of subjectivity and relativity.
Verification vs. discovery
Conventional evaluation is sometimes more concerned with verifying the quality of an intervention than with exploring new solutions. In the approach developed by Guba and Lincoln, the objective also includes a better understanding of the situations studied.
Note that the authors have developed a checklist to help put these principles into practice, which can be found on the Passerelles platform.
The practical consequence of this stance is that the chosen methodology will never be perfectly objective. This does not mean, however, that evaluation must therefore be bad, deliberately biased or merely a matter of opinion or taste. It simply means that the evaluation is transparent regarding its initial assumptions. When a social economy organization is involved in an evaluation process, it should be aware that:
The choice of evaluation method has a decisive impact on the perception of the object being evaluated. Evaluation is therefore not entirely neutral and, in this sense, it contributes to the definition of the object being evaluated. One and the same organization can be represented very differently depending on whether one is measuring the volume of production, cost efficiency or the societal relevance of an action. As an analogy, an object will be perceived very differently depending on whether we choose a microscope, binoculars or a telescope as our preferred lens through which to observe (evaluate) it. Thus, the choice of evaluation tools and methods has a considerable influence not only on our vision of the world but also on the object being evaluated or even ourselves (TIESS, 2017).
Dealing with these ethical and political issues requires not only a clear sense of the underlying paradigm, but also thoughtful decisions about who will conduct the evaluation process.
Who should conduct the evaluation?
At the beginning of an evaluation process, many organizations ask themselves whether they should outsource the evaluation mandate. The answer to this question depends not only on the resources and expertise available internally but also on the audience of the evaluation report. For example, an evaluation intended for an internal audience (managers and administrators) can sometimes be carried out by team members with satisfactory results. However, if the evaluation is aimed at an external audience that is to be won over for a given action, it may be wiser to call on a third party. The latter, having greater distance from the situation, is more likely than any in-house team to confer an appearance of objectivity.
However, people are not easily fooled either. Studies that place undue emphasis on showcasing the organization, whether or not external consultants are involved, will often be met with some skepticism. See, for instance, the case of a study on work insertion enterprises in Quebec, summarized in the Impact of the Social Economy in Quebec section. It is preferable, as suggested by the ONN or Avise, to build trust by co-constructing your evaluation in partnership with your funders.
To further guide you in the decision to use an external resource or not, you can consult the following table, drawn from a guide compiled by the evaluation working group of the non-profit organization Communagir.
Internal or external evaluation?
Finally, there is also a hybrid option. In many cases, much of the evaluation work can be done in-house, with specific tasks (e.g., validating the evaluation strategy) entrusted to a trusted third party.
However, whichever option an organization chooses, evaluation should not be perceived as a purely technical matter to be left exclusively to experts. Rather, conducting evaluations should be among the core competencies of managers, just like managing human resources or understanding financial statements.
In order to move forward, you will need at least one person in the organization who is committed to advancing this issue. Ideally, there will be several, so that the expertise gained is not lost through staff turnover.
What should be evaluated?
As indicated in Why and for whom to evaluate?, the evaluation can cover several objects. Subsequently, in When and what to evaluate?, TIESS advises you, with regard to measuring impact, to focus on more direct outcomes that can reasonably be attributed to your intervention—your so-called accountability zone—rather than documenting the impacts (or lack thereof) on the ultimate goals.
There is an ethical dimension to the choice of assessing one aspect rather than another. Without telling you whether or not social impact measurement is desirable in your case, you should know that the choices you make in this area stem from ideological preferences that are often implicit, and that are worth identifying and naming.
One could thus explain the popularity of measuring social impact by the shift in our societies from deontological ethics to utilitarian (or consequentialist) ethics.
Wikipedia tells us that:
- Ethics (from the Greek ethos “character, custom, manners”) is a philosophical discipline dealing with value judgements.
- Consequentialism is part of teleological ethics and constitutes the set of moral theories that hold that it is the consequences of a given action that must form the basis of any moral judgment of that action.
- Deontological ethics or deontologism (derived from a Greek word meaning “obligation” or “duty”) is the ethical theory that states that every human action should be judged according to its conformity (or non-conformity) to certain duties.
For example, the interest in corporate social responsibility (CSR), which is still present today but which developed particularly in the 1990s and 2000s, was based on principles of transparency and respect for certain duties and processes. The social economy and cooperative movements, where actions are underpinned by principles and values, have traditionally been part of this lineage. In contrast, the impact investment and social entrepreneurship project suggests instead that the value of an action should be judged on the results produced: its social impact.
The following quotation from Norman and MacDonald, now more than fifteen years old, warns against the potential misuse of the concept of social impact:
It is common for advocates of [triple bottom line] and [corporate social responsibility] to talk of the “social performance” or “social impact” of a firm, as if this captured everything that was relevant for an ethical evaluation of the firm. On this view, what is morally relevant is how the firm improves its positive impact on individuals or communities (or reduces its negative impact). Presumably “social impact” here must be closely related to “impact on well-being” (including the well-being of non-human organisms). In the language of moral philosophy, this is to locate all of business ethics and social responsibility within the theory of the good: asking, roughly, how does the firm add value to the world? Obviously, this is a very relevant question when evaluating a corporation. But much of what is ethically relevant about corporate activities concerns issues in what moral philosophers call the theory of right: e.g., concerning whether rights are respected and obligations are fulfilled. Now clearly there are important links between our views about rights and obligations, on the one hand, and the question of what actions make the world better or worse, on the other. But unless we are the most simple-minded act-utilitarians, we recognize that the link is never direct: that is, we do not simply have one obligation, namely, to maximize wellbeing. Sometimes fulfilling a particular obligation or respecting a particular person’s rights (e.g., by honouring a binding contract that ends up hurting the firm or others) might not have a net positive “social impact” but it should be done anyway. (2004, p. 253)
In short, by focusing on the results that the enterprise generates (its impact) rather than on the way it operates (its processes), the notion of measuring social impact in some way challenges the traditional mode of action of the social economy and community action.
Groupe de travail sur l’évaluation. (2018). Une évaluation utile et mobilisatrice, est-ce possible? Communagir.
Guba, E. G. and Lincoln, Y. S. (1989). Fourth generation evaluation. Sage.
Guba, E. G. and Lincoln, Y. S. (2001). Guidelines and checklist for constructivist (aka fourth generation) evaluation.
Nicholls, A. (2015). Synthetic Grid: A critical framework to inform the development of social innovation metrics. Oxford: Creating Economic Space for Social Innovation (CRESSI).
Norman, W. and MacDonald, C. (2004). Getting to the Bottom of the “Triple Bottom Line.” Business Ethics Quarterly, 14(2), 243‒262.