Despite the potential benefits of social impact measurement, TIESS does not support the idea of making such measurement mandatory for social economy enterprises. This section explores some of the potential negative impacts of the imposition of social impact measurement: effects of reactivity, counter-productive competition and undue influence on the governance of social economy organizations.
Summary: Mandatory impact measurement may elicit unexpected reactions from some social economy enterprises. For example, organizations may resort to strategies or behaviours that improve their apparent performance on certain indicators rather than enhancing actual outcomes. Or organizations may simply adopt practices as lip service, without really believing in them.
In their article on rankings and reactivity, Espeland and Sauder (2007) use the concept of reactivity to examine the introduction of rankings into university law programs in the United States. The authors define reactivity as situations in which organizations or “individuals alter their behavior in reaction to being evaluated, observed, or measured” (p. 6).
Reactivity can be manifested through the following two mechanisms in particular:
- Self-fulfilling prophecy ‒ an anticipated result is announced, the actors adjust their behaviour accordingly and, due to the new behaviour, that result is realized (p. 11). Example: The media announces that no one believes that a candidate has a chance of being elected, prompting voters to strategically vote for someone else; as a result, said candidate will indeed not be elected.
- Commensuration ‒ a process by which qualities are transformed into quantities that can be compared because they share a common unit of measurement (p. 16). Example: Assigning a monetary value is the most typical example of commensuration; however, any ranking or rating has the same effect.
In addition, reactivity can have the following three effects on organizations:
- Allocation of resources to maximize results directly (and narrowly) obtained from the measurement, such as indicators or rankings (p. 25).
- Redefinition of policies and efforts to maximize results obtained from the measurement (p. 27).
- Use of gaming strategies, in other words, playing with rules and numbers in order to enhance the appearance of the performance (p. 29). Some of these strategies are discussed below.
One of the best-known risks of introducing performance evaluation based on externally imposed indicators is what the literature calls goal displacement, that is, the replacement of the goals initially targeted with new objectives directly linked to the indicators put forward.
In order to appear better performing, some organizations might focus their efforts on producing “nice indicators” rather than on the activities and outcomes these indicators are supposed to represent. Since investors are looking for performance, there is a risk that the most difficult projects will be abandoned. Standardizing impact measurement risks denying the highly contextual nature of social enterprises (Alix & Baudet, 2015, p. 21).
For example, a physician paid on a fee-for-service basis will focus on the number of patients he or she sees in a day (if that is the indicator) rather than on improving patients’ health (which is the original target, imperfectly reflected by the indicator).
In some cases, a performance assessment defined and framed by a donor through certain imperfect indicators could even be counterproductive to achieving the objectives desired by other key stakeholders.
For example, an international aid agency could be rewarded by its donors according to the number of bags of rice it distributed, even if the local population contests the legitimacy of this action, considering it dumping that harms local farmers.
When such behavioural changes are significant enough to affect the core of an organization’s raison d’être, it is called mission drift (Hurvid, 2013).
Do social impact measurement and results-based evaluations have a real influence on the objectives of the organizations that are targeted by these measures? Studies proving this relation are scarce; however, according to Ebrahim and Rangan (2014, p. 120), citing a survey by United Way of America, 46% of the surveyed agencies believed that “Program Outcome Measurement has […] led to focus on measurable outcomes at the expense of other important results” (United Way of America, 2000, p. 6).
Morley (2017), in an article on “the impact of impact,” documented cases of organizations (social enterprises or charities) in the United Kingdom that have incorporated the language and logic of impact measurement into their activities and communications. While in some cases this integration was deemed useful and satisfactory, in others it was hardly seen as integration but rather as the adoption of certain codes for the sole purpose of increasing legitimacy in the eyes of funders and other external stakeholders. This situation, where a non-profit organization wants to appear more professional and comparable to a traditional business without necessarily being so, is referred to as business washing.
The following testimony is a perfect illustration of what is meant by business washing:
“The reason we did the SROI [social return on investment measure] and the social impact was because my predecessor was trying to make the organization credible. And she’s very well-connected, and has always moved amongst the great and the good in the sector, and so [she] knew what it was that would make people happy to see. And it has stood us in really good stead. So we did all that, and then we’ve ignored it, other than updating the calculations […].” (Morley, 2017, p. 7)
Thus, after green washing, where companies project an image of environmental responsibility without really committing themselves to consistent action, and social washing, where large companies project an inflated image of responsibility and social commitment in comparison to what they really do, we might see the emergence of business washing, where, in order to be taken seriously by business circles, non-profit organizations borrow the language of social impact measurement to appear more professional and respectable.
In cases such as this one, “the mere use of social impact language, in particular performance reporting using economic and financial terminology has a demotivating effect on staff, even if there is no associated change in organizational practice” (Morley, 2017, p. 4). This not only fails to achieve the original purpose (improving performance) but also demoralizes employees. How is that possible? Let’s proceed by analogy.
In 1970, Richard Titmuss argued that blood donations would decline if potential donors received financial rewards, because of the crowding out of their moral motivation to donate blood. Economists describe this phenomenon as the crowding out of intrinsic (internal, non-financial) motivation by extrinsic (external, financial) incentives (Morley, 2017, p. 16).
The same is true in several areas of intervention where the social economy is active, such as personal care: “‘Saving the taxpayer money’ may sit less easily with the intrinsic motivation of staff and not be their intended action, whereas they may intend to ‘chang[e] a person’s life for the better’” (Morley, 2017, p. 21).
Other reactivity effects
Other reactivity effects (unexpected or undesirable behaviours) are identified by Cabaj (2017, pp. 13‒15), based on Smith (1995), and are summarized as follows:
Perverse behaviours in response to performance measurement
- Tunnel vision: Organizations, faced with many different targets, choose the ones that are easiest to measure and/or offer rewards, and then ignore the rest.
- Sub-optimization: Organizations choose to operate in ways that serve their own operations well but damage the performance of the overall system.
- Myopia: Organizations focus their efforts on short-term targets at the expense of longer-term objectives.
- Measure fixation: When outcomes are difficult to measure, there is a natural tendency to use measures based on measurable outputs, which replace the desired outcome as the organization’s major focus.
- Misrepresentation: Organizations misreport or distort performance measures to create a good impression.
- Misinterpretation: Organizations use or analyze information in a way that is misleading and/or difficult to interpret.
- Gaming: Organizations deliberately under-achieve in order to secure a lower target in the next round of activity.
- Ossification: Organizations cannot be bothered to revise or remove measures that are past their “sell-by” date and/or have lost their purpose.
Competition and conformity
Summary: By pitting social economy enterprises against one another on unsuitable grounds, a funder who wishes to finance only enterprises that generate a measurable social impact would risk discouraging the development of certain innovations and rewarding those enterprises that are best able to generate and demonstrate certain pre-established outcomes, to the detriment of other equally desirable outcomes.
As highlighted by Agence Phare (2017, pp. 29‒31), many people express fears about social impact measurement becoming informed by new public management and neoliberalism. Were that to happen, social impact measurement would become a tool for “increased control and competition between structures” (Agence Phare, 2017, p. 29; our translation). Chiapello (2013), in an astute editorial in the journal Confrontations Europe, discusses some of these fears alongside other criticisms of social impact measurement:
This shift is linked to the desire to extend competition to all activities. The structures of the social economy, which historically worked in partnership with the public authorities on broad missions over a long period of time, must now enter into bidding processes for specific projects and limited time horizons. Obviously, this evolution leads to the development of measures to compare and demand results. The desire to see the development of private financing is also at play in this shift. State coffers are empty while abundant private funds are looking for investment. It is therefore a matter of attracting private funds by building an investment universe that resembles that of finance and that can be intermediated by non-specialist fund or asset managers.
The craze for impact measurement is therefore a sign of our times. It appears attractive not because of its presumed effectiveness but because of its ability to reorganize the social services sector so as to encourage remote steering by non-professionals in the sector, project contractualization for public or private financiers, and the generation of easily digestible information for potential investors. However, attention will need to be paid to what risks being lost should such practices come to dominate the entire field of social activities. (Chiapello, 2013, p. 1) (our translation)
The main fear is, therefore, that the introduction of increased competition for certain types of financing, based on the invariably imperfect measures of social impact, will encourage the types of companies that are good at highlighting their impact rather than those that are good at actually producing the impact.
It wouldn’t take much for initially marginal differences to increase as a result of feedback loops: the more organizations measure their impact, the more funding they receive, the more they are able to measure their social impact, leaving the laggards far behind and constantly widening the gap.
In addition, this competition would also introduce a harmful bias: only those evaluations likely to cast a favourable light on the organization would be carried out, or at least made public. The spirit of a genuine evaluation, carried out with a view to finding weaknesses and making improvements, would then be lost in favour of an expensive communication and marketing operation that seeks to win over a certain clientele by framing social impact in a distinct, fashionable language.
Another potential consequence of social impact measurement is what some call mimicry, conformity or standardization (VISES, 2017, p. 28). Alix and Baudet, for their part, contend that competition for recognition and funding based on the common measurement of certain outcomes rather than others leads to “the alignment of practices on the majority,” thus leading to conformity and the restriction of innovation (Alix & Baudet, 2015, p. 22, our translation). Alix (2015) adds:
Other risks worth mentioning are those of conformity, restructuring of supply and avoidance of innovation. Impacts could lose their importance relative to conformity with the rating criteria […] While the European Commission’s Expert Group on Social Entrepreneurship (GECES) does not require following such a protocol, it recommends a comply-or-explain system, which, experience shows, leads rather to alignment with the practice of the majority. A label for social enterprises based on “statistical methods and the establishment of common indicators” (Commission européenne, 2011) would go in the same direction. For this reason, not all policies should be aligned with social impact “e-measures” as currently manufactured by the financial industry (Alix & Baudet, 2013). Local knowledge of structures and professions remains an essential and complementary element of any responsible social policy (Chiapello, 2013). (Alix, 2015, pp. 111‒112) (our translation)
We identify competition, loss of confidence and external control of the social sector as the most important fears. In the words of Chiapello (2013), part of which was quoted earlier, these fears may be summarized as follows:
Control by knowledgeable insiders is thus being replaced by complicated processes of standardized and possibly audited measurement. This shift is linked to the desire to extend competition to all activities. The structures of the social economy, which historically worked in partnership with the public authorities on broad missions over a long period of time, must now enter into bidding processes for specific projects and limited time horizons. Obviously, this evolution leads to the development of measures to compare and demand results. The reactivity effects are now well documented, starting with that of the self-fulfilling prophecy. Small differences in scores at the outset can contribute to producing large real differences, especially in terms of resources. Other risks are those of conformity, restructuring of supply and avoidance of innovation. In order to be rated well, some organizations could be tempted to cease all activities that are not represented in the figures. (Chiapello, 2013) (our translation)
In short, it is not so much the measurement of social impact that is frightening as the transformation of funding methods that it makes possible. Indeed, social impact measurement doesn’t come alone; it is accompanied by a paradigm, a vision of what the world should be, carried forward by actors who have resources, who are part of a network and who have interests. One way to prevent impact measurement from being used to enable a type of transformation that is not desired by the social economy sector would be, as Gouin (2018) suggests, to decouple impact and efficiency from giving. We would then have donors supporting the evaluation out of a desire to support and improve the sector’s practices rather than to pit social economy enterprises against one another and facilitate their control from the outside. But is that really possible?
Evaluation as a mode of governance
Summary: Evaluating the impacts of social economy enterprises is a strategic issue that can never be neutral. The preceding sections explained what essentially constitutes the premise of the TIESS project on impact measurement. The criticism presented here is not intended to invalidate evaluation as such but to warn against certain abuses of it. It is in that context that social economy enterprises and networks are called upon to develop a critical perspective on the issues at stake. Overall, networks and enterprises are not discouraged from engaging in evaluation but rather encouraged to seize it as an opportunity to reflect on, rethink or reaffirm their objectives, priorities, identity and political project.
TIESS also concurs with Bouchard, who argues that “the mode of evaluation is apt to determine the mode of governance of the organizations participating in it” (2009, p. 251; our translation). Thus, we caution against evaluations that claim to be depoliticized, neutral and based on external technical expertise ensuring objective decision-making according to principles of “good governance.”
In a decision-making system that substitutes indicators and standards for the experience of experts, decisions will necessarily be constrained by the figures those indicators produce, whereby the self-fulfilling prophecy unfurls at full throttle (Chiapello, 2014). (Alix, 2015, pp. 111‒112) (our translation)
Tensions around evaluation arise when one of the stakeholders appropriates the right to define what constitutes social usefulness and to impose a particular method for measuring that usefulness. Indeed, adopting an evaluation method is like putting on special glasses. Depending on the type of glasses, observations will differ. Hence, when seeking to understand the issues involved in evaluating social usefulness and to render the process appropriate, we must first understand who is doing the evaluating and how the evaluation is carried out. (Branger et al., 2014, p. 4) (our translation)
This call for skepticism is not, however, a rejection of evaluation, expertise or the quest for objective, informed, “true” information:
Even though the legitimacy of the expert—considered impartial because he or she is external—can be questioned by the legitimacy of democratic processes, it should not be rejected. The presence of an expert who is proficient with the different methods has proved indispensable for the methods tested by Corus-ESS, including the one based on consultation. Expertise should not be seen solely in contradiction with internal debate, consultation and deliberation processes. Rather, it can, to a certain extent, enrich these when care is taken to ensure that it does not replace them. (Branger et al., 2014, p. 37) (our translation)
Hence, the call for skepticism is simply an admonition recommending participatory, negotiated evaluation that takes power imbalances into account and attempts to counter them by giving a voice to groups that might otherwise be excluded from the discussion of what is given value. For that is the whole issue of evaluation: determining what is valuable.
It is important to ensure that all those organizations and groups that will contribute to its success (or failure) are involved in calibrating performance metrics. The process of determining who should be involved and at what scale will be a matter of judgement. Involving stakeholders is one way of reducing the risk of missing a significant contributor to the desired outcome (Jacobs, 2006). (Nicholls, 2015, p. 9)
In short, the way in which social economy enterprises are evaluated will help define the role that these organizations ought to play in the development model of our societies (Bouchard & Richez-Battesti, 2008, p. 7).
This lays the groundwork for a project which the Franco-Belgian initiative Valorisons ensemble l’impact social de l’entrepreneuriat social (VISES) summarizes as follows:
When seeking to increase awareness and recognition of the SSE, it is important to ensure that any system of indicators to be established can report on what an SSE enterprise is and what it produces. When the productivity of the SSE is captured only or mainly by the results of its outputs, and little or not at all by its governance and redistribution practices, its specificities (and their outcomes) risk not being considered. (VISES, 2017) (our translation)
This leaves us with the question of who is to define how social economy enterprises are evaluated.
 Original quote: Ce glissement est lié à la volonté d’étendre la concurrence à toutes les activités. Les structures de l’économie sociale, qui travaillaient historiquement sur la longue durée avec les pouvoirs publics sur des missions larges, doivent maintenant entrer dans des processus d’appels d’offres sur des projets précis et des horizons de temps limités. Évidemment, cette évolution pousse au développement de mesures pour comparer et exiger des résultats. Joue également dans ce déplacement le souhait de voir se développer les financements privés. Les caisses des États sont vides, alors que des fonds privés abondants sont en quête d’investissements. Il s’agit donc de les attirer en construisant un univers d’investissement qui ressemble à celui de la finance et qui puisse être intermédié par des gestionnaires de fonds ou de fortune non spécialistes. L’engouement pour la mesure d’impact est donc un signe des temps. Elle apparaît attrayante, non à cause de son efficacité présumée, mais du fait de sa capacité à organiser autrement le monde des services sociaux en favorisant le pilotage à distance par des non-professionnels du social, la contractualisation sur projet pour des financeurs publics ou privés, et la génération d’une information digeste pour d’éventuels investisseurs. Il faudra faire attention à ce qui risque d’être perdu si ces pratiques finissaient par dominer tout le champ des activités sociales. (Chiapello, 2013)
 Original quote: D’autres risques à mentionner sont ceux de conformation, de restructuration de l’offre et d’évitement de l’innovation. Les impacts pourraient perdre de leur importance par rapport à la conformation aux critères de notation […] Si le GECES n’oblige pas à se référer à un tel protocole, il préconise le système « se conformer ou se justifier » (comply or explain), dont l’expérience montre qu’il conduit plutôt à l’alignement sur la pratique de la majorité. Une labellisation des entreprises sociales à partir de « méthodes statistiques et [de] la mise en place d’indicateurs communs » (Commission européenne, 2011) jouerait dans le même sens. C’est la raison pour laquelle les politiques ne doivent pas toutes s’aligner sur des « e-mesures » d’impact social telles qu’est en train d’en fabriquer l’industrie financière (Alix, Baudet, 2013). La connaissance de proximité des structures et des métiers reste un élément incontournable et complémentaire de toute politique sociale responsable (Chiapello, 2013). (Alix, 2015, pp. 111‑112)
 Original quote : On substitue dès lors des processus compliqués de mesures standardisées, et éventuellement auditées, à un contrôle par des connaisseurs. Ce glissement est lié à la volonté d’étendre la concurrence à toutes les activités. Les structures de l’économie sociale, qui travaillaient historiquement sur la longue durée avec les pouvoirs publics sur des missions larges, doivent maintenant entrer dans des processus d’appels d’offres sur des projets précis et des horizons de temps limités. Évidemment, cette évolution pousse au développement de mesures pour comparer et exiger des résultats. Les effets de réactivité sont maintenant bien documentés, à commencer par celui de prophétie autoréalisatrice. De faibles différences de scores au départ peuvent contribuer à produire de grandes différences réelles notamment en termes de ressources. D’autres risques sont ceux de conformation, de restructuration de l’offre et d’évitement de l’innovation. Pour être bien notées, certaines structures pourraient être tentées d’arrêter tout ce qui n’est pas représenté dans les chiffres… (Chiapello, 2013)
Agence Phare. (2017, mars). L’expérience de l’évaluation d’impact social. Pratiques et représentations dans les structures d’utilité sociale.
Alix, N. (2015). Mesure de l’impact social, mesure du « consentement à investir ». Revue internationale de l’économie sociale : Recma, (335), 111-116. doi:10.7202/1028537ar
Alix, N. et Baudet, A. (2015). La mesure de l’impact social : facteur de transformation du secteur social en Europe (no 2014/15). Belgique : CIRIEC.
Bouchard, M. J. et Richez-Battesti, N. (2008). L’évaluation de l’économie sociale et solidaire : une perspective critique et internationale. Économie et solidarités, 39(1), 5-13.
Branger, V., Gardin, L., Jany-Catrice, F. et Pinaud, S. (2014). Évaluer l’utilité sociale de l’économie sociale et solidaire. Projet Corus-ESS (Connaissance et reconnaissance de l’utilité sociale en ESS).
Cabaj, M. (2017). Shared Measurement | The why is clear, the how continues to develop. Tamarack Institute.
Chiapello, E. (2013, mai). Mesure de l’impact social : pourquoi tant d’intérêt(s) ? Bulletin mensuel de Confrontations Europe.
Ebrahim, A. et Rangan, V. K. (2014). What Impact? A Framework for Measuring the Scale and Scope of Social Performance. California Management Review, 56(3), 118-141.
Espeland, W. N. et Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1-40.
Gouin, R. (2018, janvier 25). Les dangers (relatifs) de la culture de l’impact. The Conversation. http://theconversation.com/les-dangers-relatifs-de-la-culture-de-limpact-90265
Hurvid, D. (2013, May 1). Mission Drift: Avoiding the Slippery Slope. Imagine Canada Blog. http://www.imaginecanada.ca/blog/mission-drift-avoiding-slippery-slope
Morley, J. (2017). The impact of “impact”: The effect of social impact reporting on staff identity and motivation at social enterprises and charities in the UK. Working Paper.
Nicholls, A. (2015). Synthetic Grid: A critical framework to inform the development of social innovation metrics. Oxford: Creating Economic Space for Social Innovation (CRESSI).
Smith, P. (1995). On the unintended consequences of publishing performance data in the public sector. International Journal of Public Administration, 18(2–3), 277-310. doi:10.1080/01900699508525011
United Way of America. (2000). Agency Experiences with Outcome Measurement.
VISES. (2017). Orientation stratégique du projet VISES – Approche des théories et pratiques. Lille et Louvain-la-Neuve : CRESS et ConcertES.
Want to know more?