Paddy Carter
Director, Research & Policy @ British International Investment
October 2022

I must start by confessing that so far I have only watched the IPA video.

The message that naïve before-and-after comparisons can be misleading is true and important, but the problem with “go rigorous or go home” is that, if I had to guess, we only stand a chance of doing something rigorous with about 5% of our portfolio.

These conversations can misfire because people use words differently – I am using “evaluations” to refer to any concerted effort to go out and understand something about impact, over and above the impact metrics that get reported back to you by the investee.

Another caveat is that these questions look different to different types of investor, which seek to achieve different types of impact in different ways. BII is a “full service” DFI that invests in everything from infrastructure, banks, and VC funds to direct investments in automotive manufacturers or large agricultural off-takers. How easy it is to do something rigorous varies a lot across that set.

This is an enormous topic – here are only a few starters.

In my view there are a whole bunch of worthwhile things to do, with the general goal of testing your theory of change, that fall between rigorous and nothing. This is ex-post impact due diligence. You want information that confirms (or not) that what you hoped would happen seems to be happening. It is suggestive in the absence of causal identification. Even a before-and-after comparison might be worth knowing about, if you are in a context where confounding factors are less of a concern.

It seems obvious that learning more about how (and whether) your investments achieve impact is important for impact-led investors, but it is surprisingly difficult to produce evaluations that are useful to senior decision-makers in an investment organisation (producing evaluations that are interesting in their own right is easy).

I am setting aside the “accountability” function of demonstrating to your investors (in our case the British government) that you are achieving something worth the money you’ve been given.  That matters to senior leadership, but does not much help with strategic decisions or day-to-day investment decisions.

I think part of the problem is that we are not designing interventions that we want to evaluate in order to help us design a better intervention. Management teams run the businesses that we invest in, and they react to market conditions. Sometimes evaluations do help the managers of specific businesses raise their impact, but that’s a somewhat different objective. Senior leadership at BII want to know what we should be doing more or less of, but we do not have a single coin with which to measure impact across everything we do, and it’s hard to get evaluators to produce results that help answer these sorts of questions. Raw information about the quantities of heterogeneous impacts achieved across a set of investments is not much use.

We have small samples too, so it’s hard to separate noise from signal and to know whether investments that seemed to achieve high (or low) impact tell us that the next similar investments are likely to do the same.

It is perhaps also worth noting that we won’t always want to know about impact in the sense of “what happened when we invested, minus what would have happened had we not”. If we invest in an EV manufacturer, we want to know how many jobs resulted, who got those jobs, how much carbon was avoided because of those EVs, and so on. We are not always interested in those answers being relative to what might have happened if we had not invested and some other EV manufacturer had come along instead. To use a rather tired analogy, if you save a child from drowning, you did something real, not nothing, even though the person next to you would also have saved them under the counterfactual. The whole “treatment / control” set-up gets trickier when you are investing in one of a number of actors in a competitive market.

That is probably enough for now.

About this discussion
“What can impact investors learn from evaluators (and vice versa)?”
Over the last decade, investors claiming to have positive impacts on people and the planet have been called upon to meet increasing standards of impact measurement and reporting, to avoid the perception (or the reality) of ‘impact-washing.’ As the communities of impact evaluation and investing converge, what can each learn from the other?
