Jonathan Harris
Founder @ Total Portfolio Project
November 2022

@John Berger, great questions! I think it really depends on what you think the errors in play ‘look like’, and what the risks are if you make the wrong decision. I can see different ‘ratings math’ doing well in different contexts. The more I think about this the more curious I get about the specifics of how everyone else is thinking about it.

This discussion, where we’re sharing our current ‘answers’, seems like a good starting point. But it doesn’t feel like it will be enough to convince anyone one way or the other.

One idea that might work as a next step is to put together a commonly agreed pool of example (hypothetical) investments – with a wide mix of scores on the impact dimensions and other characteristics – and then apply different approaches to them. We could also stress test the approaches by adding different errors to the dimensions. Different investments would rank differently under each approach and perform differently under the stress tests.

The result would be that rather than just saying ‘I multiply’ or ‘I add’, we could say ‘I use a weighted-average approach because it makes sense to me AND it worked best for my asset class in the Impact Frontiers ratings stress test’.
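To make the idea concrete, here is a minimal sketch of what such a stress test could look like, in Python. The pool of 50 investments, the three impact dimensions, the 1–5 scale, the weights, and the error model are all made up for illustration; they aren't taken from any existing rating system and would be replaced by whatever the group actually agrees on.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical pool: 50 investments scored 1-5 on three made-up impact
# dimensions (say depth, scale, contribution). Illustrative only.
n_investments, n_dims = 50, 3
scores = rng.integers(1, 6, size=(n_investments, n_dims)).astype(float)
weights = np.array([0.5, 0.3, 0.2])  # illustrative weights, summing to 1

def additive(s):
    """'I add': weighted-average rating across dimensions."""
    return s @ weights

def multiplicative(s):
    """'I multiply': product of the dimension scores."""
    return np.prod(s, axis=1)

def rank_stability(rating_fn, noise_sd=0.5, n_trials=200):
    """Average Spearman correlation between the ranking on clean scores
    and rankings recomputed after adding random measurement error."""
    clean = rating_fn(scores)
    corrs = []
    for _ in range(n_trials):
        noisy = np.clip(scores + rng.normal(0, noise_sd, scores.shape), 1, 5)
        corrs.append(spearmanr(clean, rating_fn(noisy))[0])
    return float(np.mean(corrs))

for name, fn in [("weighted average", additive), ("multiplicative", multiplicative)]:
    print(f"{name}: rank stability under noise = {rank_stability(fn):.3f}")
```

Swapping in other error models (errors concentrated on a single dimension, correlated errors, occasional large outliers) would show where the additive and multiplicative rankings start to diverge.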

Thoughts? Anyone want to see if we can do something like this? Or other ideas?

John Berger
COO @ Women of the World Endowment
November 2022

Jonathan

It might help if I share some of my biases.  This is long but it will help explain why I am asking these questions.

I’ll start with a concern that stems from some of the trends I have seen in nonprofit evaluation. In the nonprofit funding world, there has been a strong shift toward the idea that evidence in numerical form is better than qualitative evidence, and thus a strong push by funders to require nonprofits to report quantitative outcome data. Sometimes that’s great and improves the work and evaluation, but more often than not what I see happening is that nonprofits end up spending a lot of time and money creating low-quality data that carries far less useful information than the qualitative reporting and related due diligence that funders used to rely on. It’s been very discouraging to see some amazing nonprofits lose funding because they don’t play the data game well with funders.

Similarly, in my work in the impact metrics space and with the emerging ESG and other sustainability reporting requirements, I have seen a trend (not accusing anyone here of thinking this way) that is basically faith in the idea that if we make people put numbers to things, it’s always better than narrative. That faith then extends to “all data is good data,” yet the metrics systems (even the best ones like TCFD, if you look closely) are kind of a mess, with so many reporting options that the resulting data really isn’t comparable between reporters.

I also have a bias from my banking life, where we have lots of financial data that is well specified and pretty easy to report comparably between companies, yet it is still just the past and has a lot less power to tell the future than we would like. For example, if I want to understand a company’s CAPEX, I can see the history easily, but I’d much rather read a narrative about the company’s R&D plans than look at data.

Because of all those biases, I have been trying to learn more about the breaking point in the utility of qualitative vs quantitative approaches and how that question relates to information in impact, ESG, etc.

I don’t have an answer or even a good path yet.

As for your suggestion that the “next step is to put together a commonly agreed pool of example (hypothetical) investments … then apply different approaches to them”: that’s a good idea and seems worth pursuing.

About the contributor
Jonathan Harris
Founder @ Total Portfolio Project