Is Diversifying Grantmaking About Metrics Or Methods?
Maybe we got the aperture wrong but not the method?
Earlier this year I took a look at how different ways of grantmaking – in particular project-based funding – might be a contributing factor to the ongoing meltdowns at progressive organizations. More recently, I looked at philanthropy and risk.
In both cases, some of the most interesting responses came from people inside philanthropy – blink three times if you’re being held against your will.
So I want to share a third and related observation. In an effort to improve diversity in the education sector, we may be hindering the effective operations of non-profits more than helping them. Some people in the non-profit and philanthropic sector have raised concerns about this as well; it’s not my unique insight.
Also, at the outset let me be clear: Improving access to philanthropic capital (and private capital, for that matter) is an essential goal, and one that involves a focus on diversity. As in many areas of American life, patterns – in particular along lines of race, ethnicity, and gender – should cause concern and action. To the sector’s credit, philanthropists saw this problem, too, and are trying to address it. (It’s also a problem along ethnic and class lines. Going to the “right” schools, for instance, still helps a lot.) The end result is that a lot of people are shut out. Changing that is for the good.
My concern, instead, is about the how of changing that, not the goal itself.
A brief history of grantmaking that is over-simplified but good enough for our purposes goes like this: For a long time, grantmakers gave money to people, usually men, and usually white men, whom they saw as change agents. There were generally agreed-upon goals – open up schools, develop new university or secondary school curricula, pilot this or that. But the specifics of how to do the work were generally left up to the grantees. Sometimes this strategy led to positive change, sometimes it didn’t.
It was an invest-in-people strategy. Give people money to do things they seek to do to try to improve the world.
While this work certainly led to progress in a variety of ways, it also had one big problem. Owing to the structure of American life and prevailing views, it was mostly white guys getting the money. Sure, there were exceptions. But the fact that we can quickly name so many of those exceptions, historically and more recently, basically illustrates the problem.
This is not a pre-1964 or 19th-century problem. This was still a pretty common pattern when I started doing this work a couple of decades ago.
To address this, grantmakers decided they needed metrics and measures to help ensure more diversified philanthropy. This became something of a best practice. And they turned to management consultants to create those metrics – often consultants who said, “well, why use five metrics when 50 will do?” (I’m obviously not anti-consultant; Bellwether consults. But my tastes run more to KISS or stuff like this.)
Meanwhile, woe to the grantees who have to live under all of those metrics. It’s especially hard for new, scrappy, and under-resourced grantees – who are more likely to come from what’s now fashionably called “marginalized” backgrounds. Even in well-established organizations it can distort priorities and create paper chases. And yes, it can lead to the kind of internal discord, fueled by lots of small grants, that leads to meltdowns.
So here’s the question: What if this solution is a misreading of the problem and consequently a badly designed remedy?
I’d suggest the problem might not have been the method of giving – invest in people, be tight on goals and results, and loose on the rest. Rather, it was the more basic problem of who was and was not getting the money. In other words, we might ask if the method was sound but the field of view was too constrained. Invest in people, yes, but make sure you are investing in a far more diverse and representative group of people.
This would point to a strategy of opening the aperture, but perhaps not changing the foundational methods of giving.
The problem with the “old” way of grantmaking, it seems, was that it was far too narrowcast. But the method itself – give money to people and expect them to do good things – is not inherently flawed. In fact, it may be preferable to burying everyone in metrics.
The detailed metrics all feel safe. Everyone is doing the same things and being measured the same way. Using metrics and data to compare feels very progressive and equitable.
But there are costs.
First, there is a noise-to-signal problem. Grantees are going to try to meet these metrics, or at least appear to meet them, and sometimes what gets measured is a distraction from the core goal of a grant. Take, for example, “press hits,” a popular metric for policy work. The media is a key part of the policy ecosystem, but getting a press hit and getting something done are two different things. Some of the most impactful work is behind the scenes, and a focus on press hits creates an incentive for bad or distracting behavior. Or consider meetings – a metric counting how many you take. With both media and meetings, quality matters more than quantity. Yes, requirements can focus on high-quality earned media or decision-maker-level meetings. But why? If you don’t trust grantees to have that political sophistication, a better strategy is to provide support to develop it.
Second, there is a capacity inequity. Large grantees can bring on talent to manage these grant processes. Setting up all these metrics and tracking progress against them takes time. Smaller grantees – and smaller organizations are more likely to be led by people from exactly the kind of backgrounds funders now want to include – often don’t have that capacity. It’s really not the highest and best use of scarce resources for them anyway.
Third, it leads to informal evasion and bad practice. I’ve been told by funders not to worry about metrics because “no one reads them.” I know others get that advice. I don’t have any reason to believe that different people are being held to different standards in any way that’s not pretty random and program-officer driven. But why leave people wondering which metrics actually matter when you could just ask for data on the ones that really do? It’s sort of mandarin.
Fourth, leveling down. The idea that a focus on compliance or common outcomes leads to a leveling down of quality is hardly a new one. But it seems relevant, and hard to miss, here.
Now look, to be clear* – there’s always a ‘to be clear’ graf on complicated stuff, and especially on complicated stuff being oversimplified – I’m not against measurement and evaluation. Or against diligence. We do a lot of that work here at Bellwether. As I’ve noted, I don’t think blank-check giving is all it’s made out to be. I just think the sector can find a happier medium.
The point is that metrics and measurement should be the minimal amount needed to help discern impact, not tied to every aspect of a logic model – especially on dynamic work like policy or advocacy. Measurement should certainly not take on a life of its own within any project. So more MVP than pseudo-scientific or creating false precision.** Measurement should serve the work, not the other way around. I should also note that there are a variety of methods of giving out there, some like what I’m arguing for. Not every grant is a blizzard of metrics. But too many are.
And also to be clear, I don’t want to conflate metrics about who gets dollars with the metrics attached to a grant. Collecting data on the who is important and not especially burdensome. And tracking the who is pretty essential to ensuring that efforts to diversify portfolios are working. I’d suggest a broader set of metrics than what is sometimes used – one that also includes economic class, veteran status, political orientation, and other measures of diversity.
The larger point is pretty obvious: Was the problem with the old way the method of giving, or rather the size of the aperture in terms of who was involved? Broadening the aperture while also giving people a lot of room to run seems worth considering.
More diverse grantees and more flexibility for them. Not a great bumper sticker or tee shirt, but maybe worth trying more? A more diverse set of grantees all laboring under the same constraints doesn’t seem like true progress.
*Also, to be clear, at Bellwether we get some foundation funding (though not as much as would help us achieve all our goals, and it’s a fraction of our overall annual budget). All sources of funding are disclosed on our website and in any project or publication where it’s relevant. All our funders are brilliant, virtuous, beautiful, visionary people.
**The tyranny and craziness of small n-sizes in all these things is a whole separate issue.