The quick answer seems to be ‘nearly everybody’. In a report I recently compiled for WINGS, the need for more data was an almost constant refrain among those I spoke to. In a few – very few – parts of the world, there is plenty of data, while in others, there’s practically none. That’s only one of the problems.
Even in countries like Canada, with a well-developed philanthropic infrastructure, respondents complained about the absence or poor quality of data. So before we get carried away in a headlong rush for information, it’s worth asking what sort of data we need and whether what we already collect is as useful as we think it is.
First, consider the survey: it’s too blunt an instrument to offer much precision.
In surveys on individual giving, for example, giving to religious causes or institutions always comes out pretty near the top, even in secular countries.
This might seem surprising, until you consider that no distinction is drawn between gifts made to the institution for its own maintenance and gifts made through it in support of a welfare or social programme.
Many people simply see giving through the mosque, church, synagogue or other faith-based institution as the most effective way of giving to the needy. Education – another top destination for donors – is similarly unsatisfactory.
Is it someone funding a new building at their alma mater? Or is it a donation to provide teachers or equipment in an underserved area?
These and other forms of imprecision are not necessarily the fault of the survey designers. There’s a limit to what you can ask and to what participants can answer in a questionnaire.
Second, trying to compare data across countries or regions is a chancy proposition.
Even in Europe, for example, where there’s plenty of data and which appears (at least to outsiders) to be relatively culturally and institutionally homogeneous, comparing like with like remains an impossible exercise, as last year’s study by the European Research Network on Philanthropy (ERNOP) shows. In some countries, data is incomplete because it was either not collected or not made available to the researchers.
The figures for Norway don’t include corporate giving. We have ‘only an incomplete picture’ of foundations in most European countries and, in sum, the information available ‘does not yet provide a convincing and comprehensive story about philanthropy.’
Other comparative studies of European philanthropy have similar weaknesses – the most up-to-date information on country X is from 2014, while for country Y it is 2016, for example; or they exclude donations over a certain amount; or the categories of recipient aren’t the same, and so on.
The researchers themselves are well aware of these defects – the imprecision of their instruments and the often incomparable character of their results – in fact, they go out of their way to caution us about the second and to try to compensate for the first. The trouble is, we can’t help ignoring them.
There’s a seductively one-dimensional quality about statistics. They appear precise and authoritative, and the temptation to read supposedly comparative sets of data as league tables, however spurious, is overwhelming.
But besides the problems we know about – if we choose to think about them – I think there’s another that’s less obvious.
My suspicion is that researchers are more inclined to produce data for other researchers than for those who might be in a position to put it into practice. Information goes round a small circle.
It’s picked up and quoted by people like me in reports and articles, so we can pat ourselves on the back because it looks like we’ve done our homework.
But for those who don’t need to know how many foundations there are in this or that country, or who already know from bitter experience that very few people give to human rights causes … well, it can be hard to see what they get out of it.
Data seems to be serving least well the people who need it most. There might be a few reasons for this. For instance, because those who gather it are prevented, for whatever reason, from finding out what they need to know, they fall back on what they are able to know.
But my guess – and, not having done the research, I put it no higher than that – is that the principal reason is that the research and practitioner communities don’t talk to each other very much.
About a year ago, Alliance had a special feature entitled ‘Philanthropy scholarship and practice – bridging the divide.’ That bridge is far from finished and, with the announcement of the imminent closure of the Erasmus Centre for Strategic Philanthropy, what there is of it may even be facing structural damage.
So should we scrap all data gathering? Of course not.
People involved in the social sector in data-starved countries probably wish they had half these problems. But before we try to bring them into the land of data-plenty, I believe we need to review what we collect and why.
In practical terms, this probably means more collaboration – or at least consultation – between practitioners and researchers.
It also means fewer misleading international league tables and more detailed, rigorous studies focused on a single country. And it means interrogating the data we get more closely, not just accepting it because it’s a number.
Does this seem obvious? Quite possibly, but it’s often worth stating the obvious, because we’re inclined to overlook it.
Andrew Milner is associate editor of Alliance. The views in this article are the author’s and don’t necessarily represent those of Alliance.