As is clear from this issue of Alliance, impact evaluation is a growing preoccupation in the non-profit world. To get a wider perspective on how it is viewed and how well it’s working, during September and October Alliance and Keystone conducted an online survey to get the opinions of those on both ends of evaluation, donors and grantees.
The results suggest a broad acceptance of the importance of evaluation by donors and grantees alike, though neither group seems to feel it is contributing all that much to grantee effectiveness. This presents donors with a huge opportunity – to do it better and realize its full potential. If evaluation were properly funded, and if donors did more to follow up the findings, it could make all the difference.
Another perhaps surprising finding was that the views of those on either side of the fence are not as polarized as might be imagined – though each has the unsurprising tendency to see themselves in a better light than the other group sees them in.
Just under 300 people responded to the survey, 226 of them grantees, representing most types and sizes of CSO; the remaining 72 were donors (a few fell into both categories, but chose to answer as either one or the other). It should be noted that the survey had no pretensions to being scientific. It was widely advertised and open to all; those who responded were those who wanted to. The donors who took part, therefore, were not necessarily funders of the grantees who took part, which means that respondents are not talking about exactly the same experience of evaluation. This caution should be borne in mind when reading what follows, and in particular when comparing donor and grantee responses.
The purpose of evaluation
There was a fair measure of agreement on both sides as to what evaluation is for – to understand the difference that grantees make as a result of a grant. Donors also seek to understand the long-term changes that grantees are making, their influence on others, what the grantees are learning, and the actual activities and outputs that the grant has underwritten. More than 80 per cent of the grantees said that they and their donors agree on the definition of success before they receive their grants.
On the question of what information is collected, there were a number of discrepancies between the two camps. Whatever information grantees collect, they say they are more likely to track the changes voluntarily than to respond to a requirement from a donor. Strangely, of the two groups, the donors more often reported that they require grantees to track specific information. While 72 per cent of donors say that they always require grantees to track changes in output and activities, only 59 per cent of grantees reported that donors make this a requirement.
Donors apparently think they are far more flexible than grantees believe them to be. When asked whether they would adjust reporting requirements in order to make a grantee more effective, 62 per cent of donors said yes, but only 24 per cent of grantees think donors would actually make these adjustments.
There was also a discrepancy in the view of the two camps on the reporting of problems. While a majority of donors surveyed think that grantees feel comfortable in reporting problems, grantees are more likely actually to report them than donors think they are. Seventy-six per cent of grantees claim, confidently or reluctantly, to report problems, whereas 67 per cent of donors believe they do.
The burden on grantees
Another piece of received wisdom is that reporting is generally seen by grantees as onerous. However, the vast majority of both donors (83 per cent) and grantees (80 per cent) see evaluation reports as not overly distracting, at least in terms of the time they take to produce. Interestingly, the donors seem more likely to see the reports as burdensome than the grantees.
The financial burden on grantees is another matter – and this relates to one of the key findings of the report. A significantly higher percentage of donors than of grantees believe evaluations to be adequately funded – 29 per cent as against only 10 per cent. The most striking point here, however, is that a large majority on both sides believe that evaluations are not properly funded. If donors think evaluation is important, why don’t they fund it?
How useful are the reports?
There are also some discrepancies between the views of the two sides when it comes to following up the results of evaluations. Almost a third of the grantees say that donors don’t follow up, while only 5 per cent of donors think that they don’t. While 80 per cent of donors claim to discuss the reports with their grantees, only 56 per cent of grantees say they discuss the reports with their funders. Donors are also more likely than grantees to say that they (the donors) use the evaluation to improve their own effectiveness and to learn how reporting enhances grantees’ effectiveness. As to publicizing the results of impact evaluations, a far greater proportion of grantee respondents believe they always do this than donors are prepared to give them credit for.
Perhaps surprisingly, views on the effectiveness of reports are much more consonant. A similar proportion of both groups think it is useful to collect specific data on specific impacts, but donors have a slightly more positive view of the overall effect on grantee effectiveness than the grantees themselves.
Only 23 per cent of donors require external evaluations at least half the time, and only 38 per cent of the grantees reported having undergone an external evaluation with the donor they referred to throughout their responses. The reason for these relatively low figures may well be that, according to the survey, neither group finds the process very useful. Only around half of the respondents in both groups feel that external evaluations are ‘somewhat’ useful. While the donors consider themselves quite flexible when it comes to setting the terms of external evaluation – naming the evaluator, defining the terms of reference, and adjusting these to meet revised objectives – the grantees’ view is slightly less favourable.
There is also a big discrepancy in views on the design of external evaluations. All the donors reported that their external evaluations are forward-looking, designed to highlight what works, what to sustain and what to change in the future, but only two-thirds of the grantees agree with this. Some expressed frustration with a lack of understanding on the part of the evaluator, one remarking that ‘External evaluators need to understand an organization’s objectives and why they do things the way they do before making assumptions.’ Another wrote, ‘External evaluations are done using a quick method, and thus use tools which are not always adequate and appropriate to capture the range of impact, especially the indirect impacts.’
It appears that donors and grantees alike understand the reasons for evaluation and pretty much agree about what information should be collected and reported. Neither donors nor grantees find the evaluation process overly burdensome, but even the donors agree that they don’t provide sufficient funds for grantees to cover the costs. In many instances, as we have seen, the perceptions on both sides about how well they do things are more favourable than those expressed by the other side.
This is particularly evident in the area of follow-up. While donors think they do a better job of engaging the grantee and learning from the report once it is finished than the grantees think they do, grantees claim they publicize evaluation findings more frequently than donors think they do. While it is not surprising that each side should see what it does more favourably than the other, the discrepancy in these responses suggests the need for better communication between donors and grantees.
Finally, the verdict on the usefulness of evaluation is a mixed one on both sides. While we did not find the disgruntled group of grantees one might have expected, nor did we find complete satisfaction among either group of respondents. That said, the majority on both sides seem to feel that it has its uses.
It seems that donors may be missing a trick here. On the showing of the survey, grantees are generally willing participants in evaluations, yet donors do not invest in them sufficiently. Nor do they make good use of the opportunities for learning they offer. In short, they don’t take them seriously enough. Evaluation, as practised, may have its limitations, but greater conviction among funders would surely help.
Betsy Schmidt is President of Southpoint Social Strategies, a management consulting firm for non-profit organizations and their communities. Email email@example.com
David Bonbright is Chief Executive of Keystone. Email firstname.lastname@example.org