Motivated to #ShiftThePower in nonprofit evaluations


Dana R. H. Doan


As I read GrantCraft’s latest report, “How Community Philanthropy Shifts Power: What Donors Can Do to Help Make That Happen,” I found myself scribbling down thoughts about nonprofit performance evaluations. Less than a week earlier, I had completed my first year in a doctoral program at the IU Lilly Family School of Philanthropy, after working in the nonprofit sector for almost two decades, including nearly ten years with a community philanthropy organization. The GrantCraft report, co-written by Jenny Hodgson and Anna Pond, provides well-warranted advice for funders seeking to contribute to positive development outcomes. I particularly welcome the report’s guidance on how funders can use metrics and due diligence to empower local people.

In my second semester, I spent a good chunk of my time researching past efforts to measure nonprofit effectiveness – the history, the ethical dilemmas, current trends, and ongoing challenges. I encountered a LOT of scholarship on the ineffectiveness, even harmfulness, of many (still!) common approaches to measuring nonprofit performance. As the report states, it is clear to many that institutions in positions of power, such as funders and policy makers, play a key role in enforcing or heavily influencing many of the unfortunate practices we see today in performance measurement.

Consider the dizzying array of tools and platforms designed to help donors compare or evaluate nonprofits. This is despite a half century of scholars and practitioners raising red flags about the complexity of nonprofit work and the need for more context in any determination of effectiveness. Such cautions are regularly swept aside in the popular pursuit of standardized, quantifiable, and comparable measures. Aside from the fact that these tools and platforms prioritize efficiency at the expense of impact, they raise an important ethical question: Who decided what is (and what is not) to be measured?

According to principal-agent theory, a more powerful “principal” can influence a less powerful “agent” to act against its best interest[1]. Applying this theory to development, a community philanthropy organization (CPO) must guard against playing the role of agent to its funders and principal to its grantees. Hodgson and Pond’s guidance alludes to principal-agent theory, calling on funders to think about power dynamics when establishing values, determining metrics, and communicating with grantees. I appreciated the language they use in the report, replacing power-laden terminology, such as beneficiary and downward accountability, with constituent and outward accountability.

Funders need to think about the incentive structures they have helped create – intentionally or not – for their nonprofit partners. When a key goal is to build relationships, shift power, and promote collaboration, quantifiable outputs, speed, and efficiency are not likely to be the right indicators. In fact, those indicators can be detrimental to the long-term goals. The reality is that, “not everything that can be counted counts, and not everything that counts can be counted[2].”

One approach to performance measurement that is gaining adherents is Constituent Voice™, which focuses on ensuring accountability to the individuals and communities nonprofits are meant to serve. Some practitioners and funders are forming collectives to test its potential. One example is the Resilient Roots initiative, coordinated by CIVICUS with technical support from Keystone Accountability and Accountable Now. The initiative aims to study the resilience of nonprofits that are accountable and responsive to their primary constituents. Meanwhile, in the United States, the Fund for Shared Insight’s Listen for Good initiative is working with US-based human service organizations (and their funders) to set up constituent feedback loops.

During GEO’s 2018 National Conference held last month in San Francisco, Valerie Threlfall, Director of Listen for Good, shared preliminary findings from the constituents of 46 nonprofits in their first cohort. When that feedback data was disaggregated, there were notable differences in satisfaction ratings across age groups, gender, and racial identity. Specifically, adults, females, and Caucasians reported higher satisfaction levels compared with youth, non-female constituents, and constituents of color.

Listen for Good’s findings mirror scholarly research in the Public Administration discipline, which reveals differences in levels of constituent satisfaction with public services across gender, race, and location. Scholars compared constituent satisfaction with administrative records of service outputs and efficiency (e.g., number of people served, number of issues resolved, cost per unit served). When satisfaction and service levels converged, scholars generally trusted the data; however, a debate would emerge whenever satisfaction and service levels did not correlate. While some scholars tried to discredit constituent feedback as unreliable or biased, others showed that constituent feedback can reveal important information not captured by allegedly “objective” data. Objective data is not designed to uncover contextual information, such as differences in culture, values, life experience, and expectations, which is critical to understanding outcomes.

“…official performance measures … tend to be labeled as objective simply because they reflect the perspective of administrators as opposed to citizens… what’s the difference between expert (agency) and citizen feedback? … the distinction is actually between measures developed by a relatively small group of experts vs. individual judgements of large numbers of citizens.”[3]

Reading the GrantCraft report on community philanthropy got me thinking that while constituent voice holds great potential to provide agency, it can also be manipulated by people in positions of power. It requires guiding principles and examples, such as those laid out in the report by Hodgson and Pond, and in Keystone’s Ethical Framework for Feedback Exercises. Only in that way will we be able to design better methods for collecting, learning, improving, and reporting in ways that promote responsiveness, equity, and agency for constituents, reliable data to inform decisions by nonprofit staff, and assurances to funders. As Hodgson and Pond indicate, a movement to prioritize feedback that empowers will require “A blending of systems… A loosening of the reins… A shift in power.”

Dana R.H. Doan is a doctoral student at the IU Lilly Family School of Philanthropy and an advisor to the LIN Center for Community Development.


  1. ^ Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74.
  2. ^ Cameron, W. B. (1963). Informal sociology: A casual introduction to sociological thinking. New York: Random House, p. 13.
  3. ^ Schachter, H. L. (2010). Objective and subjective performance measures: A note on terminology. Administration & Society, 42(5), 550–567.
