Participatory evaluation

Ricardo Wilson-Grau

I evaluate the programmes of organizations that are committed to social change: innovating, modifying systems, transforming structures and relationships of power – no less! They want to understand the impact of their activities, but accept that the time lag between action and effect, and the extreme difficulty of attribution, make it almost impossible to measure enduring, sustained changes in the lives of people or the state of the environment. We therefore evaluate the results that are upstream from impact – outcomes.

The 1,800 members of the Global Water Partnership (GWP), for example, aim to improve integrated water resources management in 60 countries to alleviate poverty and contribute to sustainable development. The GWP process involves many actors from government, business and civil society at global, regional and national levels whose relationships are constantly changing. Moreover, the many variables – economic, political, social, technological and ecological – to which the global, regional and country water partnerships are subject mean that influences outside GWP’s work may have as much effect on results as those within it. Its ultimate goals will take decades to achieve.

GWP’s contribution to outcomes is therefore one of non-linear causality: however well thought out the original objectives and the plans to achieve them, GWP can at best control its own activities, and often these too are modified by external circumstances. Outcomes and impacts are largely uncontrollable and unpredictable. We therefore use a definition of outcome developed by Canada’s International Development Research Centre: changes in the behaviour of other social actors that together represent a pattern of change towards the sustainable use of water. It is only when people or institutions change their behaviour that changes in society or the environment will be achieved. These results are beyond GWP’s control but within its influence.

A traditional way to evaluate a programme to provide communities with cleaner water by installing purification filters would be to count the filters installed and measure subsequent changes in water quality. An evaluation that focuses on changes in behaviour begins from the premise that water does not remain clean unless people work to keep it that way. It therefore asks whether those responsible for water purity helped create the right laws and regulations, built community awareness and understanding, and used appropriate means to monitor and maintain that purity.

The evaluation answers three questions about change:

• The outcomes: who changed what, when and where? We identify the specific, verifiable actions in communities that are a result of GWP’s work, described so that they are intelligible to third parties.

• What is the significance of each outcome?

• What facet of GWP’s work influenced each outcome? How?

Two additional questions are often answered as well:

• Is there any evidence of the outcomes contributing to sustained improvement in the lives of people or their environment?

• What lessons have been learned that could improve GWP’s intervention?

The methodology I use draws on Michael Quinn Patton’s Utilization-Focused Evaluation. The primary intended users of the evaluation’s findings are key participants in it. In GWP’s case, these are the staff based in the global secretariat in Stockholm and in the 13 regional and 60 country secretariats, plus the representatives of government agencies, businesses and CSOs that form the water partnerships. We invite these users to clarify the intended uses of the evaluation. These commonly include:

• enhancing shared understanding among GWP partners and informing other stakeholders;

• supporting and reinforcing GWP’s ongoing work;

• increasing engagement, self-determination and ownership by users;

• fostering an evaluation culture in GWP;

• building the capacity of users;

• enhancing communication among users and others connected to the programme.

Once users are engaged, they continue to be as involved as they wish to be in designing, collecting and analysing data, drawing conclusions and formulating recommendations for action.

Whether an evaluation is formative or summative, I find that the greater the involvement of users, and the more we evaluators serve as facilitators in a joint inquiry rather than as experts, the better the evaluation will be. Perhaps most importantly, through their participation, users develop the understanding and commitment to implement the recommendations of the evaluation.

Ricardo Wilson-Grau is a consultant specializing in evaluation. Email ricardo.wilson-grau@inter.nl.net

For more information
http://www.gwpforum.org

