Evaluating projects

Independent evaluation involves an agreement with an outside evaluator, often a researcher or consultant, to evaluate the effectiveness of a project or service. Alongside everyday evaluation, it offers another way of assessing the impact of our work and identifying changes that could make it more effective. This topic covers some basic knowledge that can help you participate effectively in independent evaluation of health promotion projects and HIV peer navigation services.

Evaluation designs

Evaluations and research projects always begin with questions. The study design describes how the researcher or evaluator plans to go about answering these questions. Different study designs are more or less appropriate for answering different questions, and poorly chosen questions or designs can produce confusing findings.

Just as evaluations begin with questions, projects typically begin with objectives: the outcomes they are intended to achieve in the world. A very common evaluation question is: did the project achieve its intended outcomes? And a very common evaluation design is to measure how things are before and after the project, and see whether changes have occurred that can be attributed to the project's activities. This is called a 'pre/post' design.

Sometimes, projects and services aim to achieve changes that are very hard to measure. For instance, a health promotion project might aim to encourage a culture of safe sex among a particular target group, including people who are not connected with the project itself. The W3 project was created to help evaluate projects with goals like this one. Rather than measuring pre/post and attributing changes, it asks: has the project done everything it can to learn about and influence changes in its environment? It recognises that the project itself is a learning activity, and draws insights relevant to the evaluation from the project’s own activities. This is sometimes called an adaptive evaluation.

Evaluation methods

The methods used for evaluation are a lot like those used in social research. They often include:

  • Conducting interviews and surveys with workers, clients, contacts and community members
  • Using questionnaires at different points in time to capture changes
  • Counting ‘events’ (e.g. how many sessions were held) and collecting numeric data
  • Reading and analysing documents and outputs (e.g. campaigns, resources) made by the project

When you are invited to take part in an evaluation, it's a good idea to invite the evaluator to come and talk with the staff who will be involved. Understanding the evaluation design can make it easier to capture relevant information. It can also motivate staff to take part if they understand how isolated activities add up to the bigger picture.

Evaluation findings

Evaluation findings are usually published in a report and occasionally in a journal article.

It's important to understand that every project or service has significant room for improvement, often due to constraints like limited funding, the difficulty of engaging the target group, and the uncertainty involved in working with diverse and dynamic communities. It is best not to take evaluation findings as personal criticism. Rather, when room for improvement or 'opportunities for growth' is identified, there's a chance to apply for funding to try new things.