How to Evaluate a Project

Choosing the right evaluation design

The world of project and program management looks neat and organised on paper, but in reality it’s often a bit of a mess. Being part of the mess is one thing; making sense of it afterwards, and being able to answer the questions ‘did it work?’ and ‘why?’, is another thing altogether. So, how do you evaluate a project?

This blog follows on from our previous explainer, describing what evaluation is and what evaluation is trying to achieve. Here, we focus on some of the key approaches to evaluation – from the strictly scientific to the more practice-focused.

Our next few blogs in the evaluation series start to put all of this theory into action and offer some tips, tricks and tools.

This blog assumes that you are planning your evaluation approach before your project begins. This is the ideal scenario. But there are other approaches that you can take if you are trying to evaluate a program after it has already finished.

Approaches to Evaluation

Experimental designs are more common in the health and science industries (for example, testing a new pharmaceutical product), while the ‘softer’ observational designs tend to be used in human services or organisational settings (for instance, evaluating a hospital improvement program).

Experiments

Like the science experiments we all did at school, experiments tend to take place in a laboratory environment or under strictly controlled conditions. They seek to work out (evaluate) whether one or more defined factors bring about a particular effect or outcome. For example, in a laboratory, we can be sure that when we add magnesium to hydrochloric acid, we create hydrogen gas. We can also ‘control’ for other variables, like temperature or humidity, to ensure that these factors don’t interfere with our results, or we can vary temperature or humidity to work out how they affect the results (if at all).

Experimental scientists talk a lot about biases – the things that may accidentally affect your outcome without you knowing it. There are many potential biases, and they apply to project and program evaluation too. We will touch on a few of them later in this blog, and another blog will explain them in more detail.

How does ‘experimentation’ relate to your project or program evaluation?

Experiments are the ‘gold-standard’ approach to evaluation. But unlike in the laboratory, it is much harder in social settings, like health care organisations, to isolate the variables that caused a particular outcome. In organisational ‘open systems’ (as opposed to the ‘closed system’ of a laboratory), many complex, hidden and potentially influential factors may operate simultaneously to produce a particular outcome, including factors that are unique to that organisational environment.

These might be: the quality or style of leadership, financial pressures, organisational culture, or prior experience with the project management or improvement methodology. Any one of these things may have caused or changed the particular outcome we got. And unlike ‘temperature’ or ‘humidity’, they are very difficult to measure and ‘control’. This means that for projects or programs, we cannot isolate the variable of interest (the project intervention) from other organisational peculiarities, and so it is often not possible to use experimental evaluation designs to assess whether the intervention ‘works’.

Additionally, unless you are a research scientist and have access to specific facilities and ample resources, it is unlikely that you will be able to design and conduct an experiment for your project or program evaluation. This is because ‘controlling’ for all of the possible variables that may impact your project intervention is expensive.

But the take-home message is this: it is useful to think about the features of good experimentation and to bring as many of those practices as you can into your approach, irrespective of which evaluation design you go with. Essentially, the idea is to avoid as many research biases as possible. This will allow you to be more confident in your results.

Quasi-experiments

A quasi-experimental design seeks to test whether an intervention achieves its objectives, as measured against criteria set prior to the intervention taking place. It is the most ‘scientific’ of the non-experimental designs.

One model for a quasi-experimental design is called a ‘before and after’ study. A ‘before and after’ design takes a measurement of the desired outcome (for example, in a hospital, the average time it takes for a patient to be seen in the emergency department) ‘before’ an intervention is introduced and compares this with the same measurement taken ‘after’ the intervention (which, we hope, shows a reduced waiting time).
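
If it helps to picture this, here is a minimal sketch in Python of what a ‘before and after’ comparison boils down to. The waiting-time figures are invented purely for illustration:

```python
# Minimal 'before and after' comparison of emergency department waiting times.
# All figures (in minutes) are invented purely for illustration.
from statistics import mean

before = [62, 58, 71, 65, 60]   # waiting times sampled before the intervention
after = [51, 49, 55, 47, 52]    # waiting times sampled after the intervention

print(f"Average wait before: {mean(before):.1f} min")
print(f"Average wait after:  {mean(after):.1f} min")
print(f"Change:              {mean(after) - mean(before):+.1f} min")
```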

Threats and biases

Like all study designs, ‘before and after’ designs are not perfect, and they are vulnerable to a number of threats and biases. A key issue is a ‘history threat’ – that is, when one or more events not related to the intervention itself occur between the ‘before’ and ‘after’ measurements and affect the result. For instance, a change in management personnel or an infectious disease outbreak between the two measurements could influence the outcome, making it harder to evaluate whether the change (or lack of change) was due to the project intervention or to other factors.

To overcome history threats, another evaluation design might help, although it is a little more resource-intensive. A ‘time series analysis’ design measures the outcome variable (e.g. waiting time) at least three times before the intervention and at least three times after it. This helps to smooth out the impact of any unusual events that take place between measurements. Again, however, other ‘threats’ may creep into the results. One example is the ‘Hawthorne effect’: the tendency for people to change their behaviour because they know they are being observed, rather than because of any lasting change. This is a common problem for project interventions.
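
Coming back to the time-series idea for a moment, the sketch below (again with invented figures) shows why taking several measurements on each side of the intervention helps. A single unusual month, such as an outbreak just before the change, gets averaged out rather than driving the whole comparison:

```python
# A simple 'time series' style comparison: at least three measurements on each
# side of the intervention. All figures are invented purely for illustration.
from statistics import mean

# Monthly average waits (minutes); the intervention was introduced at the end of March.
pre = {"Jan": 63, "Feb": 60, "Mar": 78}   # March spike caused by an outbreak, not the project
post = {"Apr": 54, "May": 52, "Jun": 50}

print(f"Pre-intervention average:  {mean(pre.values()):.1f} min "
      f"(single months ranged {min(pre.values())}-{max(pre.values())})")
print(f"Post-intervention average: {mean(post.values()):.1f} min")
```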

This may be a problem for science and research, but if the Hawthorne effect does help to produce lasting change in behaviour, this is not such a problem for the practitioner. In fact, sometimes the only intervention necessary to start changing workplace behaviour is to create a data feedback loop, where colleagues regularly share and review data. As measurement is a necessary part of research, the line between research and intervention can sometimes be blurry.

There are many more potential threats to an evaluation design, which will be covered in another blog soon.

Action research

Action research designs are less ‘scientific’, but they may suit a practitioner’s needs far better. There is far less emphasis on having confidence in our ‘measurements’ and on controlling for factors. Instead, the emphasis is on pragmatically creating a positive result. One of the unique characteristics of action research is that it attempts to evaluate a project intervention in a way that allows for ‘real time’ refinement and improvement of that intervention during implementation (rather than after implementation). In an experiment, you can’t decide half-way through to vary the amount of magnesium you add to hydrochloric acid, or to suddenly turn up the heater. This would reduce confidence in your results.

But in action research, conducted in social settings, it is often necessary to vary aspects of your project in order to keep the project rolling. For instance, you may find that participants in your project work group are not engaging well with email communications, and so you might choose to vary your method of communication. You could go meet with each of the group members face to face, or give them a phone call to touch base and hear any concerns. Action research values the use of collaboration, broad participation and feedback loops throughout the project. This is a bit like a ‘Plan Do Study Act’ cycle for evaluation.
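
To show the shape of that idea, here is a rough Python sketch of an action-research style loop, in the spirit of ‘Plan Do Study Act’. The function names (measure_outcome, adjust_intervention) are hypothetical placeholders for whatever measuring and refining your own project involves:

```python
# Rough sketch of an action-research / 'Plan Do Study Act' style loop.
# measure_outcome and adjust_intervention are hypothetical placeholders for
# whatever measurement and refinement steps your own project uses.

def run_cycles(measure_outcome, adjust_intervention, target, max_cycles=4):
    """Repeat study-and-adjust cycles until the target is met or the cycles run out."""
    for cycle in range(1, max_cycles + 1):
        result = measure_outcome()            # Study: look at what the data says right now
        print(f"Cycle {cycle}: outcome = {result}")
        if result <= target:                  # e.g. waiting time at or below the target
            print("Target reached - hold the gains and keep monitoring.")
            break
        adjust_intervention(result)           # Act: refine the intervention in 'real time'

# Toy example: a made-up waiting time that improves by 5 minutes each cycle.
state = {"wait": 65}
run_cycles(measure_outcome=lambda: state["wait"],
           adjust_intervention=lambda r: state.update(wait=r - 5),
           target=50)
```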

Action research may draw on elements of ‘quasi-experimental’ designs (quantitative metrics) and ‘naturalistic’ designs (qualitative and experiential data) to form a pragmatic model of evaluation, depending on the nature of the intervention under study. Naturalistic methods may involve talking to people about their experiences, or simply sitting and watching what happens within the project environment.

Although many researchers regard these approaches as ‘less scientific’, there are often valuable insights that you can draw from these more ‘subjective’ methods, and sometimes these observations are vital to the success of your project. They may give you clues about what to measure quantitatively later, or they may help you to interpret why a particular process is functioning as it does. These ‘how’ and ‘why’ questions are more difficult to answer quantitatively (for instance, using a time series analysis), and so it is often useful to use both quantitative measurements (e.g. emergency department waiting times, or surveys) and qualitative measurements (e.g. observations, interviews or focus groups) within projects and interventions in organisational or human services settings.

Blog by Ellen

Managing Director, Amfractus Consulting