Explainer: What exactly is ‘evaluation’?

Evaluation is the attribution of value.  If you have attributed value to something, you have evaluated it.

 

Why do it?  Because without understanding and comparing the relative value of various ‘somethings’ (innovations, strategies, new or old programs, services, organisational structures or products) it is difficult to make decisions that are likely to lead to positive outcomes.  Without knowing whether a particular innovation works, and how well it works compared with an older (or newer) approach, we can do little more than take a ‘stab’ in the strategic dark.  Poorly informed choices are haphazard at best, reckless at worst.  Evaluation and strategy are intimately entwined.

 

So what is evaluation exactly?

Evaluation involves five key components:

Value + information + comparison = decision + action

 

Value

By the end of an evaluation your aim is to make a judgement as to the ‘value’ of the ‘something’ you are interested in (the innovation, strategy, new or old program, service, organisational structure or product, etc.).  Ask yourself:

 

  • Did it work?
  • Did you meet your objectives?
  • Were the outcomes sustained?
  • Was it worth the time, energy and investment?

 

(A more difficult question, beyond ‘did it work?’, is ‘why?’ or ‘how?’.  These questions are not easily addressed by the scientific approach to evaluation, but they are canvassed in later blogs.)

 

Information

In order to answer ‘did it work?’ type questions you need information.  In particular, you need data you can be confident in: data that is valid and reliable, and that indicates fairly conclusively whether a positive outcome has occurred and whether that outcome is a result of your activity (and not of something else that might have happened).

 

‘Valid’ data is accurate: it measures what it sets out to measure.  ‘Reliable’ data is consistent: measuring the same thing again yields a similar result.  According to scientific tradition, valid and reliable data are achieved by taking a systematic approach to gathering and using information (in other words, by conducting research).  A sound evaluation method will therefore help you to form judgements of ‘value’ that you can be confident in.

 

Comparison

If judgement of ‘value’ is your aim, and ‘information’ is your material, ‘comparison’ is the process.  You will have some choices here, depending on the nature of the ‘something’ you are evaluating.  But be warned: trade-offs are plentiful, particularly between confidence and applicability.  Often, the more ‘scientifically’ confident you can be in a result, the less you will know about its applicability to (diverse) ‘real life’ settings.

 

For example, the ‘gold standard’ in evaluation is experimentation.  This includes the much revered ‘randomised controlled trial’, which compares the outcomes of a group that received an innovation or intervention with those of a comparable group that didn’t.  In a laboratory you can ‘control for’ a raft of extraneous factors that might influence your outcome, increasing your confidence in the data.  But in the messy world of organisations we can rarely ‘control for’ all of the variables that might influence the outcome: regulatory changes, leadership or structural shifts, or a crisis that unfolds halfway through your evaluation period.
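
To make that concrete, here is a minimal, purely illustrative sketch (in Python, with invented numbers rather than data from any real evaluation) of the logic a trial-style comparison rests on: measure the same outcome in the group that received the intervention and in a comparable group that didn’t, then look at the difference in averages.

    # Illustrative sketch only: invented numbers showing the logic of a two-group comparison.
    # A real randomised controlled trial would also involve random assignment, significance
    # testing and confidence intervals, none of which are shown here.

    intervention_scores = [72, 68, 75, 80, 71, 77]  # outcome measure, group that received the innovation
    comparison_scores = [65, 70, 66, 64, 69, 67]    # outcome measure, comparable group that did not

    def mean(values):
        return sum(values) / len(values)

    difference = mean(intervention_scores) - mean(comparison_scores)
    print(f"Intervention group average: {mean(intervention_scores):.1f}")
    print(f"Comparison group average: {mean(comparison_scores):.1f}")
    print(f"Estimated effect of the innovation: {difference:.1f} points")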

 

More commonly, in the real world of organisations and public sector agencies, programs or projects are evaluated by looking at ‘before and after’ information: comparing measurements taken before your program or intervention with the same measurements taken afterwards.  Another common tactic is to evaluate the process used to conduct the project itself (how well did you do what you did?).  More recently there has been growing interest in outcome-based evaluation rather than process-level evaluation, but both have value.  Your choice will depend on the question you want answered (see ‘value’ above).  We cover evaluation designs in more detail in later blogs.
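
Here is an equally simplified sketch of the ‘before and after’ logic, again with invented figures: take the same measures before and after the program and look at the change, remembering that anything else happening over the same period could also explain that change.

    # Illustrative sketch only: invented before/after figures for a single service.
    # A change like this shows that something moved, but not, on its own, that the
    # program caused the movement.

    before = {"complaints_per_month": 42.0, "average_wait_days": 11.0}
    after = {"complaints_per_month": 31.0, "average_wait_days": 8.5}

    for measure in before:
        change = after[measure] - before[measure]
        print(f"{measure}: {before[measure]} -> {after[measure]} (change: {change:+.1f})")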

 

Decision

Forming conclusions as to the relative ‘value’ of a program, project or intervention will help to inform future decisions of relevance to the funder, manager, or user of the new ‘something’.  If clear value can be demonstrated, a case can be made for your program, project, intervention or service to be made more permanent with continued or ongoing funding; for a change or improvement to be made; for a new work process, policy or procedure to be adopted; or for the new ‘something’ to be spread further afield.

 

Action

If your ‘something’ worked here, perhaps it will work elsewhere?  The concept of ‘generalisation’ is relevant to researchers who seek to describe overarching principles for improvement (in short, theories) that may work in many different settings.  It is also highly relevant to managers and executive staff, who ultimately have a responsibility for achieving high-quality outcomes efficiently.  Particularly for public services, spreading innovative or streamlined improvements between jurisdictions, organisations or units can be an effective way of achieving overarching national aims, at scale.

 

This is where decision becomes action, and the cycle begins again.  As new ideas, innovations, programs, projects or services are made permanent or ‘rolled out’ to others, a new process of evaluation takes place to ensure that ‘what works’ continues to work, or that ‘what works here’ works elsewhere.  This is essentially a process of continuous monitoring and improvement.


This all sounds great in theory, right?  In our next blog on evaluation I go into further detail on the ‘how’ of evaluation (how to make comparisons you can be confident in).  Following that, I’ll move on to the trickier stuff: the messy, complex ‘black box’ of evaluation that you may never be particularly confident in but, with as much art as science, may just conquer anyway.

 

Blog by Ellen

Managing Director, Amfractus Consulting