Wright, N. (2004) ‘Learning & Development Impact Evaluation’, British Journal of Occupational Learning, Institute of Training & Occupational Learning, Vol. 2, No. 2, December, pp. 63–66.
Organisations exist only to add value to key stakeholders. What is the value that L&D adds to stakeholders, and how do we measure it? This has been a vexed question for the Learning & Development Team in Tearfund, a Christian international development and relief organisation. We want to ensure that the organisation’s resources are being used to maximum positive effect but have sometimes lacked confidence in the mechanisms we use to evaluate the impact of our learning interventions.
We commissioned Professor Andrew Mayo (www.mliltd.com) to help us address these questions practically through a 2-day workshop in March 2004. I am indebted to Andrew for his wise input and advice. The notes below represent a combination of Andrew’s insight shared at the workshop and my own experience of this arena.
Firstly, it is important to beware of ‘subtracted value’, that is, negative return on investment. The fact that interventions work doesn’t necessarily mean that they were the most important or even the best ways to tackle things. Different stakeholders have different priorities. Stakeholders from different cultures, for instance, consider different things as adding value. It is necessary, therefore, to identify key stakeholders, ask what each wants L&D to achieve for it and decide how to measure results accordingly.
I have found that it is easy to get carried away with idealistic visions of what is possible, especially when personal vision is combined with a desire to impress business partners. This can lead to much agonising, hand-wringing and wasted resources when trying to substantiate the impossible. Be pragmatic and realistic with stakeholders: ‘We cannot do that, but we can do this, this and this.’
We need to consider from the outset not only what each stakeholder wants from an intervention but what we will need from each stakeholder to ensure its success. ‘We are going to commit significant organisational resources to this initiative and this is what we will need from you to make that investment worthwhile.’ The L&D Team may introduce new service level agreements that list respective responsibilities and commitments in very specific terms.
One of the best ways to move senior managers from passive acceptors to passionate advocates of L&D is to demonstrate its tangible results and benefits. This is one of the tactical goals of impact evaluation. You may, for instance, substantiate your business case for L&D by showing how it can be used to fix things that are going wrong from the point of view of key stakeholders. ‘If we were to invest in this, this is what we could prevent or achieve.’ ‘This would release others to use their resources more efficiently.’
Start from the business case and work backwards to potential solutions (e.g. ‘We have a problem that is costing us…that could be solved by…’) rather than coming up with good ideas, apparently out of the blue (e.g. ‘Wouldn’t it be nice to have coaching?’), and expecting management buy-in. Tap into managers’ own motivators, e.g.
‘What is the problem that you are wanting to solve?’ ‘What kind of problems do you envisage encountering when you try to implement this?’ ‘How would you really like things to be and what would help to make that happen?’
If organisational goals are not expressed in measurable terms, it will be difficult to know or demonstrate what impact L&D interventions have had on them. Agree with key stakeholders at the outset what will be evaluated and how. ‘This is how we intend to tackle this programme to achieve the impact you desire, but we will need your support with seeing whether it has worked.’ Do not attempt to apply positivistic, quantitative measures arbitrarily where they really are inappropriate (e.g. for emotional intelligence or spiritual development).
The resulting overall L&D strategy should be a corollary of L&D process plans agreed with each key stakeholder, including impact objectives, interventions, learning processes and evaluation/reporting mechanisms.
L&D is almost always concerned implicitly if not explicitly with culture change. Consider which aspects of the organisation’s culture support or inhibit learning. I have noticed that L&D is often regarded as counter-cultural and that some stakeholders can feel threatened by culture-change implications. In Tearfund, we are considering whether to negotiate and add explicit culture-change objectives to our L&D plans and objectives.
Do not try to evaluate everything in depth. The level of evaluation should reflect the level of benefit you want or need to derive from it. Evaluation has its own costs, e.g. time, finance, opportunity. It can also place strain on stakeholder relationships, e.g. ‘Do we really have to fill in one of these forms for everything we do?’ Be careful to ensure that the benefits of evaluation will outweigh the costs associated with it.
Cost-benefit equations for L&D interventions are often difficult to establish. The costs are relatively easy to assess (e.g. finance, time), the overall benefits much harder. Where feasible, it is very helpful to establish binary measures, i.e. something either happens or does not happen as a result of the intervention.
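A binary measure of this kind lends itself to very simple arithmetic: across a group of participants, impact can be expressed as the proportion for whom the outcome did or did not happen. The sketch below is purely illustrative; the function name `binary_impact` and the sample outcome are my own hypothetical examples, not anything prescribed by Mayo or Tearfund.

```python
# Hypothetical sketch: a binary outcome either happens or it does not,
# so group-level impact reduces to a simple proportion of participants.
def binary_impact(outcomes):
    """outcomes: list of True/False flags, one per participant."""
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

# e.g. 'completed the new appraisal form within a month of the course'
followed_up = [True, True, False, True]
print(f"{binary_impact(followed_up):.0%} of participants showed the outcome")
```

The benefit side of the equation remains a judgement call; the code only makes the ‘happened / did not happen’ tally explicit.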
Kirkpatrick’s model of evaluation provides a helpful framework. His 4 levels are: (1) customer satisfaction, (2) evidence of learning, (3) behavioural change in the real-work environment and (4) resulting business benefit. As a general rule of thumb, Mayo advises always to evaluate learning at Levels 1 and 2 but to plan to do additional evaluation at Levels 3 or 4 if certain conditions apply, e.g.
- There is a formal requirement (e.g. legal, policy or funding).
- Learning outcomes are critical to organisational strategy.
- The benefits of evaluation will outweigh the process costs.
In Tearfund, we evaluate Level 1 by using feedback forms that include an overall customer satisfaction rating. We collate these scores and express them as a percentage to track our general performance. We now insist, too, that trainers always include Level 2 assessment in all programmes: ‘What will you do to assess participant learning?’ L&D team staff will attend certain courses to ensure that this happens and are currently devising Level 2 guidelines to assist this process.
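Collating satisfaction ratings into a single percentage might look something like the following. This is a minimal sketch under my own assumptions: a 1–5 rating scale and the function name `satisfaction_percentage` are illustrative, not Tearfund’s actual scoring scheme.

```python
# Hypothetical sketch: turn overall satisfaction ratings from feedback
# forms (assumed 1-5 scale) into one percentage for performance tracking.
def satisfaction_percentage(ratings, max_rating=5):
    """ratings: list of numeric scores, one per feedback form."""
    if not ratings:
        return 0.0
    return 100.0 * sum(ratings) / (len(ratings) * max_rating)

course_ratings = [4, 5, 3, 4, 5]
print(f"Overall satisfaction: {satisfaction_percentage(course_ratings):.0f}%")
```

Tracked over successive courses, the same figure gives a rough trend line for Level 1 performance without any extra data collection.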
One of the most practical and straightforward ways of determining Level 3 impact is to ask participants and line managers at some point after an event, ‘How much of what has changed could you attribute to event X?’ Alternatively, ‘Tell me about how you have handled (for example) conflict differently as a result of training.’ Although this type of feedback is more intuitive than scientific, we are finding that most people are able to make a reasonably reliable judgement.
Another method of Level 3 evaluation is to get people to agree to specific actions at a learning event and then check afterwards whether they did them. ‘I want you to be able to tell me in three months’ time what has changed for you as a result of this training, so please do reflect on it from time to time as things progress.’ Alerting participants in advance in this way helps to keep learning and subsequent impact at a conscious level. This type of discussion-based evaluation is also less cumbersome and bureaucratic than traditional form-based methods.
I have found that, in practice, tracking impact at Level 4 can feel like tracing a path through woodland that becomes increasingly ill-defined. So many factors can compound the learning impact equation, especially over time, that proving direct cause-and-effect relationships between interventions and outcomes feels almost impossible. Do not be too ambitious, therefore, about setting Level 4 goals when there is virtually no way that you will be able to substantiate them. Measure what can be measured and leave it at that.
L&D is often best described in terms of correlation rather than cause and effect, e.g. ‘This form of intervention will, among others, increase the possibility or probability of a certain outcome being achieved’, or ‘A will contribute to B’ rather than ‘A will necessarily result in B’. It is a matter of professional judgement, and the L&D professional needs to avoid being trapped in simplistic measures that lack both validity and integrity.
The stronger the link between L&D interventions and desired learning outcomes at the planning stages, the easier it will be to evaluate afterwards. If you are clear about what you expect to see as a result of an intervention, and that result is clearly achieved, do not waste valuable time and energy conducting further in-depth evaluation to prove the case. When agreed linkages between L&D interventions and desired outcomes are tight, even Level 1 & 2 results can be extrapolated as probable influences of outcomes at Levels 3 & 4. This is a ‘reasonable assumption’ principle.
Insofar as group evaluation is concerned, Mayo comments that, ‘collective subjectivity may border on objectivity’. In other words, if you get consistent feedback from a wide range of participants in an event, you probably do not need to use additional sophisticated methodologies.
A simple and practical framework for assessing capability before and after an intervention is to map capabilities against the following basic categories:
A  Aware (‘I know what this is’)
B  Basic (‘I can do this with support’)
C  Competent (‘I can do this well in my own job’)
D  Distinguished (‘Others look to me for input on this’)
E  Expert (‘I write/speak on this externally’)
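Because the categories are ordered, a before-and-after assessment can be expressed as movement along the scale. The sketch below assumes this A–E ordering; the function name `capability_shift` is my own illustrative label, not part of the framework itself.

```python
# Hypothetical sketch: the A-E capability scale as an ordered list, so a
# before/after assessment shows movement along the learning continuum.
SCALE = ["A", "B", "C", "D", "E"]  # Aware, Basic, Competent, Distinguished, Expert

def capability_shift(before, after):
    """Steps moved along the A-E scale (negative would indicate regression)."""
    return SCALE.index(after) - SCALE.index(before)

# e.g. a participant rated Basic before the programme and Competent after
print(capability_shift("B", "C"))  # one step along the continuum
```

Mapping a whole cohort this way gives a simple picture of how far learning has progressed without pretending to more precision than the categories carry.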
Effective learning is concerned less with the impact of one intervention and more with a chain of inputs from various stakeholders that form a learning process. In this respect, personal development plans are better considered as ongoing learning plans. Set learning objectives at a realistic level relative to (a) where participants are now, (b) the level of input to be provided and (c) realistic opportunities for application.
Decide also whether you intend to measure whether learning has taken place (i.e. participants have moved along a learning continuum) or whether a standard baseline of competence has been achieved. In Tearfund, we provide a combination of open, exploratory interventions alongside formally accredited programmes. We are, therefore, learning to ensure that our own forms and levels of evaluation are tailored appropriately to respective learning goals.