L&D impact evaluation
Notes from a training workshop with Professor Andrew Mayo, 2004
Evaluation in Context
Organisations exist only to add value to key stakeholders. What is the value that L&D adds to stakeholders and how do we measure it?
Beware of ‘subtracted value’!
Different stakeholders have different priorities. Stakeholders from different cultures, for instance, will consider different things as adding value. Identify key stakeholders, ask what each wants L&D to achieve for it and decide how you will measure results accordingly.
Be careful to consider from the outset not only what each stakeholder wants from an intervention but what you will need from each stakeholder to ensure its success. ‘We’re going to commit significant organisational resources to this initiative and this is what I’ll need from you to make that investment worthwhile.’
Employees are key stakeholders. Investment in personal development adds value to employees. It is, therefore, a valid goal – among others. If we don’t add value to employees, they will become demotivated or deskilled, or they will leave.
Be pragmatic and realistic with stakeholders: ‘We can’t do this but we can do this and this and this.’
One of the best ways to move senior managers from passive acceptors to passionate advocates of L&D is to demonstrate its tangible benefits. This is one of the tactical goals of impact evaluation.
You can substantiate your business case for L&D by showing how it can be used to fix things that are going wrong (e.g. waste of project resources) from the point of view of key stakeholders. ‘If we were to invest in this, this is what we could prevent or achieve.’ ‘This would release others to use their resources more efficiently.’
Start from the business case and work backwards to potential solutions (e.g. ‘We have a problem that is costing us…that could be solved by…’) rather than coming up with good ideas (e.g. ‘Wouldn’t it be nice to have coaching?’) and expecting management buy‐in.
Tap into managers’ own motivations, e.g.
‘What is the problem that you’re wanting to solve?’
‘What kind of problems do you envisage encountering when you try to implement this?’
‘How would you really like things to be and what would help to make that happen?’
If organisational goals are not quantitative, it will be difficult to know or demonstrate what impact L&D interventions have had on them.
Agree with key stakeholders (e.g. Group Leaders) at the outset how an intervention will be evaluated. ‘This is how we intend to tackle this programme to achieve the impact you desire but we’ll need your support with seeing whether it has worked.’
The L&D Team Plan should be a corollary of L&D process plans agreed with each Group Leader, including impact objectives, interventions, learning processes and evaluation/reporting mechanisms.
Evaluation in Practice
Consider which aspects of the organisation’s culture support or inhibit learning. L&D should conduct systemic analyses (e.g. who and what influence what) in order to target interventions effectively.
L&D is almost always concerned, implicitly if not explicitly, with culture change. L&D should consider adding explicit culture‐change objectives to its plans, e.g. for events.
L&D is often regarded as counter‐cultural. People in powerful management roles can sometimes feel threatened by L&D’s culture‐change implications.
Key questions for L&D evaluation include:
What is the relationship between individual and organisational development?
How can we establish causal relationships between what we do and what results?
How can we ensure that evaluation is focused on the most important areas?
How can we demonstrate added value to different stakeholders?
Don’t try to evaluate everything in depth. The level of evaluation should reflect the level of benefit you want or need to derive from it. Evaluation has its own costs, e.g. time, finance, opportunity. Be careful to ensure, therefore, that the benefits of evaluation will outweigh the costs associated with it.
Cost‐benefit equations are difficult to establish in the field of L&D. The costs are relatively easy to assess (e.g. finance, time); the overall benefits are much harder. Where feasible, we need to establish binary measures: something either happens or does not happen as a result of the intervention.
Evidence of learning can be demonstrated in a number of different areas, e.g.
Know‐how (business and professional)
Know‐what (competences and behaviours)
Know‐who (internal and external networks)
Kirkpatrick’s model of evaluation provides a helpful framework. His 4 levels are: (1) customer satisfaction, (2) evidence of learning, (3) behavioural change in the real‐work environment and (4) resulting business benefit.
As a general rule of thumb, always evaluate learning events at Levels 1 and 2 but plan to do additional evaluation at Levels 3 or 4 if certain conditions apply, e.g.:
There is a formal requirement (e.g. legal, policy or funding)
Learning outcomes are critical to organisational strategy
The benefits of evaluation will outweigh the costs.
Insist that trainers always include Level 2 assessment at the end of programmes. “What will you do to assess participant learning?”
One of the most practical and straightforward ways of determining Level 3 impact is to ask participants and line‐managers at some point after an event, “How much of what has changed could you attribute to event X?” Alternatively, “Tell me about how you have handled conflict differently as a result of training.”
Another method of Level 3 evaluation is to get people to agree to specific actions at a learning event and then check afterwards whether they did them. ‘I want you to be able to tell me in 3 months’ time what has changed for you as a result of this training, so please do reflect on it from time to time as things progress.’
Tracking impact from Levels 1 to 4 can be like tracing a path through woodland that becomes progressively less well‐defined. Don’t be too ambitious about setting Level 4 goals when there is virtually no way you will be able to substantiate them. Measure what can be measured and leave it at that.
External validation is an important source of Level 4 data.
L&D is often best described in terms of correlation rather than causal relationships, e.g. ‘This form of intervention will, among other things, increase the possibility or probability of a certain outcome being achieved’ or ‘A will contribute to B’ rather than ‘A will necessarily result in B’. It’s a matter of professional judgement.
The stronger the linkage between L&D interventions and desired learning outcomes at the planning stage, the easier it will be to evaluate afterwards. When linkages between L&D interventions and desired outcomes are tight, even Level 1 & 2 results can be extrapolated as probable influences on outcomes at Levels 3 & 4. This is a ‘reasonable assumption’ principle.
Where group evaluation is concerned, collective subjectivity may be regarded as bordering on ‘objectivity’.
Don’t get too carried away with over‐sophisticated methodologies. A good simple framework for assessing capability before and after an intervention is to map capabilities against:
A Aware (‘I know what this is’)
B Basic (‘I can do this with support’)
C Competent (‘I can do this well in my own job’)
D Distinguished (‘Others look to me for input on this’)
E Expert (‘I write/speak on this externally’)
Set learning objectives at a realistic level relative to (a) where participants are now, (b) the level of input to be provided and (c) realistic opportunities for application.
Decide whether you are trying to demonstrate that learning has taken place or that a baseline of competence has been achieved.
Be clear about what you want to map evidence of learning against, e.g.
Target (e.g. specific learning objectives)
Time (e.g. change over time)
Others (e.g. benchmarking with comparable agencies)
Be explicit about and balance your intentions, bearing in mind that each has validity in its own right, e.g.
We do this for motivation
We do this to develop our people
We do this to develop the organisation
Don’t try to prove direct links between personal development and specific business benefits. The relationship is complex and any simplistic correlation probably lacks honesty. Rather, evaluate personal development in its own terms as to whether or not it was successful.
Related comments
If we develop people to fulfil their God‐given potential (not just strictly tied to role specifics), it will help the whole organisation to mature and grow in capability.
Effective learning is concerned less with the impact of one intervention and more with a chain of inputs from various stakeholders that form a learning process. In this respect, personal development plans are better considered as on‐going learning plans.
Don’t seek 360-degree feedback on everything from everyone. Choose the right people for the feedback you want. For example, if you want feedback on your leadership, ask those you lead; if on your teamworking, ask team colleagues.
Three basic questions to elicit feedback from others:
What did I do well?
What could I do better?
What do you think should be the focus of my development?
Part of the L&D equation is, “What is this person capable of learning?”
Alongside training, CPD records should include reference to (a) development and maintenance of external networks and relationships and (b) specific relevant experiences (e.g. time spent, scope of involvement, size of organisation, levels of responsibility, special circumstances, etc.).
A straightforward way of arranging organisational training budgets:
Individual and team‐specific training: the relevant team holds the budget and L&D acts as internal consultant.
Corporate, strategic or cross‐cutting: the L&D Team holds the budget and acts as internal consultant and manager of the process.
Another way of managing budgets:
Each individual is allocated budget to address his or her own development plan objectives in a way that he or she chooses.
Each team is allocated budget to address priority team‐specific development objectives in a way that it chooses.
The L&D Team is allocated a central budget to use for strategic, corporate and generic development priorities in a way that it chooses.
L&D needs to evaluate the quality of its own service delivery in 3 areas:
How effective and efficient are its processes (e.g. L&DNA, PDP, CPD)?
How much of its time is genuinely spent adding value to stakeholders?
What is its return on investment in initiatives and programmes?