Trainers have their own version of the iterative four-step management method, "plan, do, check, act", which is to analyse needs, design training, deliver training and evaluate (see figure at the bottom). Many companies spend money and time on the first three steps, then neglect the fourth. A lot of training receives only a token check or none at all.
Three of the most common problems are: not building evaluation in from the start; relying solely on course delegate feedback ("happy sheets"); and failing to separate training's effect when it is just one of several measures taken to resolve a safety issue.
Much training lacks clear learning objectives. No training session of any type or length should begin without a clear statement of its purpose, and, ideally, attendee buy-in that achieving those aims will benefit them and their organisation.
You should try to answer three key questions:
What will participants know at the end that they didn't know already?
What will participants be able to do that they couldn't do before?
How will their behaviour change as a result of their participation?
Key factors for creating the best learning environment:
The temperature is comfortable: not too hot or too cold, no draughts.
The lighting is suitable; blinds allow videos to be seen clearly.
The seats are comfortable for the duration of the course.
Everyone is able to see the tutor and, if used, the screen, flipchart or white boards.
You have provided pens and paper and any other materials needed (such as name badges, case studies and test sheets).
If writing is required, participants have something on which to rest.
You have made arrangements for refreshments and comfort breaks.
Unless these conditions are met, you cannot evaluate the training properly. How else can you measure success?
Although feedback forms are an essential part of the evaluation process, they capture only limited information about whether the training has met its overall objectives (the most important aspect of evaluation). Crucially, they present feedback from only one of the key training "clients": the attendees, and how they perceived the effectiveness and value of the training, usually gathered at the end when everyone is keen to get away. They do not capture the views of the managers who sent employees on the course, nor do they reflect what benefit (if any) the organisation has gained from its investment in the training.
Finally, where an organisation is actively looking to improve safety, training is often just one aspect of the solution; it is rarely the only action taken. In one company, for example, management wanted to show a link between stress management training and a reduction in costly stress-related sickness absence. But it could not prove that the training alone had brought about the improvement, nor separate its influence from that of other measures such as return-to-work interviews.
If this strikes a chord, take heart: there are practical measures that can make training evaluation much more effective. Seven of the most important are listed below, framed as questions you can use to gauge the success of your evaluation.
Do all courses have clear objectives, shared with trainees? Course evaluation is impossible without the training having a clear purpose to start with -- and this should be stated. If the delegates don't even know why they are there, it's unlikely they will get much out of attending.
Do you have the right trainer? Trainers should be experts in their field, but subject knowledge is not enough by itself; they also need to be able to connect with the audience and to bring the content to life.
Have you got the admin right? Course admin needs to be effective, so that people go to the right place at the right time. Just as important is to maintain proper records, which will provide vital evidence should the trainer need to defend liability claims -- possibly many years in the future.
Is evaluation built in throughout the training cycle? Training planning and design should cover not only the course content and how it will be delivered but also a method for evaluating success, tied back to the overall training objectives. For knowledge, use a simple test at the end of the training to prove what people have learned; it is even more meaningful if the trainer tests attendees both before and after the course, so the difference the training has made is clear. (Knowing there is a test at the end also encourages attendees to concentrate during the training, because they don't want to miss important information.) For skills, such as driving a forklift safely, putting up a safe scaffold or checking an item of electrical equipment, the training is not complete without a practical check that those taking part can now undertake these tasks competently. It is also important to get medium-term feedback from managers, because they should be able to see a difference after the training; if it doesn't make any difference, how can it have been worth doing? Ideally, managers should attend the training, if only to introduce it: this shows ownership, promotes buy-in and helps connect the training with the day job.
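The before-and-after test comparison described above amounts to simple arithmetic. As a minimal sketch (the attendee names, scores and pass mark are all hypothetical, not taken from the article), the gain per attendee and the group average could be worked out like this:

```python
# Hypothetical pre/post knowledge-test comparison. All names, scores and
# the pass mark are illustrative assumptions for this sketch.

def evaluate_knowledge(pre_scores, post_scores, pass_mark=70):
    """Return per-attendee gains, the average gain, and who passed the post-test."""
    # Gain per attendee: post-course score minus pre-course score.
    gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
    avg_gain = sum(gains.values()) / len(gains)
    # Attendees meeting the (hypothetical) pass mark on the post-course test.
    passed = [name for name, score in post_scores.items() if score >= pass_mark]
    return gains, avg_gain, passed

pre = {"attendee_a": 45, "attendee_b": 60, "attendee_c": 55}
post = {"attendee_a": 80, "attendee_b": 75, "attendee_c": 90}

gains, avg_gain, passed = evaluate_knowledge(pre, post)
print(gains)     # per-attendee improvement
print(avg_gain)  # average improvement across the group
print(passed)    # attendees at or above the pass mark after the course
```

The point of the pre-test is the baseline: without it, a high post-course score cannot show what difference the training itself made.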
Does course design/delivery reflect training effectiveness research? A large body of academic research has examined the key factors that make teaching and training effective. Four of the key findings, and what they mean for trainers, are shown in the box (see above).
Is training delivered in the right environment? The right environment is crucial if training is to be successful. For people used to being on their feet all day, or working outside, sitting in a training room for hours at a time can be stressful. The box on p 5 provides some essential points to consider when setting up the training space.
Do you get the most from external trainers? Here the key is to build in evaluation right from the start and to hold the provider to account. For example, when selecting providers ask, "How will you prove to us that your training has met the objectives we have set?" Good training suppliers actively seek feedback that they are on the right lines and that they are keeping key people happy: you as the organiser, the individuals being trained and the organisation that will benefit from their training. But whoever does the training, the acid test is "did it make any difference?" If no, you are wasting valuable resource; if yes, it justifies more training -- and doing it even better. Either way, evaluation is the only way to find out.