However, in an emergency -- a co-worker has a limb stuck in the machine, say -- when people are shouting, the operator presses the wrong button. A simple change such as colouring one button red would make the error less likely. But by how much?
With more complex displays and controls, on which changes would be more expensive, how would you calculate the benefit of any alteration to the user interface? How can you decide which of two equally expensive changes would provide the greater benefit?
This was a dilemma that the nuclear industry faced after the part-meltdown of a reactor at Three Mile Island, Pennsylvania, in 1979. In response, the US Nuclear Regulatory Commission produced the Handbook of Human Reliability Analysis (Swain and Guttmann, 1983). In its 728 pages (bit.ly/32Zh1zl) it attempts to put numbers on the likelihood of different user errors, given different types of job aids, and to describe the performance-shaping factors (PSF) that can increase or decrease those error probabilities.
From this analysis, the technique for human error-rate prediction (THERP) was born. Two years later, Jerry Williams published proposals for the human error assessment and reduction technique (HEART), based on his work in the UK energy sector. HEART gave greater consideration to a wider range of PSF, including operator inexperience and information overload.
THERP and HEART provided analysts with evidence-based human error probabilities (HEPs). The human components of a system could be shown in an event tree, alongside failures of valves, alarms and response systems. Decisions about user interface design, procedures and job aids could be based on calculations rather than guesswork.
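As a rough illustration of how such a calculation works (the probabilities below are invented for the example, not taken from THERP or HEART tables), an event tree treats a human error probability like any other component failure, multiplying the chances of success along each branch:

```python
# Illustrative event tree: alarm sounds -> operator acts -> relief valve opens.
# All probabilities are invented for this sketch, not drawn from THERP or HEART data.

p_alarm_fails = 0.01      # hardware: the alarm does not sound
p_operator_error = 0.05   # human error probability: the operator responds incorrectly
p_valve_fails = 0.02      # hardware: the relief valve does not open on demand

# The response succeeds only if every element on the branch succeeds.
p_success = (1 - p_alarm_fails) * (1 - p_operator_error) * (1 - p_valve_fails)
p_failure = 1 - p_success  # the response fails for any reason

print(f"Probability the response succeeds: {p_success:.4f}")
print(f"Probability the response fails:    {p_failure:.4f}")
```

A design change that halved the operator's error probability could then be compared, in the same units, with a change that made the alarm or the valve more reliable.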
To apply THERP (or HEART) there must first be a task analysis (see IOSH Magazine September 2018: bit.ly/2kJyriW). At each node of this process, the types of human error possible need to be defined. For example, following a smoke detector alert, the security guard's task is to go to the location to check for fire. Errors could include: doing nothing (omission); going to the wrong location (selection); going to the right location but too slowly (timing); or immediately phoning the fire brigade without checking (sequence). Other error categories include quantity (doing too much or too little of something) and extraneous acts (doing something unrelated).
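One way to make that step systematic (a sketch only; the categories below follow the examples in the text rather than the full THERP taxonomy) is to walk every step of the analysed task against a fixed checklist of error modes:

```python
# Sketch: prompt the analyst with candidate error modes for each task step.
# The categories follow the examples in the text, not the full THERP taxonomy.

ERROR_MODES = ["omission", "selection", "timing", "sequence", "quantity", "extraneous act"]

task_steps = [
    "Receive smoke detector alert",
    "Identify the location of the detector",
    "Go to the location",
    "Check for signs of fire",
    "Call the fire brigade if fire is confirmed",
]

# Build a simple worksheet: every (step, error mode) pair for the analyst to review.
worksheet = [(step, mode) for step in task_steps for mode in ERROR_MODES]

for step, mode in worksheet:
    print(f"{step:45s} | possible {mode}?")
```

Most of the resulting pairs will be judged not credible, but reviewing them forces the analyst to consider each error type explicitly rather than only the failures that come to mind first.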
Techniques such as THERP and HEART have their limitations. Swain and Guttmann admitted there was a "paucity of actual data on human performance" on which to base the HEPs, and the time required to apply the techniques is unlikely to be practicable in most occupational health and safety settings.
However, some generalised features of THERP are useful to understand when carrying out a risk assessment. The first is the importance of understanding the task. If the members of a team considering 'changing a fuse' each have a different idea of what the task involves, the assessment will be unreliable. If there is not the resource for a full-blown task analysis, even a clear description of the task being studied would improve many assessments.
A second lesson from THERP is to understand what makes errors more (or less) likely. For example, performing rule-based actions when written procedures are available and used could be 100 times less error-prone than performing the same actions when written procedures are not available. You are ten times more likely to leave a step out of a procedure than to add a step in, so instructions that remind people what they should do will be more effective than ones telling them what not to do.
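HEART makes this kind of adjustment explicit: the analyst starts from a nominal error probability for the generic task type, then scales it up for each error-producing condition according to how strongly it is judged to apply. The sketch below uses the commonly cited HEART weighting formula with invented numbers; the published tables in Williams' papers would be needed for a real assessment:

```python
# Sketch of a HEART-style adjustment: a nominal HEP scaled by error-producing conditions (EPCs).
# All numbers are illustrative only, not the published HEART values.

nominal_hep = 0.003  # assumed nominal probability of error for this generic task type

# Each EPC has a maximum multiplier and an assessed proportion of affect (0 to 1)
# reflecting how much of that condition's full effect applies to this task.
epcs = [
    {"name": "operator inexperience", "max_effect": 3.0, "proportion": 0.4},
    {"name": "information overload",  "max_effect": 6.0, "proportion": 0.25},
]

hep = nominal_hep
for epc in epcs:
    # HEART-style weighting: assessed effect = (max_effect - 1) * proportion + 1
    assessed_effect = (epc["max_effect"] - 1) * epc["proportion"] + 1
    hep *= assessed_effect

print(f"Adjusted human error probability: {hep:.4f}")
```

The same structure also shows where the leverage lies: improving a condition with a large multiplier, such as providing and enforcing a written procedure, cuts the final probability far more than tinkering with a minor one.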
The final lesson to take from THERP and HEART is a change of mindset. Too much behavioural safety literature identifies the correct behaviours as something the worker can choose to do, or not to do.
THERP reminds us that for every physical or cognitive task there is a probability of error. It also reminds us that reducing -- or increasing -- the probability of those human errors is the responsibility of those who design the physical and psychosocial environment in which people work.