In most other areas of endeavour we learn from triumphs, but in safety the focus is often on disaster. We are knowledgeable enough to realise that we cannot simply blame the end user, but we still try to unpick long causal chains of errors -- of end users, of senior management, of designers -- seeing human error as something to be designed, managed or reinforced out of an organisation.
Some approaches focus on the end-user errors, whether deliberate violations or mistakes. Behaviour-based safety approaches, for example, assume that the systems have been fixed, and that persuading people to wear their personal protective equipment and complete their checklists should be the focus of effort. Other approaches concentrate on organisational faults -- which still amount to human errors, just further up the management chain. Critics of Herbert Heinrich's 1930s methods often forget that his "fault" domino did not refer just to the human errors of front-line workers. Heinrich's use of the phrases "unsafe design or construction" and "hazardous arrangement" suggests he was well aware of the role of errors in design and management.
Error definitions
"A generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency." (Reason, 1990).
Although Lisanne Bainbridge and Charles Perrow challenged the idea that human error could be identified and eliminated in complex modern systems, their writing does not appear to challenge accepted definitions.
Jens Rasmussen points the blame clearly at the designers of the equipment and tasks with his definition of human error as "instances of man-machine or man-task misfits". In the second edition of Man-made Disasters, authors Barry Turner and Nick Pidgeon suggest human error is "a convenient justification for the rather different social process of blame allocation".
In Safety-I and Safety-II (Ashgate, 2014), Erik Hollnagel surrounds the words "human error" with quotes, preferring to consider "performance variation". In an earlier publication he defines human error as "one explanation out of several possible for an observed performance" (1983).
Sidney Dekker's Field Guide to Understanding Human Error (CRC Press, 2014) defines his new view of human error as "a judgement that we make after the fact; information about how people have learned to cope (successfully or not) with the complexities and contradictions of real work". Elsewhere, he has stated that "there is no such thing as human error".
James Reason's "Swiss cheese" model shows holes (failings) in layers of protection at the managerial level; when the holes line up, they let an accident trajectory pass through these "slices", so that when the end user performs an unsafe act, an accident occurs.
Through multiple books, Sidney Dekker has tried to persuade organisations to move away from seeing human error as the fault (culpable or otherwise) of the individual and towards seeing it as a symptom of problems in the organisation.
As well as the arguments over whose errors are of most concern -- end users', management's, designers', or planners' -- there have been debates over how best to eliminate mistakes. Early approaches considered "error prone" people (see Error proneness box below) could be weeded out of the workplace. Other efforts have focused on training, human factors, reinforcement or punishment.
Human factors experts Jens Rasmussen and James Reason are well known for their efforts to categorise human error (see Error definitions box). By categorising an error, it is argued, you are better able to prevent it. Training reduces skill-based errors, checklists improve the use of rules, and a better understanding of the system and of the potential for mistakes reduces the likelihood of knowledge-based faults. But whomever they blamed, and however they thought the problem could be solved, there has been a consensus that human error is something to be identified and eliminated.
"However comforting the promise of an infinite tidiness [...] there are limits on our ability to reduce uncertainty, to master all of the open-ended and perverse qualities of our environment, and upon our ability to prevent disaster. If we start by recognising that instability lies at the heart of the world, then we may come to realise that the optimism and the assertion of certainty which enables life to create and spread order cannot completely overcome this instability."
In other words, even if we try to understand the chains of errors that "cause" accidents, disasters will still happen. Turner also emphasised how organisations could have an amplifying effect on individual human errors, making things worse rather than better.
Categorising human error
Jens Rasmussen's 1982 paper Human Errors: a taxonomy for describing human malfunction in industrial installations, in the Journal of Occupational Accidents, defined three categories of human performance that could be subject to errors:
Skill-based: more or less subconscious routines, with performance controlled by stored patterns of behaviour -- an experienced driver does not need to think consciously about changing gear and using the clutch at the same time, for example. Errors are related to variability.
Rule-based: performance in familiar situations controlled by stored rules. Typical errors include wrong classification of the situation or the failure to recognise the need to apply a particular set of rules.
Knowledge-based: required in unfamiliar situations for which actions must be determined from an analysis and a decision based on knowledge of the functional, physical properties of the system and the priority of various goals. Error mechanisms can be defined only in relation to the goal of the task.
James Reason adopted these performance categories to define skill-based slips (substituting one step for another), skill-based lapses (forgetting a step), rule-based mistakes (choosing the wrong rules) and knowledge-based mistakes (making the wrong decisions).
Reason added a further category of human failures in the form of violations, which could be subdivided into routine, situational and exceptional. By 1997 Reason had developed a "culpability decision tree" which had nine possible outcomes, with decreasing culpability from "malevolent damage" to "blameless error".
In 1983 psychologist Lisanne Bainbridge warned of the "ironies of automation": the more we automate processes, the less capable people become of coping with problems when they occur. The less frequently we have to intervene to make a system work, the less knowledge we have of how it works (leading to more knowledge-based mistakes).
In 1984, Yale sociologist Charles Perrow published Normal Accidents (Princeton), in which he argued that our socio-technical environments are now so complex that accidents are inevitable. And yet, despite these often-cited works, we continue as though we can identify all hazards, proceduralise all necessary controls and audit the procedures, and as though accidents will occur only if people (whether frontline or management) make errors or choose to violate codes.
There is an alternative way to look at behaviour that is labelled "human error" (see Error proneness box below). Perhaps errors are not a problem to be solved, but an innate attribute of human performance that is essential to learning and development. As a leading proponent of this idea, safety academic Erik Hollnagel has attempted to redefine safety. If old safety (or Safety I) is the absence of accidents, or "a condition where the number of things that go wrong is acceptably small", the study of safety will cover those things that go wrong, which inevitably leads to a focus on errors.
Safety II, by contrast, is where as many things as possible go right; it should lead to a focus on desirable outcomes and on encouraging more of them. Safety I assumes that error-free procedures can be written, and that variability can and should be eliminated; Safety II sees variability in human performance as indispensable.
This is not a new idea. In 1982 Rasmussen explained: "Human variability is an important ingredient in adaptation and learning, and the ability to adapt to peculiarities in system performance and optimise interaction is the very reason for having people in a system."
He went on to suggest that the way to optimise performance was to provide people with opportunities for trial-and-error experiments, with human error considered as an unsuccessful experiment. In some situations, we can experiment because we have the time and resources to reverse the effect if it is not what we wanted. Say an area of floor has been identified as being particularly slippery, so a new method of cleaning is proposed. After the new method is adopted, if the problem remains, other methods can be tried. Complex systems such as flight decks and air traffic control do not allow for trial and error, so people are trained using simulators, which allow for unsuccessful experiments without the unacceptable consequences normally associated with them.
Rasmussen does not claim to have originated the idea that the variability that can lead to disaster and the variability that can lead to triumph are one and the same, crediting the philosopher and physicist Ernst Mach in 1905: "Knowledge and error flow from the same mental sources; only success can tell the one from the other."
Rasmussen also identified the problem that Hollnagel tackles: that human errors are identified after the fact, only when a system performs unsatisfactorily. But where Rasmussen's solution to this was ergonomics and human factors -- design the machines and tasks to fit the way people act and think -- understanding Hollnagel's Safety II requires us to stop talking about errors, and instead talk about variability.
Error proneness
Academics and practitioners have argued about the extent to which human error is the cause of accidents and have proposed different theories about the cause of error itself. In the 1920s, a study of drivers found an uneven distribution of injuries, and suggested the term "accident prone" to describe a personality type. Herbert Heinrich's domino theory of accident causation reflected this attitude, suggesting that users' faults were due to their "social environment and ancestry". In Human Safety and Risk Management (CRC Press, 2006), Ian Glendon et al list some personality characteristics that have been linked to safety behaviour:
Aggression predisposes people to be unwilling to endure inconvenience and frustration which they might associate with taking safety precautions, obeying speed limits or wearing personal protective equipment.
Anxiety can result in more obsessive checking of actions and a lower tolerance for danger; this can be positive in increasing caution but negative in interrupting the smooth flow of routine actions.
Extroversion might lead individuals to seek out sensation, including risk. Extroverts are regarded as performing less well on tasks requiring sustained vigilance and on easy, monotonous tasks, and as less able to cope with sleep deprivation. In some studies, extroverts had more injuries and more vehicle crashes than introverts.
Introversion has been found to make people less able to cope with noisy, stressful environments, or demanding situations requiring attention on multiple tasks.
Impulsiveness can make people respond faster but less accurately than those with low impulsivity, though when task requirements are complex those high in impulsivity can be slower.
Such findings are often the result of a single study, and each personality trait has strengths as well as weaknesses. Glendon et al conclude: "No personality trait for accident proneness has ever been isolated and it is unlikely to be worth looking for."
If we had always followed the procedures provided by those before us, we would still be carving our own tools, cooking on an open fire and riding horses, albeit with written procedures for the storage of sharp flints, the avoidance of burns, and controls for zoonoses -- diseases that can be transmitted to humans from animals. Humans are innovative -- we think of better ways of doing things, we try them, and if they work we carry on doing them. We are not consistent, and it is that inconsistency -- or variability -- that has led to progress.
The same behaviour that "caused" an accident today was the behaviour that yesterday got the job done successfully and quickly. The can-do attitude that won the contract is the same one that leads to an accident because of driving when tired; the culture that landed the first people on the moon was the same one that led managers to override the advice of engineers and launch the space shuttle Challenger on a near-freezing day in 1986.
In the short term it might be possible to specify how someone should climb a particular ladder safely to clean a particular window. A different window might need a different approach -- and we don't want to prevent someone finding a better way to clean the window that doesn't involve a ladder.
As the Loughborough University researchers in IOSH's OSH in a Changing World research programme found (bit.ly/2ri574Q), workarounds and dynamic risk assessments are recognised by many as acceptable forms of adapting existing procedures or, in some cases, creating procedures on the spot. Each time a business sends a vehicle out on the road, the driver is expected to adapt their focus to the circumstances. For more complicated systems it is even more important that we have people who can think through the options and determine the best approaches. The more complex our systems become, the greater is the need for "performance adjustments" without which, Hollnagel argues, "anything but the most trivial activity would be impossible".
Whereas Reason's accident trajectory was a straight line, suggesting a predictable linear sequence of events, Hollnagel talks of "non-linear emergent outcomes". Small amounts of variability in lots of places in the system (in people and in technical elements) affect each other in unpredictable ways to produce an undesirable effect. Trying to control the variability of each person and each technical component is practically impossible, and in any case undesirable -- since you would lose the benefits along with the disadvantages.
To return to Turner's 1978 hypothesis that organisations amplify individual errors: Hollnagel's ideas challenge us to see how we can instead build resilient organisations, ones that can monitor conditions and adjust quickly to sustain operations and safety. He points out that this approach has the benefit of making organisations better able to respond to business opportunities as well as to safety threats.
Another outcome of considering human variability as something of value is that, rather than investigating just accidents or even near-misses, we should put more effort into talking to people about what they do when the outcomes are positive. Asking people how they avoided an accident, or how they prevented a problem from escalating, might be more revealing than waiting for an accident.