
In the days when most computers had black screens with green text, the term ‘What You See Is What You Get’ (WYSIWYG, pronounced whizzy-wig) referred to a new generation of computers being developed by companies such as Apple and Xerox, on which documents appeared on screen as they would be printed.
Younger readers will take this for granted, but before WYSIWYG, word processing was almost a programming job. For example, to print a bold ‘Hello’ and an italic ‘World’, your screen might show something like ‘{\bf Hello} {\it World}’.
The phrase itself wasn’t new: it had been used for some time to mean ‘I may not be perfect, but I am what I seem to be, so take it or leave it’, and this sense outlived the computing term in popular parlance.
In 2009, three academics working in Sweden adapted the phrase for a new purpose. Jonas Lundberg, Carl Rollenhagen and Erik Hollnagel titled their Safety Science paper ‘What-You-Look-For-Is-What-You-Find – The consequences of underlying accident models in eight accident investigation manuals’ and introduced us to WYLFIWYF.
The argument made by Lundberg and his two colleagues is not simply that you will find what you are looking for, but that the model of accident causation you bring to an investigation shapes what you find. This suggests a bias that goes beyond the cognitive biases we considered in Clear your head.
Lundberg et al. point out, “It is simply not possible to begin an investigation with a completely open mind.” In the 2008 publication Remaining Sensitive to the Possibility of Failure, Hollnagel’s chapter, ‘Investigation as an Impediment to Learning’, notes, “A root cause analysis implies that accidents can be explained by finding the root – or real – causes”.
Similarly, if you apply Herbert William Heinrich’s domino model, as discussed in Industrial Accident Prevention: a scientific approach (4th edition), you will look for, and therefore identify, social factors such as the environment, inherent faults of people such as impatience, and unsafe acts. If you use James Reason’s Swiss cheese model from his book Managing the Risks of Organizational Accidents, you will still look for, and find, unsafe acts, but you will also find organisational factors (such as poor planning), workplace factors (such as time pressure), and failed defences (such as a failure of supervision or inspection).
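To make the filtering effect concrete, here is a minimal sketch in Python. The categories and the incident are invented for illustration, and neither Heinrich nor Reason proposed anything like code; the point is simply that the same incident record, examined through two different models, yields different findings because each model only asks about its own categories.

```python
# Toy illustration (hypothetical categories and incident): each accident
# model is reduced to the set of factor types its users are primed to seek.
DOMINO_MODEL = {"social environment", "fault of person", "unsafe act"}
SWISS_CHEESE_MODEL = {"unsafe act", "workplace factor",
                      "organisational factor", "failed defence"}

# One incident, tagged with everything arguably present on the day.
incident_factors = {
    "unsafe act": "valve opened out of sequence",
    "fault of person": "impatience",
    "workplace factor": "time pressure from a maintenance backlog",
    "organisational factor": "poor shift planning",
    "failed defence": "supervisor's check was skipped",
}

def investigate(model: set[str]) -> dict[str, str]:
    """Return only the factors the chosen model tells you to look for."""
    return {k: v for k, v in incident_factors.items() if k in model}

print(investigate(DOMINO_MODEL))        # unsafe act and impatience only
print(investigate(SWISS_CHEESE_MODEL))  # organisational factors appear too
```

Neither investigation is ‘wrong’; each is complete by its own model’s standards, which is exactly the bias WYLFIWYF describes.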
What, then, can be done to avoid this type of bias? The authors conclude that most models of accident causation are too linear and too tightly coupled: they assume that one failure inevitably leads to another, whereas in reality there are many factors to consider, and much variability in whether a single action results in a good or a bad outcome.
Although the authors admit that “there is no objectively true description of an accident”, they, perhaps not surprisingly, point to Hollnagel’s own accident model as a solution. The functional resonance accident model was covered in F is for FRAM. FRAM defines six parameters for each function (input, output, time, control, preconditions, and resources) and recognises that each parameter can vary. This explains why an operator action that can occur day after day without incident suddenly becomes the ‘cause’ of an accident when other parameters happen to change at the same time.
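As a rough sketch of that idea in Python (the names, numbers and threshold are invented; FRAM itself is a qualitative method and prescribes no scoring), consider a function whose six parameters each vary a little, and an outcome that depends on their combined variability rather than on any single ‘error’:

```python
from dataclasses import dataclass

# The six FRAM parameters. The numeric 'variability' scoring below is
# purely illustrative: FRAM is a qualitative method, not an algorithm.
PARAMETERS = ("input", "output", "time", "control",
              "preconditions", "resources")

@dataclass
class FramFunction:
    """One function (e.g. an operator action) and how far each of its
    six parameters varies from normal (0.0 = as usual, 1.0 = extreme)."""
    name: str
    variability: dict[str, float]

    def resonates(self, threshold: float = 1.5) -> bool:
        # No single parameter need be 'wrong'; it is the combined
        # variability across parameters that tips the outcome.
        return sum(self.variability.values()) > threshold

# The same operator action on an ordinary day...
routine = FramFunction("open bypass valve", {p: 0.1 for p in PARAMETERS})

# ...and on a day when time pressure and a degraded precondition coincide.
# The action itself is unchanged.
unlucky = FramFunction("open bypass valve",
                       {**{p: 0.1 for p in PARAMETERS},
                        "time": 0.9, "preconditions": 0.8})

print(routine.resonates())  # False: day after day without incident
print(unlucky.resonates())  # True: the 'same' act is now labelled the cause
```

The operator’s action is identical in both cases; only the surrounding variability differs, which is why a linear, single-cause model would mislabel it.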
FRAM might be too complicated for many organisations, but WYLFIWYF is a reminder to take a look at our own incident reporting procedures. Are they written down in a manual that everyone can read, so that the criteria for reporting, investigating and drawing conclusions are in the open? Does the process make an assumption about causes, for example always attempting to find a management root cause, or always looking for an operator error to blame? Even if you don’t have a written process, you are likely to have accident and near-miss reporting forms. Look at the fields on your forms; even the way these are labelled can be a source of bias, as the sketch below suggests. As Hollnagel points out: “There are never any simple ‘truths’ to be found, hence no simple way of learning from experience. Any lesson learned is limited by the assumptions on which the investigation is based.”
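As a final, entirely invented illustration of how field labels embed assumptions, compare two form schemas: the first demands a single root cause and a culprit before the report can even be filed, while the second leaves room for several contributing conditions.

```python
# Hypothetical report-form schemas. The labels alone steer what gets
# recorded: a mandatory 'root cause' field presumes that exactly one
# real cause exists and must be named.
BIASED_FORM = [
    ("description", "What happened?"),
    ("root_cause", "What was the root cause?"),       # presumes one cause
    ("responsible", "Who made the error?"),           # presumes an error
]

NEUTRAL_FORM = [
    ("description", "What happened?"),
    ("conditions", "Which conditions contributed?"),  # allows several
    ("variability", "What was different about this occasion?"),
]
```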