The term has been given added credibility by its inclusion in an EU regulation (Art 2 of 691/2010) on air navigation services where it is defined more tightly as "a culture in which frontline operators and others are not punished for actions, omissions or decisions taken by them which are commensurate with their experience and training, but where gross negligence, wilful violations and destructive acts are not tolerated".
Some behavioural models of safety and health assume human actions or omissions are the cause of accidents -- if something went wrong, someone must be to blame. The operator hit the wrong button and a colleague was electrocuted. Punishing the operator means there is no need to investigate further.
But what if the start buttons for two machines are unmarked and beside each other and there is no way to isolate the power supply for maintenance? Looking beyond blame might reveal that redesigning the equipment and the process would eliminate similar errors. "For the most part," said Reason, "punishing people does not eliminate the systemic causes of their unsafe acts. Indeed, by isolating individual actions from their local context, it can impede their discovery."
For a while in the 1990s some organisations hailed the idea of a "no blame" culture. No individual would be blamed for their mistakes. But this removes accountability -- in Reason's terms, it leaves no shared understanding of where the line between acceptable and unacceptable behaviour has been drawn. Just as a blame culture prevents us from learning from events, so a no-blame culture can imply that, since no one is at fault, nothing needs to change.
Reason tried to help organisations find the "just" line between blame and no blame by producing a decision tree for determining the culpability of unsafe acts. He asked questions to distinguish between sabotage (actions and consequences intended), reckless acts (actions intended, consequences unintended), system-induced violations and errors, and blameless errors.
Reason added a "foresight test": did an individual engage in behaviour that an average worker would recognise as likely to increase the probability of a safety-critical error? This could include being under the influence of alcohol or drugs, clowning, working when overtired, or using equipment known to be damaged or unsuitable, such as retrieving a broken ladder from a skip.
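Reason's questions can be read as a simple decision procedure, applied in order. A minimal sketch of that logic (the function, parameters and category labels here are illustrative paraphrases, not Reason's own wording):

```python
def classify_unsafe_act(actions_intended, consequences_intended,
                        system_induced, fails_foresight_test):
    """Rough sketch of Reason's culpability decision tree.

    Checks the distinctions in the order the text describes:
    sabotage, recklessness, system-induced acts, the foresight
    test, and finally the blameless error.
    """
    if actions_intended and consequences_intended:
        return "sabotage"          # both the act and the harm were intended
    if actions_intended:
        return "reckless act"      # act intended, consequences not
    if system_induced:
        return "system-induced violation or error"  # look at the system, not the person
    if fails_foresight_test:
        return "culpable error"    # e.g. working overtired, using damaged kit
    return "blameless error"       # no culpability; fix the conditions


# Example: a worker deliberately skips a step, expecting no harm.
print(classify_unsafe_act(True, False, False, False))  # reckless act
```

Laying it out this way also makes Dekker's objection concrete: each branch condition looks crisp, but deciding whether a given act "was intended" or "fails the foresight test" is exactly the negotiable line-drawing he describes.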
But can an algorithm create the right sort of just culture? Sidney Dekker, in his book Just Culture, thinks not, since where the line should be drawn is "infinitely negotiable." He writes: "As a result there really is no line, there are only people who draw it." Who draws the line depends on where the power is held.
Recognising that human error cannot be eliminated from tasks requiring human decisions and actions, Dekker suggests organisations move from a retributive to a restorative just culture: "Retribution asks which rule has been broken, who did it, how bad was the infraction, so what does the person deserve?" By contrast, a restorative just culture would ask: "Who has been hurt or potentially hurt? What are their needs? Whose obligation is it to meet those needs?"
Dekker's restorative just culture or Hollnagel's fair culture might work within organisations, particularly as a means of encouraging near-miss reporting. However, because compensation claims generally turn on proving that someone or something breached a duty of care, it seems unlikely that Dekker's vision of a just culture could become widespread beyond individual organisations.