PEXiS - performance experts in safety - behind human error
the new look primer
(this is also available as a downloadable .pdf with a companion piece)

The usual judgment after an accident is that human error was the cause. In other words, human error is the stopping point for an investigation and ends the learning process.

However, just as recent celebrated accidents in medicine have directed attention to patient safety, earlier highly visible accidents in other industries, such as power generation and transportation, drew attention to issues surrounding the label "human error" (e.g., Three Mile Island in 1979; the capsizing of the Herald of Free Enterprise in 1987; various aircraft accidents). The intense interest in these accidents led to sustained, cross-disciplinary studies of the human contribution to safety and accidents.

To make sense of these accidents, as well as other less celebrated cases, researchers found a second, multi-faceted story hidden behind the label of human error, one that revealed patterns in how systems fail. Going behind the label human error points the way to effective learning and system improvements. In other words, the label human error should be seen as a starting point for investigation.

The result of research that pursues the "second" story has been a "new look" at the human contribution to safety and to failure.


multiple contributors

Traditionally, error analysis has focused on identifying the cause. However, one basic finding from "new look" research is that accidents in complex systems only occur through the concatenation of multiple small factors or failures, each necessary but only jointly sufficient to produce the accident. Often these small failures or vulnerabilities are present in the organization long before a specific incident is triggered. All complex systems contain such "latent" factors or failures, but only rarely do they combine to create the trajectory for an accident.

It is useful to depict complex systems such as health care, aviation and electrical power generation as having a sharp and a blunt end. At the sharp end, practitioners interact with the hazardous process in their roles as pilots, spacecraft controllers, and, in medicine, as nurses, physicians, technicians, pharmacists and others. At the blunt end of the health care system are regulators, administrators, economic policy makers, and technology suppliers. The blunt end of the system controls the resources and constraints that confront the practitioner at the sharp end, shaping and presenting sometimes conflicting incentives and demands.

"swiss cheese effect"
swiss cheese graphic


creating safety

Traditionally, accident analysis has focused on individuals as unreliable components. The search for a cause typically stopped at the human or group closest to the accident who, it was determined after the fact, could have acted in another way, a way that we now believe would have led to a different outcome. The response to this verdict has been interventions to protect the system from erratic human behavior.

However, the more researchers have looked at success and failure in complex work settings, the more they realize that the real story is how people learn and adapt to create safety in a world fraught with hazards, tradeoffs, and multiple interacting goals.

When we look at the role of sharp end practitioners in various research investigations, we see that people make safety in the face of the hazards that are inherent in the system. System operations are seldom trouble free. There are many more opportunities for failure than actual accidents. Groups of practitioners pursue goals and match procedures to situations, but they also resolve conflicts, anticipate hazards, accommodate variation and change, cope with surprise, work around obstacles, close gaps between plans and real situations, and detect and recover from miscommunications and misassessments. In these activities practitioners at the sharp end block potential accident trajectories. In other words, people actively create safety when they can carry out these roles successfully.

Safety research tries to identify factors that undermine practitioners' ability to do this successfully. Research has examined how people, individually, as groups, and as organizations, create safety through investigations of how expertise is brought to bear in non-routine situations, how people cope with multiple pressures and demands before an accident draws attention to their practice, and how some organizations achieve "high reliability."


feedback and recovery from incipient failure

The "new look" at human error has shown that robust, "high reliability" individuals, teams, systems, and organizations are able to recognize systemic vulnerabilities and how situations evolve toward hazard before negative consequences occur.6 This means that processes involved in detecting that a situation is heading towards trouble and re-directing the situation away from a poor outcome is critical to safety--the concepts of error tolerance, detection and recovery. Evidence about difficulties, problems, and incidents reveals information about the underlying system – systemic vulnerabilities that can create or propagate trajectories toward failures.

"High reliability" organizations also value such information flow, use multiple methods to generate this information, and then use this information to guide adaptive and constructive changes without waiting for accidents to occur.


hindsight biases

The hindsight bias is one of the most reproduced research findings relevant to accident analysis and reactions to failure. Knowledge of outcome biases our judgment about the processes that led up to that outcome.

In the typical study, two groups of judges are asked to evaluate the performance of an individual or team. Both groups are shown the same behavior; the only difference is that one group of judges is told the episode ended in a poor outcome, while the other group is told that the outcome was successful or neutral. Judges told of the negative outcome consistently assess the performance of the people in the story as more flawed than do judges told that the outcome was successful. Surprisingly, this hindsight bias is present even if the judges are told beforehand that outcome knowledge may influence their judgment.

Hindsight is not foresight. After an accident, we know all of the critical information and knowledge needed to understand what happened. But that knowledge is not available to the participants before the fact. In looking back we tend to oversimplify the situation the actual practitioners faced, and this tends to block our ability to see the deeper story behind the label human error.

Researchers use methods designed to remove hindsight bias to see the multiple factors and contributors to incidents, to see how people usually make safety in the face of hazard, and to see systemic vulnerabilities before they contribute to failures.

[hindsight bias figures: "before the accident" and "with hindsight," showing the effect of hindsight on our view]




organizational factors in failure

Traditionally, accident analysis has focused on the sharp end. However, the more researchers have looked at success and failure in complex work settings, the more they realize that the real story is how resources and constraints provided by the blunt end shape and influence the behavior of the people at the sharp end -- organizational factors. Reason summarized the results: “Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects... Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking” (1990, p. 173).

creating cultures of safety and learning

When hindsight bias leads to the judgment "human error," organizational learning stops. Accountability degenerates into a search for a culprit (blame), and threats of punishment follow in the mistaken belief that this will repair a fallible human component. Research on safety culture shows that an open flow of information about vulnerabilities is the lifeblood of safety. In other words, blame and learning are in conflict.

A simple test of a safety culture is to examine what happens after a failure or other event creates an opportunity to learn about how safety is made and sometimes broken. Are calls for action focused on the “other”, those people who are not as careful, well intentioned, motivated, or knowledgeable as I am? Are proposed changes just those that other groups should make? There is a curious belief that because I am well intentioned and motivated in my role, I am immune to the processes that contribute to failure for other people.

The basic driving force to create a safety culture is the need to demonstrate that the goal of all, blunt end and sharp end, is learning to make the system work better. Sharing new knowledge and the willingness to act on this knowledge, especially when it conflicts with economic and other goals, is fundamental to a culture of safety. In other words, safety is not a commodity, but a value.

side effects of change

Systems exist in a changing world. The environment, organization, economics, capabilities, technology, and regulatory context all change over time. Even the current window of interest in and opportunity to improve safety is itself a mechanism for change. This backdrop of continuous systemic change ensures that hazards and how they are managed are constantly changing.

The general lesson is that as capabilities, tools, organizations and economics change, vulnerabilities to failure change as well -- some decay, new forms appear. The state of safety in any system is always dynamic, and stakeholder beliefs about safety and hazard also change. Progress on safety involves anticipating how these kinds of changes will create new vulnerabilities and paths to failure even as they provide benefits on other scores.

For example, new computerization is often seen as a solution to human performance problems. Instead, consider potential new computerization as another source of change, and examine how this change will affect roles, judgments, and coordination. This information will help reveal side effects of the change that could create new systemic vulnerabilities.

Typically, it is the complexity of operations that contributes to human performance problems, incidents, and failures. Changes, however well-intended, that increase or create new forms of complexity will produce new forms of failure, in addition to their other effects.

Armed with this knowledge we can address these new vulnerabilities at a time when intervention is less difficult and less expensive (because the system is already in the process of change). In addition, these points of change are opportunities to learn how the system actually functions and sometimes mal-functions.

Another reason to study change is that organizations are under severe resource and performance pressures from stakeholders. First, change under these circumstances tends to increase coupling, that is, the interconnections between parts and activities, in order to achieve greater efficiency and productivity. However, research has found that increasing coupling also increases operational complexity and increases the difficulty of the problems practitioners can face. Second, when change is undertaken to improve systems under pressure, the benefits of change may be consumed in the form of increased productivity and efficiency and not in the form of a more resilient, robust and therefore safer system.

To move forward, one simple strategy is to examine how unintended effects of economic, organizational and technological change can produce new systemic vulnerabilities and paths to failure. Future success depends on the ability to anticipate and assess the impact of change to forestall new paths to failure.

sharp-end/blunt-end factors

[sharp-end/blunt-end figures: basic sharp-end/blunt-end graphic; adapting to surprise; coordinating knowledge; mediating goals; organizational learning]

[drift towards failure graphic]