
Diving Safety

Drift is Normal. Being a Deviant is Normal. Here’s Why

What causes individuals and organizations to drift from acceptable standards and behavior? Is it an aberration or something to expect, and what can we do about it? Human Factors coach Gareth Lock takes us for a deep dive into human biases and our tendency to drift, and what that means for human performance.

by Gareth Lock

Header image: a deviant diver on the SMS Cöln, and other pictures courtesy of Gareth Lock, unless noted

In 1994, two US Army Black Hawk helicopters were shot down by two US Air Force F-15 fighter jets over northern Iraq, killing all 26 people on board the choppers. When the story hit the media, it was almost unbelievable that two highly professional aircrews, guided by equally professional operators on the Airborne Warning and Control System (AWACS) aircraft, could mistake the Black Hawk helicopters for Mil Mi-24 Hind helicopters. But they did!

In his excellent book Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq, Scott Snook developed and demonstrated the concept of practical drift, a theory whereby each sub-organisation or team has a certain amount of leeway in how it undertakes its operations. This flexibility acknowledges that you can’t follow the rules exactly to the letter all the time. The problem is that these small deviations compound across the wider system with potentially disastrous results; and, importantly, no one appears to recognize that the drift is occurring. Snook’s event map describes a complicated web of relationships between multiple stakeholders—the tasking organisation, the aircrew in the Black Hawks, the F-15 aircrew, and the AWACS control team—all of whom were doing the best they could with their limited resources and quickly changing circumstances.

Practical drift is similar to the “Normalization of Deviance,” a concept Diane Vaughan developed during her examination of the Challenger Shuttle disaster. Vaughan explored the idea in her 1996 book, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Normalization of deviance has been discussed in a number of recent diving blogs in an attempt to explore the acceptance of the (continued) breaking of a single rule.

Rather than focus on a single rule, we should consider Vaughan’s definition beyond the individual level and look at a larger scale. “Social normalization of deviance means that people within the organisation become so accustomed to a deviation that they don’t consider it as deviant, despite the fact that they far exceed their own rules for elementary safety.” Neil Richardson, a safety and human factors professional (and colleague of mine) operating primarily in the aviation domain, offers another perspective while addressing the same point: “The Shuttle programme was risk-managed right up until the point it wasn’t and the Challenger and crew were lost.”

Risk management vs. uncertainty management

Risk management is often mentioned in the “professional” arm of diving and diver training courses—such as dive master, instructor, and instructor trainer courses—but it is rarely covered in detail during “user” courses or sport diving. Despite this lack of formal content and process, we are constantly managing relevant uncertainties with the goal of providing an enjoyable dive for ourselves and our students and reducing the likelihood of having an adverse event. 

The term “uncertainties” has specifically been used instead of “risk” because of the way that we normally make decisions in an uncertain environment. When managing risk, we are often comparing historical analyses of quantitative data to determine likelihood and consequence using the logical or System 2 part of the brain. However, when we are managing uncertainties, we use a different part of the brain—often described as System 1—which relies on pattern matching, cognitive biases and mental shortcuts. Importantly, System 1 is heavily influenced by our emotions, which is why we often react quickly rather than logically. 

Equating “risk” with “uncertainty” is like conflating the “apple” type of decision-making with the “orange” type of decision-making. They are both decision-making concepts, but they have different processes and applications and can lead to different outcomes.

We need to recognize that the uncertainties we deal with while diving aren’t just focused on physical safety/harm, but also cover legal, reputation, financial, psychological, and social uncertainties and their associated outcomes. Research has shown that the fear of psychological harm can be stronger than the fear of physical harm.

In the diving industry, when something goes wrong, the (social) media and “investigations” often focus on the proximal causes—those that are closest in time and space to the event—of what happened. There is a focus on violations, rule-breaking, human error, recklessness, or direct health issues, and only sometimes do supervisory/instructional factors come into the discussion. Furthermore, the media rarely examines “local rationality” (why it made sense for the individual to do what they did) or the immediate or wider organisational and cultural factors that may have been present.

Local rationality

If we focus on the local rationality to start with, we know that the majority of the time we are operating in System 1 mode, which is fast, intuitive, and pattern-matching based thinking. We are not actively paying attention to everything that we’re sensing; instead, we are picking what we think are the relevant or important factors based on our previous knowledge and experiences, focused by our present goals and expectations, and using those elements of information to make a decision. 

Despite what some would think, you can’t pay 100% attention all the time! This means that we are literally ditching billions of bits of sensory data each day because, in real time, we don’t think those bits are relevant or important. When there are pressures that prevent us from being more thorough, we are trying to be as efficient as possible. These pressures might be related to time, money, peer-pressure, fear of failure, fear of non-compliance, or fixation on goals/outcomes. However, the more we get “right” without thinking about all of the incoming stimuli, the more we use this pattern to reinforce our decision and then repeat it. How often have you heard “We’ve always done it this way?”

Maybe an adverse event would provide a learning opportunity? Unfortunately, the likelihood of adverse events serving as cautionary tales depends entirely upon the biases in our thinking and how those biases shape our interpretation of an event, adverse or otherwise. Four of the most relevant biases are:

  • Outcome bias describes the tendency to judge serious events more critically than minor events. This is because we judge the quality of the decision by the quality of the outcome rather than separating the two. For example, those involved in a fatality will be treated more critically than those involved in a non-fatal event arising from the same conditions; a poorly performing regulator that free-flows in 10 m/33 ft of cold water will be treated differently from the same regulator that free-flows in 40 m/131 ft of cold water because the consequences are more severe.
  • Fundamental attribution bias is the tendency to attribute the cause of an adverse event involving someone else to the individual involved rather than to the situation or context. This is different to when we personally experience failure, as we often blame the situation or context! Conversely, when we personally experience success, we look to our skills and behaviors; but, when others succeed, we have a tendency to attribute their success to the “opportunities” they had.
  • Distancing through differencing is the tendency to discount failures in others as being relevant to ourselves because we are different to the other party in some way, even if the general conditions and context are the same. A recreational OC diver may forget part of their pre-dive sequence because they were distracted, but an experienced OC technical diver may believe that they wouldn’t make that same mistake, even though the conditions were the same.
  • Hindsight bias is the tendency to think that, had we been in the adverse situation, we would have seen the adverse event coming and would have responded differently. Part of this is because we are able to join the dots looking backwards in time, recognising a pattern that wasn’t apparent in the moment.

Rewards

As a result of these biases, we aren’t very good at picking up small deviations in procedures because we experience “good enough” outcomes, and we are “rewarded” for gradual erosion of the safety margins that the original standards were created to address:

• We saved time (or weren’t late) as we skipped through the checks quickly. 

• We saw more of the wreck or reef because we extended the bottom time and ate into our minimum gas margins. 

• We managed to certify a few more students this month, which helped pay the bills, even though we didn’t cover everything to the same level of detail that we normally do.

• We got some really great social media feedback because we took those divers somewhere they hadn’t been before—and shouldn’t have been either—but they loved it.



Rewards come in all sorts of shapes and sizes, but the common factor is the dopamine rush: our brains are wired to favor the feel-good rush of a short-term gain over the prolonged reward of a long-term gain. On the other side of the coin, we are also willing to risk a potential major loss in the future in order to avoid a guaranteed minor loss now. For instance, imagine that you’re entering the water for the “dive of a lifetime” in cold water with a regulator setup that doesn’t breathe too well. You weren’t able to get it serviced because of time/money issues. At the end of this particular dive, you have to do a gas-sharing ascent; someone else was out of gas due to an equipment failure, and both of your second stages free-flow and freeze due to poor regulator performance, increased gas flow, and the cold environmental conditions. The result: two divers out of gas and making a rapid ascent to the surface.

In hindsight, we can see where the failures occurred. But, in real time, the erosion of safety margins and subconscious acceptance of the increased “risk” are likely not considered. In mid-July 2021, I gave a presentation to Divers Alert Network Southern Africa (DAN SA) on the topic of setting and maintaining goals and how goal focus can reduce safety. 

Credit: DAN South Africa

Organisations drift too

This article opened with the normalization of deviance as it related to NASA and the loss of the Challenger Shuttle. The gradual, imperceptible shift from an original baseline through a series of “risk managed” processes and activities resulted in a “new” baseline that was far from acceptable when considered against the original safety argument. This isn’t the first time an organisation has drifted, nor will it be the last.

Organisations are made of people, and there are reward systems in place within organisations which lead to a conflict between safety, workload, and financial viability. The image below from Jens Rasmussen shows this tension and the “safety margins” that are perceived to be in place. The difficulty is that we don’t know how big the gap is between the margin and catastrophe, so we keep pushing the boundaries until we get some feedback (failure) and hope that it isn’t catastrophic.

“Risk Management in a Dynamic Society: A Modelling Problem” Rasmussen, 1997

Another way of looking at this tension and drift is to use a framework from the Human and Organisational Performance (HOP) domain called the Organisational Drift Model, from Sidney Dekker.

The premise here is that safety is “created” by the development of rules, processes, procedures, and a culture which supports adherence to these standards or expectations. In the modern safety domain, these rules, processes, and procedures are called “Work as Imagined” or “Work as Prescribed.” They rarely match exactly the operational environment in which they are going to be used. There are good reasons for that: you cannot document everything that you want your people (instructor trainers, instructors, dive masters, and divers) to do in every circumstance, so there will be gaps between what should be done and what is done. These gaps are filled in by experience and feedback. Some call this common sense, but you can’t develop common sense without personal experience!

As time progresses, there is an increased gap between the “Work as Imagined” (black line) and “Work as Done” (blue line). This gap is risk or uncertainty to the organisation. Not all drift is bad though, because innovation can come from drift as long as it is recognized, debriefed, and intentionally fed back into the system for improvement.

At the same time as individual and team performance is drifting, the operational environment is changing too. There are accumulations which are adding uncertainty/risk to the system: old or outdated equipment, external requirements changing, legislation changes, change of purpose of equipment or accommodation/infrastructure, and many others. Often these accumulations are dealt with by different people in an organisation, so the compounding effect is not seen.

The gap between “Work as Done” and the “Accumulations” line is known as capacity within the system. This capacity is managed by individuals, taking into account their experience, knowledge, skills, and attitudes towards and within the diving environment. Safety does not reside in paperwork, equipment, or individuals; it is created by those within the diving system taking into account all of the resources they have and the pressures they face while balancing workload, money, and safety dynamically. 

However, when the capacity runs out (when the Work as Done line crosses the Accumulations line), an adverse event occurs. This event is now under the spotlight because it is obvious and cannot be hidden, especially if it is very serious. Hindsight clouds our ability to learn because we think the gaps must have been obvious. Effective organisational learning to prevent drift doesn’t need an adverse event. What it needs is a curious mind and the motivation to improve. If we stopped time five seconds before the lines crossed, while we still had capacity, then all of the learning opportunities would still be present and we could examine them. We would be able to see what accumulations were occurring, we would be able to see what Work as Done actually was, and we would be able to increase the capacity of the system, thereby reducing the likelihood of an adverse event. But that requires organisations to recognize that adverse events are outcomes of a complex system with many interactions, a system in which the organisations themselves set and demonstrate the acceptable standards and expectations. The absence of adverse events does not mean that you are operating a ‘safe’ system.

If drift is normal, what can I do about it?

First, recognize and acknowledge that drift exists. We all have a tendency to drift. If drift is occurring, look at the conditions that are causing it without focusing on the drifting individuals themselves. These conditions could include time pressures, financial pressures because of ‘cheap’ courses, lack of experience, high turnover of staff, and low commitment to the sport by divers or dive professionals.

Second, create an environment where feedback, especially critical, context-rich feedback, is the norm. This has multiple benefits:

  • Individuals find out where they are drifting from the standards/expectations which have been set.
  • Organisations find out if their standards/expectations are fit for purpose and where issues about compliance are arising.
  • Accumulations are identified in a timely manner and addressed.

There are a number of blogs on The Human Diver website and our Vimeo channel which help you develop a learning culture, understand how drift occurs via human error, and develop both a psychologically safe environment and a Just Culture. In terms of having an immediate effect, a post-dive/post-project debrief is one of the best methods, and you can download the DEBRIEF framework I created to help facilitate critical, learning-focused debriefs from here: www.thehumandiver.com/debrief

Remember, it is normal to err. It is what we do once we’ve made the error that matters when it comes to creating positive change in the future. If we focus on the individual and their behavior, things are unlikely to improve. However, if we look at the conditions and context, then we have the opportunity to reduce the chances of an adverse event in the future. And if we share those lessons, it isn’t just our organisation or team that improves; the diving community can, too.

Additional Resources

Be There or Be Deviant: HF In Diving Conference 24-25 September 2021


Gareth Lock has been involved in high-risk work since 1989. He spent 25 years in the Royal Air Force in a variety of front-line operational, research and development, and systems engineering roles which have given him a unique perspective. In 2005, he started his dive training with GUE and is now an advanced trimix diver (Tech 2) and JJ-CCR Normoxic trimix diver. In 2016, he formed The Human Diver with the goal of bringing his operational, human factors, and systems thinking to diving safety. Since then, he has trained more than 350 people face-to-face around the globe, taught nearly 2,000 people via online programmes, sold more than 4,000 copies of his book Under Pressure: Diving Deeper with Human Factors, and produced “If Only…,” a documentary about a fatal dive told through the lens of Human Factors and a Just Culture. In September 2021, he will be opening the first ever Human Factors in Diving conference. His goal: to bring human factors practice and knowledge into the diving community to improve safety, performance, and enjoyment.


Community

The Role of Agency When Discussing Diving Incidents: An Adverse Event Occurs—An Instructor Makes a Mistake

Human Factors educator and coach Gareth Lock examines the role of our innate attribution biases and language, in forming our collective judgements when incidents occur—in this case, by considering a student diving injury that occurred during a class. Was the instructor to blame? Was anyone?

by Gareth Lock

Header Photo by Alexandra Graziano

What do you think when you read the following? Who is at fault? Where do you think the failures lie?

“The instructor failed to notice that the gas pressure in one of their four students’ cylinders was dropping faster than expected and, consequently, missed that this particular student had run out of gas. The student then panicked and bolted for the surface, which ended with them suffering an arterial gas embolism.”

It would be normal for the majority of Western-cultured divers to believe that the fault would lie with the instructor, especially as I framed your thought processes with the subtitle, ‘An Instructor Makes a Mistake’. 

The instructor would have had a clear level of responsibility to make sure that the event didn’t happen the way it did, and because the student ended up with an out-of-gas situation and an arterial gas embolism, that instructor needs to be held accountable for the mistakes that were made. 

Financial compensation to the diver might be involved. As for the instructor, specific solutions for ways to prevent future mishaps would be standard. The instructor might be advised to be more aware, to monitor students more closely, and follow standards and/or training.

The problem with this approach is that it can miss significant contributory factors. Over thousands of years, we have developed a mindset that searches for the cause of an adverse event so that we can prevent the same thing from happening again. There are two parts behind this sentence that we are going to look at in this article—agency and attribution.

Agency and Attribution

Photo by Alexandra Graziano.

The first is Agency—an agent is a person or thing that takes an active role or produces a specific effect. ‘The instructor failed to notice the faster-than-normal pressure drop.’ In this example, the instructor is the agent. While we can easily identify the action and agent, we cannot determine from this simple statement whether the instructor intentionally didn’t monitor the gas, whether they accidentally missed the increased consumption rate or leak, whether the student didn’t inform the instructor, or if there was another reason. A reader of this short case study would normally assume that the instructor had some choice in the matter, that they were a free agent with free will, and that a professional with training should know better. This assumption can heavily influence how an ‘investigation’ develops from a blame-worthy event to one where wider learning can happen. 

Research has shown that the attribution of agency is subjective and is swayed by a number of different factors including culture, experience, and the language of the observer. Furthermore, the language used and how this frames the event has also been shown to directly influence the assignment of guilt, blame and/or punishment. This is especially the case if the only reports available are based around litigation and insurance claims, as these are purposely written to attribute blame. 

Societally, and developmentally, we believe that the attribution of cause behind an action is important, especially if it is an adverse event because it allows us to identify who or what needs to change to prevent the same or similar events from occurring in the future. In the out-of-gas event above, it might be obvious to some that it is the instructor who needs to change or ‘be changed’!

The Fundamental Attribution Bias

While agency is relatively clear when we describe an event, where this attribution of agency is applied is very subjective. Attribution theory was developed in the 1950s by Fritz Heider, who described behaviours as attributable either to internal characteristics or dispositions (personality, abilities, mood, attitude, motivations, efforts, beliefs…) or to influences external to the individual that are situational in nature (culture, social norms, peer pressure, help from others, organisational pressures, rules, environmental conditions…). For example, a diving student might not perform as expected despite having been given the training detailed in the course materials. This could be because of performance anxiety, lack of confidence, not paying attention to the demonstrations… (internal or dispositional attribution), or it could be caused by an argument they had had at home that morning, mortgage worries, homework which is due, promotion or threat of being fired, or poorly serviced equipment… (external or situational attribution).

Photo by Alexandra Graziano.

This subjectivity is so powerful and prevalent that there is a recognised cognitive bias called the fundamental attribution bias or error. This bias shows that there is a tendency to look for dispositional attribution when an adverse event involves someone else (they didn’t pay attention, they didn’t have the skills or experience), but the tendency to look for situational attribution when the adverse event involves us (high workload led me to be tired, the students were spread far apart, their gauge was in their BCD pocket). “When explaining someone’s behavior, we often underestimate the impact of the situation and overestimate the extent to which it reflects the individual’s traits and attitudes.” As a consequence, it is much easier to ascribe the failure to the individual rather than to look at the wider situation. This aligns with Lewin’s equation, B=f(P, E), which states that an individual’s behavior (B) is a function (f) of the person (P), including their history, personality and motivation, and their environment (E), which includes both their physical and social surroundings. 

Research has shown that culture can strongly influence how agency is attributed. Those from Western cultures, e.g., Anglo-American or Anglo-Saxon European, have a tendency to be more individualistic in nature, whereas those from Far Eastern cultures have a more collective view of the world, which increases collaboration, interdependence, and social conformity. The research also shows that “compared to people in interdependent societies, people in independent societies are more likely to select a single proximal cause for an event.” Western cultures therefore have a tendency to erroneously attribute control and decision to the human actor closest to the event, even if this was not the case. This has huge implications when it comes to litigation and organisational/community learning.

Self-Serving and Defensive Attribution Bias 

When it comes to an adverse event, those from highly individualistic cultures are more likely to find a way to identify someone other than themselves as the cause, e.g., “the dive center manager didn’t tell me the time had changed, and so I was late for the boat.” Conversely, when we have a successful outcome, we are more likely to look to our own performance and traits (dispositional attribution) rather than the context (situational attribution), e.g., “I had spent time practising the ascents, so my buoyancy was good for the final dive,” without noticing that our buddy was rock solid in the water and provided a very stable platform to reference against. This is known as self-serving attribution bias.

As the severity of the event increases, we mentally distance ourselves further from the traits or behaviours that would have led to this event. “I wouldn’t have done that because I would have spotted the situation developing beforehand. I am more aware than that diver.” This defensive attribution is also known as distancing through differencing.

This is a protection mechanism; if we can shift the blame to someone else because they have a different disposition (internal behaviours/traits), we can convince ourselves that what we are doing is safe, and we carry on with what we were doing in the same way we’ve always done it. This might appear to be simplistic; however, much of what we do is relatively simple in theory. It is how it is woven into our daily lives that makes things complicated or complex.

Photo by Alexandra Graziano.

Language Matters – Invisible Meanings

The subtitle of the first section, “An adverse event occurs. An instructor makes a mistake,” will have invoked a number of mental shortcuts or heuristics in the reader. We will likely assume that the two events are linked and that the instructor’s mistake led to the adverse event. I purposely wrote it this way. That link could be made stronger by changing the full stop to a comma.

Language can have a large impact on how we perceive agency and causality. The problem is that how we construct our messaging is not normally consciously considered when we write or speak about events. As with many other aspects of culture, it is invisible to the actor unless there is some form of (guided) active reflection.

For example, research has shown a difference in how Spanish- and English-speaking participants described intentional and unintentional actions in a series of videos. In one example, the actor in the video would pop a balloon with a pin (intentional) or put a balloon in a box with an (unknown) pin in it, and the balloon would pop as it hit the pin (unintentional). “The participant descriptions were coded as being either agentive or non-agentive. An agentive description would be something like, ‘He popped the balloon.’ A non-agentive description could be, ‘The balloon popped.’” The study concluded that English, Spanish, and bilingual speakers all described intentional events agentively, but English speakers were more likely than the other groups to use agentive descriptions for unintentional events. Another study showed similar results between English and Japanese speakers.



Another powerful bias exists in the form of framing. This is where information is presented to another party in a way that influences their decisions, whether done consciously or not. For example, take two yoghurt pots: the first says “10% fat” and the other says “90% fat free”. The framing effect will more likely lead us to pick the second option, as it seems to be the healthier yoghurt. Now look at how this applies to diving incidents and agentive language: “The diver ran out of gas near the end of the dive” or “Their cylinder was empty near the end of the dive.” The first appears to put the diver at fault, although we don’t know how or why this happened; the second statement is not personal and therefore allows a less confrontational conversation. Consequently, we must be careful with how we attribute agency, as it limits our attention to the context immediately surrounding the person involved. If we want to learn, we have to expand our curiosity beyond the individual and look at the context.

Another example of how language matters and the shortcuts we use is the use of binary oppositions, e.g., right/wrong, deep/shallow, recreational/technical, success/error, or deserved DCS/undeserved DCS. While binary modes might work for technical or mechanical systems (work/don’t work), they are not suited to systems involving people (socio-technical systems) due to the complicated and complex interactions that are present. “They didn’t use a checklist” is often seen as a final reason why something went wrong, as opposed to asking questions like “What sort of checklist should have been used?”, “When would the checklist normally be used?”, “What were others doing at the time?”, or “Which checklist? Manufacturer’s, agency’s, or their own?”

When it comes to these socio-technical systems, we can only determine success or error/failure AFTER the event. If the actors knew that what they were doing would end up as a failure due to an error, they would do something about that ‘error’ before it was too late.

Isn’t this just semantics?

All of this might appear to be semantics, and technically it is, because semantics is the branch of linguistics and logic concerned with meaning. “Words create Worlds” (Heschel and Wittgenstein), for better or worse. Think about how you frame an event or attribute agency, because it WILL impact your own and others’ learning.

Look back at the original narrative in the second paragraph, which was purposely written in the manner it was, and consider where attribution has been placed, how it limits learning and what questions you can ask to improve your understanding of the event. We are cognitively efficient creatures, always looking for the shortcut to save energy. However, this efficiency comes at the expense of learning.

In this event, there were many other factors that we needed to consider, many of which would be focused on the limitations of our cognitive system. We CANNOT pay more attention; it has a limited capacity. What we can do is make it easier to prioritise and focus on the most important and/or relevant factors, and we do this by designing systems that take our limited capacity into account.

Monitoring four students is going to be at the limits of what is safely possible, especially when other factors are taken into consideration, such as instructor experience, visibility, current, task loading, comfort levels, etc. These factors are readily apparent and their significance obvious after the event, but in real time, with all of the other conflicting goals present, not so. When designing systems and processes, try to apply the key human factors principle: make it easier to do the right thing, and harder to do the wrong thing.

As an example of how this language can manifest itself, have a look at any agency training materials which describe adverse events or incidents, and see how agency and attribution are applied and how little the context is considered. For example, the following is from a leadership-level training manual: a supervisor left the dive site before accounting for all of the divers in the group, and two were left behind and suffered from hypothermia. The reason given for the abandonment was that the supervisor was distracted. The material then goes on to say that, despite the supervisor having normally conducted good accounting procedures, this would not help in a lawsuit, as a court would look at the event that occurred, not what they normally did. What is missing is an understanding of how the supervisor came to be distracted and what the context was. This would provide a much greater learning opportunity than the usual ‘make sure you account for everyone otherwise you could be in a lawsuit.’ “We cannot change the human condition, but we can change the conditions in which humans work.”—Professor James Reason.

Summary

We have a tendency, especially in Western cultures, to want to find out ‘who did it’ and ascribe blame to an individual agent. More often than not, the agent is the person who was closest to the event in time and space. In effect, we play the game of ‘you were last to touch it, so it was your fault,’ but this rarely prevents future events from occurring. In reality, divers, instructors, instructor trainers, and dive centre managers are all managing complex interactions between people, environment, equipment, and cultural/societal pressures, with sensemaking only happening after the event.

Photo by Peter Gaertner.

Identifying a single cause of an adverse event in diving is impossible because a single cause doesn’t exist, and yet this is what the language we use focuses on. We look for a root cause or a trigger event for an accident or incident. The research from Denoble et al., which described four stages of fatalities (trigger event, disabling event, disabling injury, and cause of death), misses the context behind the trigger events, and yet it is still used in incident analyses. Compare this to modern safety investigation programmes, which have moved away from a root-cause approach to more systemic approaches like AcciMap or the Human Factors Analysis and Classification System (HFACS) that take into account systems thinking and human factors principles/models.

A response from Petar J. Denoble is included at the end of this article.

There are no formal investigation and analysis programmes or tools in the sports diving sector, so any data that is produced is heavily biased by personal perspectives. However, that gap will be addressed before the end of 2021 when an investigation course will be launched to the public by The Human Diver.

This two-day programme will provide an introduction to a systems- and human factors-based approach to event learning. It will be based on current best practices from high-risk industries and academia, then tailored to and focused on non-fatal events in the diving industry. There will also be a number of research programmes developed over the next year or so which look at incidents, their causality, and how to report them. The methodology will be relevant to fatalities, but those investigations are often undertaken by law enforcement officers or coroners.

Photo by Kirill Egorov.

For the diving community, there is a need to look at how adverse events happen, not by attributing agency to individuals, but by looking wider, at the system and the context, so that we can understand how it made sense for that human agent to do what they did at the time. Ivan Pupulidy covers this clearly in the US Forest Service Learning Review: “In order to change culture, you have to change the assumptions that drive the culture.”

Afternote: This article was heavily influenced by the work of Crista Vesel, whose referenced paper examined agentive language and how it influenced the US Forest Service’s move from the Serious Accident Investigation Guide to a Learning Review. The review allowed more genuine inquiry to occur and uncover the real reasons why serious events, including fatalities, occurred. You can find Vesel’s paper here: “Agentive Language in Accident Investigation: Why Language Matters in Learning from Events.”

Footnotes:

1. Lexico. Explore: agent. http://www.lexico.com/en/definition/agent (accessed July 30, 2021).

2. Vesel, C. Agentive Language in Accident Investigation: Why Language Matters in Learning from Events. ACS Chem. Health Saf. 2020, 27 (1), 34–39.

3. Myers, D. Social Psychology, 11th ed.; McGraw-Hill: New York, 2013; pp 100–117.

4. Fausey, C.; Long, B.; Inamori, A.; Boroditsky, L. Constructing agency: the role of language. Frontiers in Psychology 2010, 1, 1–11.

5. Dekker, S. Why We Need New Accident Models; Lund University School of Aviation: Sweden, 2005.

6. Fausey, C. M.; Boroditsky, L. English and Spanish Speakers Remember Causal Agents Differently. Proceedings of the 30th Annual Meeting of the Cognitive Science Society, Washington, DC, July 2008. https://escholarship.org/uc/item/4425600t (accessed November 13, 2019).

7. Denoble, P. J.; Caruso, J. L.; de L. Dear, G.; Pieper, C. F.; Vann, R. D. Common Causes of Open Circuit Recreational Diving Fatalities. 2008.

8. Learning Review (LR) Guide (March 2017); U.S. Department of Agriculture Forest Service (accessed July 30, 2021).




A Response from Petar J. Denoble

In this article published online on September 2, 2021, Gareth Lock systematically examines the role of innate attribution biases and language, talks about agency and attribution, and explains why incident investigation may fail to help prevent similar incidents from occurring again. As an example of a failed approach, Lock refers to the paper “Common causes of open-circuit recreational scuba fatalities”, which I co-authored with my colleagues in 2008. While I appreciate Gareth’s work in general and the content of this particular article, I have to point out that our paper never intended to do what Gareth assumes and attributes to it.
1. In our paper, we do not investigate individual incidents. Instead, we attempted an epidemiological analysis based on the reported results of separate incident investigations.
2. We do not claim that triggers are the root causes. We provide clear, pragmatic definitions for all four categories we used in the paper.
3. We never attribute agency in the sense of subjective factors; our only agent is similar to an epidemiological agent, like a mechanical agent of injury (boat hitting diver), CO causing intoxication, and similar.
4. We are aware that there were causes beyond what was reported and that in most cases probably there were multiple causes, and we state it explicitly in the paper.
5. We aimed to identify contributing factors that could be targeted with preventive interventions (which we did not prescribe).
6. We assumed that, although we may never know the primordial cause(s), we still could intervene by preventing the domino effect or by interrupting the chain of events leading towards the fatal outcome. If we were not right in assuming it, why bother with teaching divers all possible corrective measures in an adverse event?

I am looking forward to a bright future with much-improved incident analysis methods. I hope that my younger colleagues will have high-quality reports to work with in trying to devise the best preventive interventions.

-PJ Denoble