
Education

Decompression Series Part Four: Finding Shelter in an Uncertain World

In the final installment of this four-part series on the history and development of tech decompression protocols, GUE founder and president Jarrod Jablonski weaves together various forays into decompression science, including Brian Hills’ pioneering pearl diver study, the NEDU’s work on deep stops, evidence of individual susceptibility, and probabilistic decompression models, in an attempt to define the state of our understanding. It may give you pause to stop. Feel free to add your comments.


By Jarrod Jablonski

Header photo courtesy of the GUE archives

Did you miss Part III? Read it here.

The human quest to explore below the water’s surface began some 5,000 years ago. Since that time, our species has pursued deeper and longer immersions, charting a course through hundreds of years of diving activity and associated research. Many of the advances in procedure, technique, and equipment are a direct result of the compelling and valuable data and experience documented during underwater explorations. As with many novel activities, this process of advancement required pushing physical and intellectual barriers.

During the 1980s and 90s, advances in technology supported an activity that became known as technical diving. This diving led to the development of ascent practices which were somewhat different from those of scientific, military, and commercial divers. A unique set of needs and limited relevant examples encouraged a great deal of experimentation among these early explorers, including adjustments to breathing gases and the distribution of decompression stops used during their ascent. Some technical divers began using a slower ascent from depth, in the hope this would control the formation of bubbles. These slow ascents became known as “deep stops” and were practiced in the hope they could reduce decompression stress and/or shorten decompression time. 

In fact, the idea of bubble control was not new. During the 1960s, physiologist Brian Hills sought to characterize the profiles of pearl divers who had been operating since the late 1800s. These divers were interesting because they were ascending in two-thirds of the time required by Navy tables, a schedule that would cut even more decompression from most modern-day ascent profiles. Hills believed the reduced decompression times were the result of a unique ascent profile, including stops deeper than those called for by the Navy tables. Years later, technical diving explorers adopted similar techniques while reporting reductions in total decompression time. It is difficult to verify whether this perceived success was real, since the groups were relatively small, not carefully monitored, and simultaneously adjusting numerous other factors during their ascent. Even absent these complications, the generally low risk of decompression sickness can greatly complicate comparisons between different strategies.

The enthusiasm for deep stops likely reached its peak in the late 1990s and was dealt a serious blow by the previously discussed Navy Experimental Diving Unit (NEDU) study that was released in 2011. This study, and others, suggest that deep stops are less efficient and may actually increase the risk of decompression sickness. The reader should refer to part three of this series for discussion and references. This series contends that opposition to deep stops is supported by prevailing research, but that a range of other variables need to be considered in order to effectively develop best practices. These aspects are particularly relevant to experienced divers, some of whom report decompression sickness problems when eliminating slower ascents from depth.

Jarrod Jablonski with the Halcyon PVR-BASC semi-closed rebreather aka “The Fridge.” Photo courtesy of GUE archives

It is not my intent to re-litigate the previous three sections of this article, but an interesting and, I believe, underappreciated aspect of Brian Hills’ pearl diver study provides a nice segue. What I find most interesting are the roughly 3,000 deaths and an unknown number of injuries that helped shape those unique ascent profiles. In other words, how much was this conclusion affected by the elimination of those who were more susceptible to injury, and how much was due to a lack of rigor in the study? Hills concluded that the success of the profiles was “due to the much deeper initial decompression stops used” by the pearl divers. In a similar way, technical divers took note of the history, the encouragement from experts, and the perceived success by those in their community.

Given new and mounting evidence against deep stops, can we now definitively conclude that Hills, the pearl divers, and the tech divers were wrong? Are we sure the perceived success was imagined? If some success occurred, was it more about the generally low levels of risk in decompression sickness? Or could something else worth considering be at play? Asked another way, we might inquire how the conclusions reached by Hills and those technical divers are different from the way modern-day decompression tables have come into being.

The history of pearl diving and deep stops is very different from that of most decompression research in at least two substantial ways. The first difference has to do with methodologies, and the second with objectives. In terms of methodology, most decompression research is conducted using the scientific method: developing a testable hypothesis and, hopefully, crafting well-devised experiments in order to interrogate that hypothesis. Open publication of methods and results, internal and external debate, and reproducibility of results are among the many ways in which a hypothesis will be tested over time, narrowing the results toward either a more or less trusted conclusion. The history of deep stops and, possibly to a lesser extent, that of the pearl divers shares few, if any, of the rigors commonly associated with the scientific method.

Individual Susceptibility

Looking to the history of decompression research, the objective of a particular study is implicit, if not explicit, in the development and testing of a hypothesis. With decompression profiles, we seek to balance the safety of the majority while not unduly affecting the whole. For example, we seek ascent profiles that keep a high percentage of individuals from being injured while not greatly extending the decompression time of the group as a whole. What would the results look like if we instead sought the most efficient decompression for a select minority of individuals?

Some researchers joke among themselves that they already know who will get bent among a group of test individuals. This is because research trials require many volunteers drawn from a relatively small population of willing participants, meaning that some of the same individuals are often involved in multiple experiments. This is not to say that a few individuals have skewed all research, but rather that a minority of subjects in any research project can affect the outcome by being particularly susceptible to decompression stress.

This individual susceptibility is likely no surprise to anyone and is relatively well established among researchers, as is the variability in one individual from one day to the next. We see such variability in almost every conceivable area of our lives, affecting the way we respond to everything from drugs and alcohol to food and criticism. How could it be otherwise? We are all a kind of genetic experiment, refined through time with an endless series of personal and species-wide successes and failures. If we are variably sensitive to decompression stress, as seems almost certain, then in what myriad of ways might that be playing out? 

It appears that some individuals bubble more and some less on the same profile. Might they also be more or less sensitive to whatever collection of bubbles is generated? Is it possible that we develop different collections of symptoms in response to various types of decompression stress? That we are individually more or less sensitive to similar symptoms? Some of these factors we believe to be true, and some we might suspect to be true. Many others lurk in the background, and all impact our sense of what we might call decompression stress.

Casey Mckinley, Jarrod Jablonski, and George Irvine before a dive with the Woodville Karst Plain Project (WKPP). Photo courtesy of the GUE archives.

Given a world filled with individuals, we must do our best to bridge the divide. The good news is that we do this relatively well, since individual differences matter but are not usually extreme. The tail of the distribution represented by a small number of resistant individuals may well be quite small, meaning that building profiles for resistant individuals might not have much impact and/or might be unreasonably dangerous. Either way, this individual variability is highly relevant and holds promise for the future. The next big advancement in health care will likely involve personalized medicine. Most of us may not live to see the usefulness of these developments in medicine, much less in decompression research, but the process is nonetheless hopeful. For example, research on heart rate variability (HRV) might be one such development, allowing a theoretical computer to monitor your individual stress and adjust the ascent accordingly.

Managing individual susceptibility to a fluctuating range of variables is complicated, especially when many of these variables remain undiscovered, or at least poorly understood. Clearly, all is not lost, as we do a very good job managing the problem of decompression sickness. Depending upon our measure of success, we could say this problem is effectively solved. The fact that we are arguing about the nuances of decompression-stop arrangement and obsessing about relatively small adjustments to our total decompression time speaks to this success. We are likely refining along the margins beyond the point of diminishing returns. However, we should not fool ourselves into thinking that we have all the answers. 

It’s The Data, Stupid

Another way to look at the science of decompression is to say it has mostly been a data-gathering exercise around which we fit slowly evolving boundary conditions. The boundary conditions are prescribed by algorithms and work quite well as long as we stay roughly within their range. It is quite possible we are not capturing any kind of truth about the way things work but rather refining our boundaries as we gather more data. It is true that we make brief forays into bubble dynamics, or strive to define the boundaries with process markers like immune response, but none of these efforts has yet produced a credible change in current practices.

By far the most useful part of decompression research has been the accumulation of data and the refinement of algorithms that capture these outcomes. Ideally, these algorithms would extend well beyond the data they describe, supporting “safe” diving profiles where sparse or even no data exists. Yet, evidence suggests that our models are especially bad in these outlier territories, including very deep and/or very long dives. Most divers with meaningful experience in the 100+ meter range will admit they have little assurance of a clean ascent absent any symptoms of decompression sickness. These aspects further suggest that we are working in the proverbial dark, or at least just barely within the distant illumination of modern knowledge. This appears true at least with respect to specific determinations of cause (mistakes made) and effect (DCS incidence). Attempts to manage this uncertainty are underway among researchers spanning the globe.

Most experts are convinced that bubbles play a role in developing symptoms of decompression sickness, and most of these believe the effect is significant. In this regard, we have perhaps not come so far from Haldane or Buhlmann, who were both well aware of bubbles but lacked the tools to manage their development throughout a diver’s ascent. Likewise, the most recent deep stop studies do not propose that bubbles are irrelevant, only that deep stops appear inefficient and, in at least some cases, can increase risk. On the other side, we have evidence that slower ascents and/or deeper stops can reduce bubbling, but we remain unclear about the degree of importance the bubbling itself represents, especially over the long ascents conducted by technical divers. 

Even a perfect model of bubbles might fail to predict or appreciably reduce decompression sickness, given the many complications in asserting the specific effect of bubbles in a given individual or within a particular injury. We are probably far from a perfect bubble model and perhaps even farther from determining how the wide array of variables might impact different individuals over time. 

Perhaps we can find a way to manage our uncertainty while still progressing our understanding of the likelihood of a given outcome. For good reason, this process is reminiscent of mysteries coming to light in other fields. We seem to be discovering that more knowledge in a given area does not always result in a clearer understanding. Less than 50 years ago, most people were convinced we had “solved” the mystery of elementary particles, bundling the atom in nice packages of three constituents with simple-sounding names. Now the more we learn, the better we measure, the deeper we look, the more unsettling is the complexity. 

Probabilistic Models and Uncertainty

Despite the confusing world around us, we have managed to achieve a high degree of success, and this continues in the face of our uncertainty. This uncertainty can be managed with probabilistic models, an approach already common in other disciplines. This is an interesting and promising field, though it seems unlikely probabilistic decompression models will greatly change our current decompression profiles. This assumption may be wrong but seems appropriate, partly because we already have very low levels of decompression sickness, and partly because we have many supporting dives validating current time/depth profiles.

Jarrod Jablonski towing decompression bottles at the surface during a GUE project dive. Photo courtesy of the GUE archives.

Adjustments like deep stops temporarily promised to reduce decompression time, perhaps by as much as one-third, but those gains failed to materialize when tested more rigorously. This seems likely to remain true, at least as long as we assert a primary objective of maintaining very low DCS risk for the overwhelming majority. There may be a variety of small improvements to be found, but our current approach seems broadly “correct,” at least within the bounds of most active diving profiles.

In some ways, we already manage uncertainty, but we do so indirectly by assigning a very low level of acceptable risk to the profiles that we test. This ultimately impacts the resulting decompression schedule. Using probabilistic models might allow us to permit a higher level of risk, which could conceivably shorten decompression time. However, it remains to be seen whether these models will be released in a way that allows users to accept higher levels of risk. Even if such options become available, I wonder how many divers would use them in an aggressive way. Regardless of these factors, probabilistic models might allow a rational selection of risk, especially for those with the requisite understanding.
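To make this concrete, here is a minimal sketch, in Python, of how probabilistic decompression models frame the problem: risk accrues over the ascent and is converted to a probability of DCS via a survival function. The hazard function and its `scale` constant below are invented purely for illustration; real models calibrate their hazard terms against large sets of recorded trial dives.

```python
import math

def p_dcs(accumulated_risk):
    # Survival-analysis form common to probabilistic decompression models:
    # the probability of DCS grows smoothly with accumulated risk exposure.
    return 1.0 - math.exp(-accumulated_risk)

def accumulated_risk(supersaturations, dt_min, scale=0.001):
    # Toy hazard: proportional to positive tissue supersaturation,
    # integrated over the ascent in dt_min-minute steps. The scale
    # constant is hypothetical; real models fit such parameters to
    # thousands of documented dives.
    return scale * sum(max(s, 0.0) for s in supersaturations) * dt_min
```

A table designer could then pick a target probability, say 2 percent rather than 5 percent, and search for the shortest ascent whose accumulated risk stays below it; that is the sense in which probabilistic models permit a rational selection of risk.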

Current and foreseeable models may not be describing any sort of truth, but they do appear good at determining useful boundaries (time and depth limitations) around which a desired outcome (limited DCS risk) appears most likely. I do not mean to belittle that success in the least. We maintain a high degree of confidence we will not suffer decompression sickness on most dives, and that is no small achievement. Yet, it also brings us full circle and back to the idea that modern-day decompression tables are largely determined by those most susceptible to decompression sickness. 

The NEDU study was stopped when it reached a threshold relating to DCS outcome. In this case, 10 of 198 dives resulted in DCS symptoms. Most were mild, late onset, Type I, but with two cases of rapidly progressing CNS manifestations. Two of the DCS cases were experienced by one individual. Ethical considerations require that a manned diving trial with DCS as an end point be designed to limit unnecessary injury to divers by maintaining a low level of DCS risk. This is a sensible and inevitable outcome of human trials. 

I am not advocating for a change to this strategy, but I am curious how this process affects our understanding of DCS, since we know little about the reactions occurring in more than 90 percent of test subjects. Would these individuals begin experiencing low-level symptoms after longer exposures? How much longer? Would we suddenly start seeing dangerous Type II symptoms in a rapidly escalating percentage of individuals? Such escalating risk seems likely based upon experience with provocative profiles, but the details remain poorly defined.

Team of divers descending into the cave. Photo courtesy of the GUE archives.

Maybe some individuals are more resistant to bubble formation, while others are less sensitive to the bubbles that do form. We can find many cases of prolific bubbling absent DCS symptoms. Meanwhile, DCS symptoms can be present with no detectable bubbles. This is to be expected, as symptoms are at least partly related to where bubbles are located. But these results might also hint at other differences in our response to bubbling. What if some divers form bubbles easily and/or experience high susceptibility to any formed bubbles? How would that knowledge affect any decompression recommendations? Is it conceivable that what works well for one diver, or even the majority of divers, is not optimal for all divers?

All of this ambiguity should lead a thinking person to question the certainty of their pronouncements. We might be inclined to reduce our deep gradient and ascend more quickly from depth, as the developing evidence indicates. But we should also respect the dive buddy who says they get bent when moving more quickly in deep water. We can’t definitively say what works best, but we can say what seems to work well in the majority of cases for the majority of people. For most divers, these debates are largely academic, since the differences in profiles amount to minutes in one direction or another.

The longer the dive, the more technical divers are affected by changes in recommended ascent profiles. Yet, even tech dives of relatively modest lengths show impacts of less than 10 minutes and are usually not worth nearly as much anxiety as one can find in the community. Having said this, it is easy to appreciate the desire to maximize efficiency. I am merely trying to suggest one should not be in a big hurry to change what seemed successful in the past. Those wishing to balance experience with evolving science might begin to raise their deeper gradients in a progressive fashion over time while paying attention to how they and their dive buddies respond. Or a person who perceives success with their current approach might choose to hold tight and make few, if any, changes. I am arguing that we should recognize both opinions have merit and that we should take each perspective into account when working within our team to establish a given ascent schedule.

The one definitive thing we can say about decompression is that it works well in the vast majority of cases, and when it doesn’t work, we probably will not know the exact reason. That reality is unlikely to change in the foreseeable future, although we certainly need to keep trying. A knowledgeable friend of mine once said that if we get bent, it is because we did not do enough decompression. Truer words have never been spoken. 

Personal Note:

I am very curious to hear about your experiences and opinions regarding evolving decompression science. Are most of you convinced that deep stops bring no value? How many think they are dangerous? Do you think I make too much of individual susceptibility, or do you see that in your own experiences? I welcome all points of view, critical and otherwise. Let the games begin :-).


Jarrod is an avid explorer, researcher, author, and instructor who teaches and dives in oceans and caves around the world. Trained as a geologist, Jarrod is the founder and president of GUE and CEO of Halcyon and Extreme Exposure while remaining active in conservation, exploration, and filming projects worldwide. His explorations regularly place him in the most remote locations in the world, including numerous world record cave dives with total immersions near 30 hours. Jarrod is also an author with dozens of publications, including three books.

Education

Understanding Oxygen Toxicity: Part 1 – Looking Back

In this first of a two-part series, Divers Alert Network’s Reilly Fogarty examines the research that has led to our current working understanding of oxygen toxicity. He presents the history of oxygen toxicity research, our current toxicity models, the external risk factors we now understand, and what the future of this research will look like. Mind your PO2s!


By Reilly Fogarty

Header photo courtesy of DAN

Oxygen toxicity is a controversial subject among researchers and an intimidating one for many divers. From the heyday of the “voodoo gas” debates in the early 1990s to the cursory introduction to oxygen-induced seizure evolution that most divers receive in dive courses, the manifestations of prolonged or severe hyperoxia can often seem like a mysterious source of danger. 

Although oxygen can do great harm, its appropriate use can extend divers’ limits and improve the treatment of injured divers. The limits of human exposure are tumultuous, often far greater than theorized, but occasionally, and unpredictably, far less.

Discussions of oxygen toxicity refer primarily to two specific manifestations of symptoms: those affecting the central nervous system (CNS) and those affecting the pulmonary system. Both are correlated (by different models) to exposure to elevated partial pressure of oxygen (PO2). CNS toxicity causes symptoms such as vertigo, twitching, sensations of abnormality, visual or acoustic hallucinations, and convulsions. Pulmonary toxicity primarily results in irritation of the airway and lungs and decline in lung function that can lead to alveolar damage and, ultimately, loss of function. 

The multitude of reactions that takes place in the human body, combined with external risk factors, physiological differences, and differences in application, can make the type and severity of reactions to hyperoxia hugely variable. Combine this with a body of research that has not advanced much since 1986, a small cadre of researchers who study these effects as they pertain to diving, and an even smaller group who perform research available to the public, and efforts to get a better understanding of oxygen toxicity can become an exercise in frustration. 

Piecing together a working understanding involves recognizing where the research began, understanding oxygen toxicity (and model risk for it) now, and considering the factors that make modeling difficult and increase the risk. This article is the first in a two-part series. It will cover the history of oxygen toxicity research, our current models, the external risk factors we understand now, and what the future of this research will look like. 

Early Research

After oxygen was discovered by Carl Scheele in 1772, it took just under a century for researchers to discover that, while the gas is necessary for critical physiological functions, it can be lethal in some environments. The first recorded research on this dates back to 1865, when French physiologist Paul Bert noted that “oxygen at a certain elevation of pressure, becomes formidable, often deadly, for all animal life” (Shykoff, 2019). Just 34 years later, James Lorrain Smith was working with John Scott Haldane in Belfast, researching respiratory physiology, when he noted that oxygen at “up to 41 percent of an atmosphere” was well-tolerated by mice, but at twice that pressure mouse mortality reached 50 percent, and at three times that pressure it was uniformly fatal (Hedley-Whyte, 2008).

Interest in oxygen exposure up to this point was largely medical in nature. Researchers were physiologists and physicians working to understand the mechanics of oxygen metabolism and the treatment of various conditions. World War II and the advent of modern oxygen rebreathers brought the gas into the sights of the military, with both Allied and Axis forces researching the effects of oxygen on divers. Chris Lambertsen developed the Lambertsen Amphibious Respiratory Unit (LARU), a self-contained rebreather system using oxygen and a CO2 absorbent to extend the abilities of U.S. Army soldiers, and personally survived four recorded oxygen-induced seizures. 

Kenneth Donald, a British physician, began work in 1942 to investigate cases of loss of consciousness reported by British Royal Navy divers using similar devices. In approximately 2,000 trials, Donald experimented with PO2 exposures of 1.8 to 3.7 bar, noting that the dangers of oxygen toxicity were “far greater than was previously realized … making diving on pure oxygen below 25 feet of sea water a hazardous gamble” (Shykoff, 2019). While this marked the beginning of the body of research that resembles what we reference now, Donald also noted that “the variation of symptoms even in the same individual, and at times their complete absence before convulsions, constitute[d] a grave menace to the independent oxygen-diver” (Shykoff, 2019). He made note not just of the toxic nature of oxygen but also the enormous variability in symptom onset, even in the same diver from day to day. 

The U.S. Navy Experimental Diving Unit (NEDU), among other groups in the United States and elsewhere, worked to expand that understanding with multiple decades-long studies. These studies looked at CNS toxicity in immersed subjects with a PO2 of less than 1.8 bar from 1947 to 1986; pulmonary toxicity (immersed, with a PO2 of 1.3 to 1.6 bar, and dry from 1.6 to 2 bar) from 2000 to 2015; and whole-body effects of long exposures at a PO2 of 1.3 bar from 2008 until this year.

The Duke Center for Hyperbaric Medicine and Environmental Physiology, the University of Pennsylvania, and numerous other groups have performed concurrent studies on similar topics, with the trend being a focus on understanding how and why divers experience oxygen toxicity symptoms and what the safe limits of oxygen exposure are. Those limits have markedly decreased from their initial proposals, with Butler and Thalmann proposing a limit of 240 minutes on oxygen at or above 25 ft/8 m and 80 minutes at 30 ft/9 m, to the modern recommendation of no greater than 45 minutes at a PO2 of 1.6 (the PO2 of pure oxygen at 20 ft/6 m). 
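The depth/PO2 correspondences quoted above follow from a simple relationship: inspired PO2 is the breathing gas’s oxygen fraction times ambient pressure, and ambient pressure in seawater rises by roughly 1 bar per 10 m. A small sketch (treating ATA and bar as interchangeable for simplicity):

```python
def po2_at_depth(fo2, depth_m):
    # Inspired oxygen partial pressure (bar) in seawater:
    # ambient pressure ~= depth/10 + 1 bar, times the oxygen fraction.
    return fo2 * (depth_m / 10.0 + 1.0)

# Pure oxygen at 20 ft/6 m yields the familiar 1.6 bar figure.
print(po2_at_depth(1.0, 6.0))   # -> 1.6
```

The same function shows why, for example, EAN32 (32 percent oxygen) at 30 m sits near a PO2 of 1.28 bar, comfortably under common working limits.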

Between 1935 and 1986, dozens of studies were performed looking at oxygen toxicity in various facets, with exposures both mild and moderate, in chambers both wet and dry. After 1986, these original hyperbaric studies almost universally ended, and the bulk of research we have to work with comes from before 1986. For the most part, research after this time has been extrapolated from previously recorded data, and, until very recently, lack of funding and industry direction coupled with risk and logistical concerns have hampered original studies from expanding our understanding of oxygen toxicity. 

Primary Toxicity Models

What we’re left with are three primary models to predict the effects of both CNS and pulmonary oxygen toxicity. Two originate in papers published by researchers working out of the Naval Medical Research Institute in Bethesda, Maryland (Harabin et al., 1993, 1995), and one in a 2003 paper from the Israel Naval Medical Institute in Haifa (Arieli, 2003). The Harabin papers propose two models, one of which fits the risk of oxygen toxicity to an exponential model that links the risk of symptom development to partial pressure, time of exposure, and depth (Harabin et al., 1993). The other uses an autocatalytic model to perform a similar risk estimate while accounting for periodic exposure decreases (time spent at a lower PO2). The Arieli model focuses on many of the same variables but attempts to add the effects of metabolic rate and CO2 to the risk prediction. Each of these three models appears to fit the raw data well but fails when compared to data sets in which external factors were controlled.


Comparison of predicted and recorded oxygen toxicity incidents by proposed model (Shykoff, 2019).

The culmination of all this work and modeling is that we now have a reasonable understanding of a few things. First, CNS toxicity is rare at low PO2, so modeling is difficult but risk is similarly low. Second, most current models overestimate risk above a PO2 of 1.7 (Shykoff, 2019). This does not mean that high partial pressures of oxygen are without risk (experience has shown that they do pose significant risk), but the models cannot accurately predict that risk. Finally, although we cannot directly estimate risk based on the data we currently have, most applications should limit PO2 to less than 1.7 bar (Shykoff, 2019).  

NOAA Oxygen Exposure Limits  (NOAA Diving Manual, 2001).

For the majority of divers, the National Oceanic and Atmospheric Administration’s (NOAA) oxygen exposure recommendations remain a conservative and well-respected choice for consideration of limitations. The research we do have appears to show that these exposure limits are safe in the majority of applications, and despite the controversy over risk modeling and variability in symptom evolution, planning dives using relatively conservative exposures such as those found in the NOAA table provides some measure of safety. 
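As an illustration of how such limits are used in planning, the sketch below implements the common “CNS oxygen clock” bookkeeping: look up the single-exposure limit for the planned PO2 and track the fraction of it consumed. The limit values are the commonly quoted NOAA single-exposure figures; verify them against the current NOAA Diving Manual before relying on them.

```python
# Commonly quoted NOAA single-exposure oxygen limits (minutes at each PO2).
# These values are reproduced from memory for illustration; check them
# against the current NOAA Diving Manual before use.
NOAA_SINGLE_EXPOSURE_MIN = {
    1.6: 45, 1.5: 120, 1.4: 150, 1.3: 180,
    1.2: 210, 1.1: 240, 1.0: 300,
}

def cns_fraction(po2, minutes):
    # "Oxygen clock": fraction of the allowed single exposure consumed.
    # Round PO2 up to the next tabulated value to stay conservative.
    for limit_po2 in sorted(NOAA_SINGLE_EXPOSURE_MIN):
        if po2 <= limit_po2:
            return minutes / NOAA_SINGLE_EXPOSURE_MIN[limit_po2]
    raise ValueError("PO2 above tabulated range (>1.6 bar)")
```

For example, 30 minutes at a PO2 of 1.4 consumes 30/150, or 20 percent, of the single-exposure allowance; the 45-minute limit at 1.6 quoted earlier in this article consumes the full 100 percent.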

The crux of the issue in understanding oxygen toxicity appears to be the lack of a definitive mechanism for the contributing factors that play into risk predictions. There is enormous variability of response to hyperoxia among individuals, even between the same individuals on different days. There are multiple potential pathways for injury and distinct differences between moderate and high PO2 exposures, and the extent of injuries and changes in the body are both difficult to measure and not yet fully understood.

Interested in the factors that play into oxygen toxicity risk and what the future of this research holds? We’ll cover that and more in the second part of this article in next month’s edition of InDepth.

Additional Resources:

  1. Shykoff, B. (2019). Oxygen toxicity: Existing models, existing data. Presented at the EUBS 2019 proceedings.
  2. Hedley-Whyte, J. (2008). Pulmonary oxygen toxicity: Investigation and mentoring. The Ulster Medical Journal, 77(1), 39-42.
  3. Harabin, A. L., Survanshi, S. S., & Homer, L. D. (1995). A model for predicting central nervous system oxygen toxicity from hyperbaric oxygen exposures in humans.
  4. Harabin, A. L., & Survanshi, S. S. (1993). A statistical analysis of recent Naval Experimental Diving Unit (NEDU) single-depth human exposures to 100% oxygen at pressure. Retrieved from https://apps.dtic.mil/dtic/tr/fulltext/u2/a273488.pdf
  5. Arieli, R. (2003). Model of CNS O2 toxicity in complex dives with varied metabolic rates and inspired CO2 levels.
  6. NOAA Diving Manual. (2001).

Two Fun (Math) Things:

Calculator for Estimating the Risk of Pulmonary Oxygen Toxicity by Dr. Barbara Shykoff

The Theoretical Diver: Calculating Oxygen CNS toxicity


Reilly Fogarty is a team leader for risk mitigation initiatives at Divers Alert Network (DAN). When not working on safety programs for DAN, he can be found running technical charters and teaching rebreather diving in Gloucester, MA. Reilly is a USCG licensed captain whose professional background includes surgical and wilderness emergency medicine as well as dive shop management.


Historical records reveal the Greek philosopher Aristotle describing the use of a snorkel, relating the occurrence of ruptured eardrums, and outlining the use of the first diving bell by Alexander the Great.

Summary by Mitchell SJ, Doolette DJ. Extreme scuba diving medicine.

“The few studies available at the time of adoption of deep stops by technical divers [53,55] have been interpreted to support this notion. The earliest of these papers, an observational study of the practices of pearl divers in the Torres Strait of Australia [53], often cited as unqualified support for deep stops, is difficult to obtain and worth summarizing here. These pearl divers performed air dives to depths up to 80 msw followed by empirically-derived decompression schedules that had deeper stops and were somewhat shorter than accepted navy decompression schedules. Thirteen depth/time recordings were made of such dives, and these dives resulted in 6 cases of DCS (46% incidence). The remaining data was a count of dives performed from four fishing vessels over a two month period and these 468 man-dives resulted in 31 reported cases of DCS (7% incidence). It takes a certain cognitive dissonance to interpret these high incidences of DCS as supporting a deep stops approach.”

In: Feletti F, editor. Extreme sports medicine. Basel: Springer International Publishing; 2016. p. 313-33.

Note: Data regarding the 468 man dives was collected by interviewing Japanese surface tenders and looking at their (Japanese) logs, relying mainly on their memories for what the decompression profiles were, and how many DCS cases occurred.

LeMessurier DH, Hills BA.
Decompression sickness: a thermodynamic approach arising from a study of Torres Strait diving techniques.
Hvalradet Skrifter 1965;48:84.

Heart rate variability is the physiological phenomenon of variation in the time interval between heartbeats. It is measured by the variation in the beat-to-beat interval. Other terms used include: "cycle length variability," "RR variability," and "heart period variability".

https://www.health.harvard.edu/blog/heart-rate-variability-new-way-track-well-2017112212789
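As an illustration of how beat-to-beat variation is quantified, the short sketch below computes two common HRV metrics, SDNN (the standard deviation of RR intervals) and RMSSD (the root mean square of successive differences). The RR intervals used are made-up example values, not real patient data.

```python
import statistics

def sdnn(rr_intervals_ms):
    """Standard deviation of beat-to-beat (RR) intervals, in milliseconds."""
    return statistics.stdev(rr_intervals_ms)

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical RR intervals in milliseconds (illustrative only)
rr = [812, 790, 835, 801, 824, 793, 818]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

Larger values of either metric indicate greater beat-to-beat variability.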

Statistics includes the process of finding patterns in the real world using data, such as the incidence of injury for a given time at a given depth.

When solving statistical problems, it is often helpful to make models of real world situations based on observations of data, on assumptions about the context, and on theoretical probability. The model can then be used to make predictions, test assumptions, and solve problems.

A deterministic model does not include elements of randomness. Every time you run the model with the same initial conditions, you will get the same results. Most simple mathematical models of everyday situations are deterministic; for example, calculating the return on a loan with a given interest rate over a given number of years. Simple statistical statements that do not mention or consider variation could be viewed as deterministic models.

A probabilistic model includes elements of randomness. Every time you run the model, you are likely to get different results, even with the same initial conditions. A probabilistic model is one which incorporates some aspect of random variation. Deterministic models and probabilistic models for the same situation can give very different results.
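The contrast can be made concrete with the loan example. In the sketch below (all figures hypothetical), the deterministic version returns the same value on every run, while the probabilistic version draws each year's interest rate at random and so gives different results from run to run:

```python
import random

def deterministic_loan_return(principal, rate, years):
    # No randomness: identical inputs always produce identical output.
    return principal * (1 + rate) ** years

def probabilistic_loan_return(principal, mean_rate, sd, years, seed=None):
    # Each year's rate is drawn from a normal distribution,
    # so repeated runs give different results.
    rng = random.Random(seed)
    value = principal
    for _ in range(years):
        value *= 1 + rng.gauss(mean_rate, sd)
    return value

print(deterministic_loan_return(1000, 0.05, 10))        # same every run
print(probabilistic_loan_return(1000, 0.05, 0.02, 10))  # varies run to run
```

Running the probabilistic model many times and looking at the spread of outcomes is what lets it answer questions a deterministic model cannot, such as "how likely is a loss?"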

Probabilistic decompression models are designed to calculate the risk (or probability) of decompression sickness (DCS) occurring on a given decompression profile. These models can vary the decompression stop depths and times to arrive at a final decompression schedule that assumes a specified probability of DCS occurring. The model does this while minimizing the total decompression time.

Probabilistic models allow selection of risk in ways that support rational choices. As with most tools, this power can also be used irrationally, though the tool should not be blamed for such abuse. For example, one might do three dives a day for five days, each with an established one percent risk. The probability of at least one case of DCS in that series is 1-(1-0.01)^15=0.14.

Alternatively, a diver might select one big dive with all the risk captured in that one dive, i.e., 14%, benefiting from the associated faster decompression. The extent to which modelers might allow users to make those choices remains an open question. Meanwhile, the consequences of accepting higher or even very high risk remain largely unknown. For example, to what extent would a 20% risk of DCS make me vulnerable to serious forms of DCS? Should I be allowed to take those chances if I choose? If so, what sort of disclaimer is needed and/or what should be required of me to ensure I understand the risk I am taking?
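The cumulative-risk arithmetic used in this example is simple enough to check in a few lines, assuming the dives are independent events:

```python
def cumulative_dcs_risk(per_dive_risk, n_dives):
    """Probability of at least one DCS event across n independent dives,
    each carrying the same per-dive risk."""
    return 1 - (1 - per_dive_risk) ** n_dives

# Fifteen dives (three per day for five days) at 1% risk each
print(round(cumulative_dcs_risk(0.01, 15), 2))  # → 0.14
```

Note that the independence assumption is itself a modeling choice; repetitive diving may not leave each dive's risk unchanged.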

More on Probabilistic Models:
https://en.wikipedia.org/wiki/Decompression_theory#Probabilistic_models