Are Accidents caused by Cheese, Chains, Lions or Ducks?
We've all heard about Swiss Cheese models and Error Chains, but are these frameworks fit for purpose? Or maybe the answer lies with Lions or Ducks?
Accidents in complex systems occur through an accumulation of multiple factors and failures. Several models have been created to describe these systems and to help visualise the causal factors in accidents. The two most famous are:
- Swiss Cheese Model
- Error Chain
When we started creating our content for Use Before Flight, we had to decide which of these models to use to standardise all our content under one system. As we added more and more diverse content from different case studies, accident reports and news articles, we struggled to achieve this standardisation. So could we come up with something better? First we had to look into the issues we were finding with the existing models.
Swiss Cheese Model
This has long been a favourite of the aviation industry and is used prevalently on Crew Resource Management training days.
To describe this complex system, James Reason famously developed the Swiss Cheese model, representing these factors and failures as holes in slices of cheese, with the cheese itself being the barriers in the system. If all the holes in the cheese line up, an accident occurs. You’re effectively seeking to have a cheddar rather than a Swiss variety; no holes = no chance of an accident!
There are various advantages to this model: in hindsight you can quite clearly attribute a barrier and a hole to each layer to visualise the evolution of the accident. But some people much cleverer than me have summarised nicely the issues we were facing in integrating this model into our content.
Luxhoj & Kauffeld (2003) write that:
One of the disadvantages of the Reason model is that it does not account for the detailed interrelationships among causal factors. Without these distinct linkages, the results are too vague to be of significant practical use.
Dekker (2002, p. 119-120) adds that:
The layers of defence are not static or constant, and not independent of each other either. They can interact, support or erode one another. The Swiss cheese analogy is useful to think about the complexity of failure, and, conversely, about the effort it takes to make and keep a system safe. It can also help structure your search for distal contributors to the mishap. But the analogy itself does not explain:
- Where the holes are or what they consist of
- Why the holes are there in the first place
- Why the holes change over time, both in size and location
- How the holes get to line up to produce an accident
Finally, Shappell & Wiegmann (2000) note that:
In many ways, Reason’s ‘Swiss cheese’ model of accident causation has revolutionized common views of accident causation. Unfortunately, however, it is simply a theory with a few details on how to apply it in a real-world setting. In other words, the theory never defines what the ‘holes in the cheese’ really are, at least within the context of everyday operations.
For us, we wanted something that wasn’t just available in hindsight; otherwise, how would we ever move from a reactive safety system to one that is proactive and possibly predictive?
Error Chain
An Error Chain is a different approach to modelling accidents, in which each accident is made up of a set of errors originating in operational and human factors. The idea is that all of them contribute, step by step, to the accident, and if you break the chain at any stage, you stop the accident occurring.
From a pilot’s perspective, this makes it easier to visualise their role in an accident, usually as the last link in the chain, and it helps them to see how they can proactively prevent an accident from occurring. However, accidents are not caused only by errors; there are also external factors that can be the tipping point for an accident. These could be a sudden gust of wind, a flock of birds or a failure of an aircraft system. The error chain model doesn’t include such factors, which can be fundamental to an accident.
A classic example of this is the Hudson River ditching. In this accident, there were effectively no errors that led to the outcome of landing an A320 on the river in the middle of New York. The crew (both flightdeck and cabin crew) did an exemplary job of safely negotiating the problem of ingesting multiple birds into both engines. Trying to discuss this particular event in terms of an error chain is therefore quite difficult!
Lions & Ducks
The idea finally came to me after a chat with my dad.
Many pilots I talk to on the flightdeck got into flying because their mother or father was already a pilot or worked in the Air Force or the airlines. However, in my case, my father and I started our flying careers at the same time; I joined the Air Cadets while my father completed his microlight flying licence.
Later, as I started flying with the RAF at university, my father completed his instructor rating; then, as I got my first job as an Airbus First Officer with GB Airways, my father started his own flying school.
We therefore shared and discussed a lot of learning points, near misses and experiences along the way as we followed our parallel but different flying paths. On discussing the issue of how to best describe and analyse accidents, my dad said:
“Son, you don’t get eaten by a lion, you get nibbled to death by ducks.”
The penny dropped: this is a superb way to summarise accidents and incidents in general, and it also allows for a precise analysis of each accident by identifying the ducks that caused it. Thinking back on all the incidents I have been involved with, either as a participant or a spectator, not one was caused by a single large event (the lion). Each and every one was caused by a number of smaller, sometimes imperceptible threats that all added up to cause something untoward (the ducks).
This methodology also has a number of key benefits over the Swiss Cheese Model and Error Chain:
- Ducks can come at you one at a time, interact to become bigger than the sum of their parts, or even trip over each other and cancel each other out
- Ducks can change over time, getting bigger or stronger, or changing location, which represents the dynamic nature of the flightdeck environment better than static cheese or chains
- Ducks represent the external threats that can then lead us to make an error, thereby identifying the root cause of accidents and not just the symptoms
- They are easy to identify before, during and after an accident, and therefore contribute more to a proactive or predictive safety system
Throughout the site we use duck bullet-points to clearly identify the individual threats in the case studies, helping us to be better at noticing, quantifying, qualifying and then managing each threat.
We also use lightbulb bullet-points to demonstrate the key learning points from each of these identified threats.
We spent a lot of time thinking this through (as you can see from this article) so we hope you find it useful!
- Dekker, S. (2002). The Field Guide to Human Error Investigations. Ashgate.
- Luxhoj & Kauffeld (2003). The Rutgers Scholar, vol. 5.
- Shappell, S., & Wiegmann, D. (2000). The Human Factors Analysis and Classification System—HFACS. FAA, US Department of Transportation, p. 2.