James Reason · University of Manchester · Aviation Safety Frameworks
Reason's Swiss Cheese Model pictures defences as slices of cheese, each with holes. Holes are created by active failures at the sharp end and latent conditions lying dormant in the system. An accident occurs when holes in successive slices line up and a hazard trajectory passes through every defence.
Overview of the framework
James Reason introduced the model in Human Error (1990) and developed it fully in Managing the Risks of Organizational Accidents (1997). It distinguishes active failures — unsafe acts made at the sharp end, visible and proximal — from latent conditions, upstream decisions by designers, managers, and regulators that lie dormant in the system until combined with local triggers. Defences come in four layers (awareness, detection, protection, recovery) and include engineered systems, procedures, training, and culture. Every defence has holes; holes move around and sometimes align. The task of safety management is not to eliminate holes (impossible) but to keep them small, dispersed, and observable (Reason, 1997, 2000).
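The defence-in-depth logic behind the metaphor can be sketched numerically: if each slice blocks the hazard independently, the breach probability is the product of the per-slice hole probabilities, while a latent condition that opens correlated holes across slices erodes most of that benefit. The following is a minimal illustrative sketch, not part of Reason's work; all probabilities are hypothetical.

```python
import random

def breach_probability(hole_probs):
    """Chance a hazard passes every independent defence:
    the product of the per-slice hole probabilities."""
    p = 1.0
    for hp in hole_probs:
        p *= hp
    return p

# Four independent defences, each with a 10% hole (hypothetical values).
independent = breach_probability([0.1, 0.1, 0.1, 0.1])  # = 0.1**4, about 1e-4

def simulate_correlated(hole_probs, latent_prob, trials=100_000, seed=1):
    """Monte Carlo: a latent condition (probability latent_prob) opens a
    hole in *every* slice at once, correlating the defences."""
    rng = random.Random(seed)
    breaches = 0
    for _ in range(trials):
        latent = rng.random() < latent_prob
        if latent or all(rng.random() < hp for hp in hole_probs):
            breaches += 1
    return breaches / trials

correlated = simulate_correlated([0.1] * 4, latent_prob=0.01)

print(f"independent defences:     {independent:.4f}")
print(f"with 1% latent condition: {correlated:.4f}")
```

Even a rare shared latent condition dominates the breach rate, which is why the model directs attention upstream rather than at individual slices.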
The model was adapted for aviation by Shappell and Wiegmann (2000) into the Human Factors Analysis and Classification System (HFACS), which operationalises the four tiers as unsafe acts, preconditions, unsafe supervision, and organisational influences — now a standard taxonomy in civil and military investigation. A 2006 EUROCONTROL study by Reason, Hollnagel and Paries revisited the model's limits and recommended complementing it with resilience-based views.
Figure 1 · Swiss Cheese — hazard trajectory passes only when holes in successive defences momentarily line up.
When to use it
Typical applications
Structuring accident and incident investigations — which defence failed at each tier?
Structuring accident and incident investigations — which defence failed at each tier?
Communicating to boards why good organisations still have accidents.
Designing multi-layered defences rather than relying on a single barrier.
Training safety leaders to see beyond individual blame.
Aviation relevance
HFACS taxonomy (Shappell & Wiegmann, 2000) used by USAF, USN, US Army, FAA and many civil investigators.
Implicit in ICAO Annex 13 accident-investigation framework for causal and contributory factors.
Foundation for Threat & Error Management and many CRM curricula.
Integrates with SMS hazard-register terminology (active failure, latent condition, defence).
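To show how the four HFACS tiers structure investigation findings in practice, here is a small tagging sketch. The tier names follow Shappell and Wiegmann (2000); the example findings are invented for illustration, not drawn from any real investigation.

```python
from enum import Enum

class HfacsTier(Enum):
    """The four HFACS tiers, from the sharp end to the blunt end."""
    UNSAFE_ACTS = 1
    PRECONDITIONS = 2
    UNSAFE_SUPERVISION = 3
    ORGANISATIONAL_INFLUENCES = 4

# Hypothetical investigation findings, each tagged with its tier.
findings = [
    ("Crew continued an unstabilised approach", HfacsTier.UNSAFE_ACTS),
    ("Fatigue after an extended duty day", HfacsTier.PRECONDITIONS),
    ("Pairing of two low-experience pilots approved", HfacsTier.UNSAFE_SUPERVISION),
    ("Schedule pressure from on-time targets", HfacsTier.ORGANISATIONAL_INFLUENCES),
]

# Present findings blunt end first, mirroring a latent-to-active narrative.
for tier in sorted(HfacsTier, key=lambda t: t.value, reverse=True):
    tagged = [text for text, t in findings if t is tier]
    print(f"{tier.name}: {tagged}")
```

Grouping findings this way forces an investigation to ask the model's central question at every tier, rather than stopping at the unsafe act.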
Benefits
Intuitive image. The cheese metaphor is memorable and widely recognised across safety-critical industries.
Names latent conditions. Shifts attention from sharp-end operators to upstream design, management, and regulation.
Supports defence-in-depth thinking. Encourages multiple, independent barriers rather than a single heroic one.
Basis for HFACS. Provides the taxonomic spine for HFACS, now a standard investigation tool.
Compatible with just culture. Distinguishes unsafe acts from the conditions that shape them, underpinning fair response models.
Bridges the operational and the organisational. Links pilot actions to boardroom decisions in one picture.
Widely adopted. Referenced by ICAO, EASA, FAA, ACSF, and most airline investigation manuals.
Limitations
Metaphor, not method. The model inspires thinking; it is not a procedure. Operationalising it requires a separate taxonomy (e.g., HFACS).
Linear causality. Trajectories imply one-way flows through discrete layers; complex socio-technical interactions are better captured by STAMP or FRAM.
Treats humans as hole-producers. Risks reinforcing the view that humans are a liability rather than a source of everyday adaptation (cf. Safety-II).
Weak on non-accident scenarios. Does not explain how day-to-day success happens, only how failure does.
Tiers are vague. The four layers map neatly to HFACS but less neatly to other sectors and safety cases.
Revisited in 2006. Reason, Hollnagel, and Paries acknowledged the model's limits and recommended complementing it with resilience-based approaches.
In short
The Swiss Cheese Model is the most influential safety metaphor of the past forty years. Use it to explain defence-in-depth and latent conditions to non-specialists — and pair it with HFACS for investigation, with STAMP/FRAM for complex systems, and with Safety-II when you need to understand success as well as failure.
References (APA 7)
Reason, J. (1990). Human error. Cambridge University Press.
Reason, J. (1997). Managing the risks of organizational accidents. Ashgate.
Reason, J. (2000). Human error: Models and management. BMJ, 320(7237), 768–770.
Reason, J., Hollnagel, E., & Paries, J. (2006). Revisiting the Swiss cheese model of accidents (EEC Note No. 13/06). EUROCONTROL Experimental Centre.
Shappell, S. A., & Wiegmann, D. A. (2000). The Human Factors Analysis and Classification System — HFACS (DOT/FAA/AM-00/7). Federal Aviation Administration.
Wiegmann, D. A., & Shappell, S. A. (2003). A human error approach to aviation accident analysis. Ashgate.
Further reading
Reason, J. (2008). The human contribution: Unsafe acts, accidents and heroic recoveries. Ashgate.
Dekker, S. (2014). The field guide to understanding 'human error' (3rd ed.). Ashgate.
Leveson, N. (2011). Engineering a safer world: Systems thinking applied to safety. MIT Press. [For the systems-theoretic critique of the model.]
Hollnagel, E. (2014). Safety-I and Safety-II: The past and future of safety management. Ashgate.
Perneger, T. V. (2005). The Swiss cheese model of safety incidents: Are there holes in the metaphor? BMC Health Services Research, 5(1), 71.