Hollnagel's taxonomy refines the defence-in-depth idea by distinguishing four qualitatively different barrier systems: physical, functional, symbolic and incorporeal. A barrier function is what needs to be achieved (prevent, contain, protect, restore); a barrier system is the concrete means of achieving it. Well-designed defences-in-depth mix barrier systems so that no single failure class — material, informational, or normative — can defeat the whole.
In Barriers and Accident Prevention, Hollnagel separates the purpose of a barrier from the means used to deliver it. The purpose is captured by a barrier function: to prevent an event, contain its energy, protect targets from consequences, or enable restoration. The means is the barrier system that implements the function, and the four systems differ in how they act and in how they fail.
Physical (material) barriers stop energy, mass or people directly — firewalls, containment vessels, runway end safety areas, cockpit doors. Functional (active or dynamic) barriers set logical or temporal preconditions — interlocks, TCAS resolution advisories, engine-start inhibits, password-gated commands. Symbolic barriers rely on perception and interpretation by an agent — signs, colour codings, callouts, alarms, placards. Incorporeal barriers exist only in rules, norms, and culture — regulations, SOPs, licences, just-culture expectations. Each has different reliability characteristics and different failure modes, which is why mixing systems is essential to robust defence-in-depth.
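The function/system split can be made concrete with a small data model. The following Python sketch is illustrative only: the enum names follow Hollnagel's four systems and four functions, but the example controls and their pairings are assumptions chosen from the examples above, not a canonical mapping.

```python
from enum import Enum

class BarrierFunction(Enum):
    """What the barrier must achieve (purpose)."""
    PREVENT = "prevent an event"
    CONTAIN = "contain its energy"
    PROTECT = "protect targets from consequences"
    RESTORE = "enable restoration"

class BarrierSystem(Enum):
    """How the barrier achieves it (means)."""
    PHYSICAL = "stops energy, mass or people directly"
    FUNCTIONAL = "sets logical or temporal preconditions"
    SYMBOLIC = "relies on perception and interpretation"
    INCORPOREAL = "exists only in rules, norms and culture"

# A concrete control realises a barrier function via a barrier system.
# Pairings below are illustrative examples, not an exhaustive mapping.
controls = [
    ("containment vessel",    BarrierSystem.PHYSICAL,    BarrierFunction.CONTAIN),
    ("engine-start interlock", BarrierSystem.FUNCTIONAL, BarrierFunction.PREVENT),
    ("cockpit alarm",          BarrierSystem.SYMBOLIC,   BarrierFunction.PROTECT),
    ("emergency-response SOP", BarrierSystem.INCORPOREAL, BarrierFunction.RESTORE),
]

for name, system, function in controls:
    print(f"{name}: {system.name} system realising the {function.name} function")
```

Separating the two enums makes the analyst's question explicit: given a named control, which function does it claim to deliver, and through which system?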
Distinguishing function (purpose) from system (means) forces analysts to ask whether a given control actually delivers the intended barrier function — a test many named "controls" quietly fail.
The split also makes it obvious when a system relies on a single barrier class: a stack of four SOPs looks like depth but is only one kind of barrier, with every layer vulnerable to the same normative drift.
It gives designers a structured palette: if a physical barrier is impractical, consider functional, then symbolic, then incorporeal — and make the trade-off in reliability explicit.
It feeds naturally into FRAM and STAMP: functions are the unit of analysis, barrier systems are the realisations, and their coupling can be modelled explicitly rather than assumed.
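The single-class vulnerability above can be checked mechanically. This is a minimal sketch, not a real analysis tool: the function name and the representation of a control as a (name, system) pair are assumptions for illustration.

```python
from collections import Counter

def depth_report(stack):
    """Summarise how many distinct barrier systems a control stack uses.

    A stack whose layers all share one barrier system (e.g. four SOPs,
    all incorporeal) has nominal depth but one common failure class.
    """
    systems = Counter(system for _, system in stack)
    return {
        "layers": sum(systems.values()),
        "distinct_systems": len(systems),
        "single_class": len(systems) == 1,
    }

# Four SOPs: looks like defence-in-depth, but every layer is incorporeal.
sop_stack = [(f"SOP-{i}", "incorporeal") for i in range(1, 5)]
print(depth_report(sop_stack))
# {'layers': 4, 'distinct_systems': 1, 'single_class': True}

# A mixed stack spreads the same four layers across all four systems.
mixed_stack = [("vessel", "physical"), ("interlock", "functional"),
               ("alarm", "symbolic"), ("SOP", "incorporeal")]
print(depth_report(mixed_stack))
# {'layers': 4, 'distinct_systems': 4, 'single_class': False}
```

Both stacks have four layers; only the counter over systems distinguishes genuine diversity from a monoculture.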
Real controls often combine classes — an alarm with an interlock is both symbolic and functional — so strict typing can become contested and reduce inter-rater reliability.
The taxonomy is qualitative; quantifying the failure probability of a symbolic or incorporeal barrier is notoriously difficult, and LOPA-style credit rules are context-specific.
Like other defence-in-depth models, it can underplay the way barriers are actively maintained and degraded by everyday work — a concern Hollnagel himself later addressed with FRAM.
Treating safety culture as an incorporeal barrier is conceptually useful but hard to operationalise: cultures cannot be simply installed, tested or retired like a physical system.
Hollnagel reframes defences-in-depth as a portfolio of four barrier systems — physical, functional, symbolic, incorporeal — each realising prevent / contain / protect / restore functions. Robust safety mixes systems so that no single failure mode defeats the whole.
Hollnagel, E. (2004). Barriers and accident prevention. Ashgate.
Hollnagel, E. (1999). Accidents and barriers. In J.-M. Hoc, P. C. Cacciabue, & E. Hollnagel (Eds.), Expertise and technology (pp. 175–197). Lawrence Erlbaum.
Sklet, S. (2006). Safety barriers: Definition, classification, and performance. Journal of Loss Prevention in the Process Industries, 19(5), 494–506.
de Dianous, V., & Fiévez, C. (2006). ARAMIS project: A more explicit demonstration of risk control through the use of bow-tie diagrams and the evaluation of safety barrier performance. Journal of Hazardous Materials, 130(3), 220–233.
Reason, J. (1997). Managing the risks of organizational accidents. Ashgate.
International Civil Aviation Organization. (2018). Safety management manual (Doc 9859, 4th ed.). ICAO.
Hollnagel, E., Woods, D. D., & Leveson, N. (Eds.). (2006). Resilience engineering: Concepts and precepts. Ashgate.
Center for Chemical Process Safety. (2018). Bow ties in risk management. Wiley.
Energy Institute. (2019). Guidance on human factors safety critical task analysis (2nd ed.).
de Ruijter, A., & Guldenmund, F. (2016). The bowtie method: A review. Safety Science, 88, 211–218.