Monte Carlo methods estimate properties of an uncertain system by sampling its inputs from probability distributions, propagating each sample through the system model, and aggregating the outputs. Where analytical methods can give a single point estimate of a risk metric, Monte Carlo gives the full output distribution — so analysts can report not just expected loss but tail risk, percentiles, conditional value at risk, and sensitivity to each input.
Monte Carlo simulation was developed by Stanisław Ulam, John von Neumann and Nicholas Metropolis at Los Alamos in 1946–49 while running weapons-physics calculations on early electronic computers; Metropolis suggested the name, after the Monaco casino district. It rests on the law of large numbers: as the number of samples grows, sample averages converge to the true expectations, and the empirical distribution of model outputs converges to the true output distribution. The method is general: it applies to any deterministic or stochastic computation too complex for an analytical solution, and it is now ubiquitous in physics, finance, project management, climate modelling, transport demand, reliability engineering, and probabilistic safety assessment.
A typical workflow: (1) define the system model Y = f(X₁, ..., Xₙ); (2) characterise each input by a probability distribution and capture dependencies (correlation matrices, copulas); (3) sample input vectors using pseudo-random or quasi-random sequences (e.g. Latin hypercube sampling, Sobol sequences); (4) evaluate the model for every sample; (5) aggregate to produce the empirical distribution of Y, summary statistics (mean, variance, percentiles, exceedance probabilities) and sensitivity measures (Spearman rank correlation, Sobol indices). Variance-reduction techniques (antithetic variates, control variates, importance sampling, stratified sampling) reach the necessary tail precision with fewer samples. Markov chain Monte Carlo (MCMC) extends the method to sampling from posterior distributions in Bayesian inference.
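The five steps above can be sketched in a few lines. The toy cost model, the three input distributions, the fixed seed and the exceedance threshold of 25 units are all illustrative assumptions, not taken from any particular analysis:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed, archived seed (step 3)
n = 10_000

# Step 2: characterise each input by a distribution (illustrative choices)
x1 = rng.normal(10.0, 2.0, n)     # hypothetical demand
x2 = rng.lognormal(0.0, 0.5, n)   # hypothetical unit cost
x3 = rng.uniform(0.8, 1.2, n)     # hypothetical efficiency factor

# Steps 1 and 4: evaluate the system model Y = f(X1, X2, X3) for every sample
y = x1 * x2 / x3

# Step 5: empirical distribution of Y -> statistics and exceedance probability
mean = y.mean()
p95 = np.percentile(y, 95)
p_exceed = (y > 25.0).mean()      # P(Y > 25); threshold is illustrative

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra = a.argsort().argsort()
    rb = b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

# Sample-based sensitivity: which uncertain input drives the output?
sens = {name: spearman(x, y) for name, x in
        {"x1": x1, "x2": x2, "x3": x3}.items()}
```

Here the rank correlations show x2 dominating the output spread (its lognormal tail is the widest input), and x3 correlating negatively because it appears in the denominator.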
Replaces a point estimate with an empirical distribution — letting analysts quote tail risk, exceedance probability and confidence bands rather than misleadingly precise means.
Works for any model that can be evaluated computationally; no closed-form analytical tractability required, so it scales to genuinely complex aerospace and ATM problems.
Sample-based sensitivity measures (Spearman, Sobol, regression-based) reveal which uncertain inputs dominate the output distribution and where data investment will pay off.
Pairs naturally with FTA, ETA, Bayesian networks and project-management models — a Monte Carlo wrapper around an existing model often unlocks honest uncertainty quantification.
Output distributions are only as good as the input distributions and dependence structure; defaulting to convenient (often Normal) priors can flatten genuine tail risk.
Standard sampling needs O(1/ε²) samples for relative precision ε — inefficient for very rare events; importance sampling or subset simulation are usually required for 10⁻⁶ – 10⁻⁹ regions.
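As a concrete illustration of why reweighted sampling matters in the tail, the sketch below estimates P(Z > 4) for a standard normal variate (about 3.2 × 10⁻⁵), first naively and then with an importance-sampling proposal shifted to the threshold; the threshold and sample size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0   # illustrative rare-event threshold; P(Z > 4) ≈ 3.2e-5

# Naive Monte Carlo: the expected hit count is only ~3 in 100k samples,
# so the estimate is dominated by a handful of hits (often zero)
z = rng.standard_normal(n)
p_naive = (z > threshold).mean()

# Importance sampling: draw from the shifted proposal N(threshold, 1),
# then correct each hit by the likelihood ratio
# phi(z) / q(z) = exp(threshold**2 / 2 - threshold * z)
z_is = rng.normal(threshold, 1.0, n)
weights = np.exp(0.5 * threshold**2 - threshold * z_is)
p_is = float(np.mean((z_is > threshold) * weights))
```

At the same sample size the reweighted estimate typically lands within about one percent of the true tail probability, where the naive estimate has a relative error of tens of percent.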
For high-fidelity models (CFD, FEA, agent-based ATM) each evaluation may take minutes; surrogate models, polynomial chaos and emulators are often necessary to make Monte Carlo tractable.
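A minimal sketch of the surrogate idea: fit a cheap approximation to the expensive model on a small design of experiments, then run the Monte Carlo loop on the fit. The stand-in model, training range and polynomial degree below are illustrative assumptions (a crude cousin of polynomial chaos, not a full emulator):

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly solver; a real CFD/FEA call might take minutes."""
    return np.sin(x) + 0.1 * x**2

# Small design of experiments: 30 "expensive" runs over the input range
x_train = np.linspace(-3.0, 3.0, 30)
y_train = expensive_model(x_train)

# Cheap surrogate: a degree-6 polynomial least-squares fit
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=6))

# Monte Carlo on the surrogate: a million evaluations cost almost nothing.
# Caveat: the surrogate is only trusted inside the training range [-3, 3].
rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 1_000_000)
p95 = float(np.percentile(surrogate(x), 95))
```

The design choice to verify is surrogate error versus sampling error: the fit must be validated against held-out expensive runs before its percentiles are quoted.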
Pseudo-random seeds, software versions and sampling schemes must be archived; otherwise certification reviewers cannot verify the run, and minor implementation changes can shift tail estimates noticeably.
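A lightweight way to meet that archiving requirement is to write a run manifest next to the results; the field names and seed value below are illustrative, not a standard:

```python
import json
import platform
import numpy as np

# Hypothetical run manifest; fields and values are illustrative
run_record = {
    "seed": 20240501,
    "bit_generator": "PCG64",                 # default_rng's underlying generator
    "numpy_version": np.__version__,
    "python_version": platform.python_version(),
    "sampling_scheme": "plain Monte Carlo",   # or "Latin hypercube", "Sobol", ...
    "n_samples": 1_000_000,
}

rng = np.random.default_rng(run_record["seed"])
# ... run the simulation with `rng` and save the outputs ...

# Archive the manifest alongside the results so a reviewer can
# reconstruct and re-run the exact analysis
manifest = json.dumps(run_record, indent=2)
```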
Monte Carlo turns any deterministic risk model into a probability machine: sample the inputs, run the model, and read off percentiles and sensitivities. The discipline is in the input distributions, the dependence structure, and choosing variance-reduction methods that reach the tail you actually care about.
Metropolis, N., & Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44(247), 335–341.
Robert, C. P., & Casella, G. (2004). Monte Carlo statistical methods (2nd ed.). Springer.
Rubinstein, R. Y., & Kroese, D. P. (2017). Simulation and the Monte Carlo method (3rd ed.). Wiley.
Vose, D. (2008). Risk analysis: A quantitative guide (3rd ed.). Wiley.
Helton, J. C., & Davis, F. J. (2003). Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81(1), 23–69.
International Electrotechnical Commission. (2019). Risk management — Risk assessment techniques (IEC 31010:2019). IEC.
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., & Tarantola, S. (2008). Global sensitivity analysis: The primer. Wiley.
Au, S.-K., & Beck, J. L. (2001). Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics, 16(4), 263–277.
Glasserman, P. (2003). Monte Carlo methods in financial engineering. Springer.
Lewandowski, R., & Lemay, J.-F. (2018). Monte Carlo simulation in aviation safety risk analysis. Aviation Psychology and Applied Human Factors, 8(1), 30–41.