The validation domain is the collection of discrete input points where simulation outputs have been compared with experiments, defining a bounded region in a high-dimensional parameter space.
Inside that region numerical errors are usually modest, yet the accuracy of the underlying physics assumptions can still vary. As inputs drift outward, the dominant unknown quickly becomes model-form uncertainty: the gap that opens when closures or simplifications (e.g., a turbulence or plasticity model) cease to hold.
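The text does not prescribe a way to measure how far a new input sits from the validated region, but one simple indicator is a Mahalanobis distance from the cloud of validation points. The sketch below is a minimal illustration of that idea only; the function and variable names are invented for the example, not drawn from any cited framework.

```python
# A minimal sketch (illustrative names throughout): flag possible
# extrapolation by scoring how far a query point lies from the cloud
# of validation inputs, in Mahalanobis distance.
import numpy as np

def mahalanobis_from_validation(query, validation_inputs):
    """Distance of `query` from the validation-point cloud, expressed in
    standard deviations along the cloud's principal directions."""
    mu = validation_inputs.mean(axis=0)
    cov = np.cov(validation_inputs, rowvar=False)
    # Regularize in case the validation points are few or collinear.
    cov += 1e-9 * np.eye(cov.shape[0])
    diff = query - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Toy usage: 30 validation points in a 3-D input space.
rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))
near = mahalanobis_from_validation(np.zeros(3), pts)
far = mahalanobis_from_validation(np.array([5.0, 5.0, 5.0]), pts)
print(f"near centre: {near:.2f}, far outside: {far:.2f}")
```

A large value does not quantify model-form error by itself; it only signals that the prediction leans on physics assumptions the validation data never exercised.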
ASME-style V&V frameworks therefore recommend a validation hierarchy: component-, subsystem- and system-level tests (or their high-fidelity digital twins) supply additional data that anchor credibility when full-scale experiments at extreme conditions are impossible.
When predictions must move well beyond any validated point, practitioners blend low- and high-fidelity models or surrogate “virtual experiments” to extend coverage and expose sensitivity hot-spots without prohibitive cost.
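One common pattern behind such blending is an additive discrepancy correction: trust the cheap low-fidelity model everywhere, and fit the gap between it and a handful of expensive high-fidelity runs. The sketch below is a toy instance under that assumption; both "models" and all names are synthetic stand-ins, not any real solver or the article's method.

```python
# Hedged sketch of additive multi-fidelity correction: a biased but
# cheap model plus a discrepancy term fitted to a few trusted runs.
import numpy as np

def lo_fi(x):   # plentiful, biased low-fidelity model (assumed)
    return np.sin(x)

def hi_fi(x):   # scarce, trusted high-fidelity model (assumed)
    return np.sin(x) + 0.3 * x

x_hi = np.array([0.0, 1.5, 3.0])                       # few expensive runs
delta = np.polyfit(x_hi, hi_fi(x_hi) - lo_fi(x_hi), deg=1)

def blended(x):
    """Low-fidelity prediction plus the fitted discrepancy."""
    return lo_fi(x) + np.polyval(delta, x)

x_new = 2.0
print(blended(x_new), hi_fi(x_new))   # the correction closes the gap
```

In practice the discrepancy is usually fitted with a Gaussian-process or similar surrogate rather than a polynomial, which also yields an uncertainty band that widens away from the high-fidelity points.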
Because no single metric yet says how far is too far, programs increasingly use expert-elicitation tools such as Sandia’s Predictive Capability Maturity Model (PCMM) to record the distance from validation data, the treatment of each uncertainty source, and the residual risk managers must carry.
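PCMM itself is defined in Sandia's published tables, which score a fixed set of credibility elements at maturity levels 0 to 3; the sketch below only illustrates the bookkeeping such an assessment implies. The element strings echo PCMM's published attributes, but the class and field names are illustrative assumptions.

```python
# Sketch of PCMM-style bookkeeping: one maturity score (0-3, as in
# the published PCMM tables) plus a recorded rationale per element.
from dataclasses import dataclass, field

@dataclass
class PCMMEntry:
    element: str      # which credibility attribute is being scored
    score: int        # maturity level, 0 (low) to 3 (high)
    rationale: str    # evidence recorded for reviewers and managers

@dataclass
class PCMMAssessment:
    entries: list = field(default_factory=list)

    def add(self, element, score, rationale):
        assert 0 <= score <= 3, "PCMM maturity levels run 0-3"
        self.entries.append(PCMMEntry(element, score, rationale))

    def summary(self):
        return {e.element: e.score for e in self.entries}

a = PCMMAssessment()
a.add("Model validation", 1, "Component tests only; no system-level data")
a.add("Uncertainty quantification", 2, "Input UQ done; model-form gap open")
print(a.summary())
```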