5. Model Predictive Capability

“Model predictive capability” is the quantified confidence that a verified, calibrated and validated simulation can reliably forecast system behaviour under conditions for which no test data yet exist; in other words, it is a forward-looking measure that extends beyond traditional V&V.

Its credibility hinges on rigorously characterising predictive uncertainty, which combines numerical-solution error, input-data variability and, most critically, model-form uncertainty (e.g., turbulence or plasticity closures) that can dominate the error budget.
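The combination of these contributions is often expressed as a root-sum-square of independent components, in the spirit of ASME V&V 20. A minimal sketch, assuming the three components named above are independent and expressed in the same units (the function and argument names are illustrative, not from any standard):

```python
import math

def total_uncertainty(u_num, u_input, u_model):
    """Root-sum-square combination of independent uncertainty
    components (illustrative only; the component breakdown and
    independence assumption are simplifications)."""
    return math.sqrt(u_num**2 + u_input**2 + u_model**2)

# Hypothetical percentages: numerical 1.2%, input 2.0%, model-form 3.5%
u_total = total_uncertainty(1.2, 2.0, 3.5)
```

Note how a dominant model-form term drives the total: here the 3.5% contribution alone accounts for most of the combined interval.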

Established frameworks such as ASME V&V 20 and NASA-STD-7009B require analysts to quantify each of those contributions (often via mesh/time-step refinement, stochastic propagation of parameter ranges, and sensitivity studies) and to report a total uncertainty interval around every quantity of interest.
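Stochastic propagation of parameter ranges can be sketched as a simple Monte Carlo loop: sample the inputs, run the model, and read an interval off the output distribution. Everything below is a hedged illustration, not a prescribed method: uniform sampling, the 2.5/97.5 percentile interval, and the toy drag-like model are all assumptions.

```python
import random

def propagate(model, param_ranges, n_samples=10_000, seed=0):
    """Monte Carlo propagation of uniform parameter ranges through a
    model, returning an approximate 95% interval on the quantity of
    interest (empirical percentiles of the sampled outputs)."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in param_ranges.items()}
        outputs.append(model(**params))
    outputs.sort()
    return outputs[int(0.025 * n_samples)], outputs[int(0.975 * n_samples)]

# Hypothetical quantity of interest: a drag-like term 0.5 * rho * v**2
interval = propagate(lambda rho, v: 0.5 * rho * v**2,
                     {"rho": (1.1, 1.3), "v": (9.0, 11.0)})
```

In a real study the uniform ranges would be replaced by measured input distributions, and the same sampling machinery reused for the sensitivity studies the standards call for.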

To gauge how completely a project has met these expectations, organisations increasingly use Sandia’s Predictive Capability Maturity Model (PCMM), which scores activities in code verification, solution verification, validation, uncertainty quantification and documentation before any prediction claims are made.
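A PCMM assessment is essentially a scorecard over those activities. A minimal sketch, using the five elements listed above and assuming a 0–3 maturity scale with a per-element gate (PCMM itself reports element scores individually; the gating rule here is an illustrative choice):

```python
# Element names follow the activities listed in the text above.
PCMM_ELEMENTS = (
    "code verification",
    "solution verification",
    "validation",
    "uncertainty quantification",
    "documentation",
)

def pcmm_gaps(scores: dict, required_level: int = 2) -> list:
    """Return the elements scoring below the required maturity level,
    i.e. the activities that block a prediction claim under this
    (illustrative) gating rule. Missing elements count as 0."""
    return [e for e in PCMM_ELEMENTS if scores.get(e, 0) < required_level]

gaps = pcmm_gaps({
    "code verification": 3,
    "solution verification": 2,
    "validation": 1,  # below the gate
    "uncertainty quantification": 2,
    "documentation": 2,
})
# gaps -> ["validation"]
```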

Communicating predictive capability therefore means presenting those quantified uncertainties (plus evidence of sensitivity to modelling assumptions) in management reports so decision-makers can judge whether the residual risk is acceptable or whether further testing or model refinement is needed.

In safety-critical or regulatory contexts, a tightly bounded predictive-uncertainty statement can justify extending operating envelopes or reducing physical prototypes, whereas large or poorly understood uncertainties signal that additional experiments must precede high-stakes decisions.
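The decision logic described above can be made concrete as a worst-case check: the prediction plus its total uncertainty must stay inside the allowable limit. This is a sketch under stated assumptions; the one-sided rule, the margin factor, and all names are illustrative and not drawn from any standard.

```python
def envelope_decision(prediction, u_total, limit, margin_factor=1.0):
    """Go/no-go sketch: accept only if the prediction plus its
    (optionally inflated) total uncertainty stays below the limit.
    Assumes a one-sided upper limit on the quantity of interest."""
    worst_case = prediction + margin_factor * u_total
    if worst_case <= limit:
        return "acceptable: residual risk bounded"
    return "not acceptable: further testing or model refinement needed"

# Tightly bounded uncertainty clears the limit...
envelope_decision(100.0, 5.0, 110.0)
# ...while a poorly bounded one triggers more experiments.
envelope_decision(100.0, 15.0, 110.0)
```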