
AI Model Validation for Simulation Data: Knowing When to Trust and When to Doubt


10 March 2026, Online (Webex)

9:00 am - 5:00 pm (CET, UTC+1, Berlin)

Course language: English

This course will give you the tools to trust or debunk an AI model for critical engineering decisions, whether you built the model yourself, inherited a pretrained network, or are evaluating a vendor's black-box solution. By the end of the day, you’ll have hands-on experience setting up validation workflows that reveal where models perform well and where they need more data or tuning.


Learning Objectives

  • Select and compute core validation metrics and understand their pros and cons in engineering contexts.
  • Design validation workflows—including train/test splits, k-fold, and hold-out schemes—that respect the peculiarities of simulation data (temporal, geometric, or batch correlations); a minimal sketch follows this list.
  • Quantify uncertainty by measuring prediction variance (via ensembles or Monte Carlo dropout) so you can attach confidence bands to every prediction and know when to fall back on a full solve.
  • Conduct hypothesis tests using p-values and effect sizes to compare two models (e.g. your in-house surrogate vs. a vendor’s black-box), and correctly interpret statistical significance without over-relying on arbitrary thresholds.
  • Detect and diagnose failure modes by injecting synthetic anomalies (e.g. mesh irregularities, parameter outliers) and seeing how errors correlate with inputs.
  • Translate validation insights into actionable decisions: whether to retrain or augment data, adjust hyperparameters, or employ more rigorous testing or documentation.
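
As a flavour of the workflows above, here is a minimal sketch of a group-aware k-fold validation that computes MAE, MSE, and R² per fold. It is an illustration for this page rather than course material (the course requires no programming), and the synthetic data, the scikit-learn model, and the grouping by simulation run are all assumptions made for the example.

```python
# Minimal sketch: group-aware k-fold validation for simulation data.
# Samples from the same simulation run stay in the same fold; otherwise
# correlated samples leak between train and test and scores look optimistic.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # inputs, e.g. design parameters
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)    # synthetic target
groups = np.repeat(np.arange(40), 5)             # 40 simulation runs, 5 samples each

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model = RandomForestRegressor(random_state=0).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append((mean_absolute_error(y[test_idx], pred),
                   mean_squared_error(y[test_idx], pred),
                   r2_score(y[test_idx], pred)))

mae, mse, r2 = np.mean(scores, axis=0)
print(f"MAE={mae:.3f}  MSE={mse:.3f}  R^2={r2:.3f}")
```

Grouping by run is one way to respect the temporal, geometric, or batch correlations mentioned above; a plain random split would mix samples from the same run across folds and overstate model quality.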

Hands-On Exercise
Participants will validate both self-built and pretrained/third-party models on curated datasets—measuring standard errors, uncertainty bands, and statistical significance—then present a concise summary of where each model can (and cannot) be trusted.
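
To make the uncertainty-band part of the exercise concrete, the sketch below trains a small ensemble of identically configured networks that differ only in their random initialisation (the "deep ensembles" idea from the course contents, here with scikit-learn MLPs on synthetic data). The ensemble size, data, and decision threshold are illustrative assumptions, not course values.

```python
# Minimal sketch: ensemble-based uncertainty bands for a regression surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 1))            # training range is [-2, 2]
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=300)

# Five models that differ only in their random initialisation.
ensemble = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=seed).fit(X, y) for seed in range(5)]

X_new = np.linspace(-3, 3, 7).reshape(-1, 1)     # includes extrapolation points
preds = np.stack([m.predict(X_new) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)

for x, m, s in zip(X_new[:, 0], mean, std):
    verdict = "OK" if s < 0.2 else "fall back on a full solve"  # assumed threshold
    print(f"x={x:+.1f}  prediction={m:+.3f} +/- {2 * s:.3f}  -> {verdict}")
```

Inside the training range the ensemble members agree; outside it they fan out, and that disagreement is the epistemic-uncertainty signal behind the "know when to fall back on a full solve" objective above.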


Course Contents

  • Introduction to probabilities and probabilistic systems
    - What are probabilistic systems? (definition; probabilistic vs. deterministic systems)
    - Why AI and machine learning are probabilistic
    - Overview of common ML methods
  • Types of uncertainty
    - Defining uncertainty
    - Aleatoric uncertainty
    - Epistemic uncertainty
  • Uncertainty Quantification (UQ) for ML
    - Definition and goals of UQ
    - Deep ensembles
    - Distribution prediction
    - Monte Carlo dropout
    - Prediction intervals
    - Quantile estimates
  • Model Validity
    - Definition and goals of model validation
    - Loss metrics: common error metrics (MAE, MSE, Accuracy, Precision, Recall)
    - Coefficient of determination
    - Residual analysis
    - p-values (see the comparison sketch after this section)
    - Calibration of uncertainty estimates
    - Independent validation data
  • Practical Examples
    - Measuring validity for self-trained models
    - Measuring validity for supplied models
    - How validation metrics are cheated
  • Practical Guides
    - Evaluating models from research papers
    - Evaluating pretrained and supplied models
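
As an illustration of the p-value topic flagged in the outline above, the sketch below compares two models' per-case absolute errors on the same held-out cases with a paired t-test and reports an effect size next to the p-value. The error data are synthetic, and the paired t-test is one common choice for matched samples, not necessarily the test used in the course.

```python
# Minimal sketch: paired comparison of two models on the same validation cases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 80
err_a = np.abs(rng.normal(0.10, 0.05, n))   # in-house surrogate errors (synthetic)
err_b = np.abs(rng.normal(0.12, 0.05, n))   # supplied black-box errors (synthetic)

t, p = stats.ttest_rel(err_a, err_b)        # paired t-test on matched cases
diff = err_a - err_b
cohens_d = diff.mean() / diff.std(ddof=1)   # paired-samples effect size

print(f"mean |error|: A={err_a.mean():.3f}, B={err_b.mean():.3f}")
print(f"paired t-test: t={t:.2f}, p={p:.4f}, Cohen's d={cohens_d:.2f}")
# A small p-value alone does not make the difference practically relevant:
# judge the effect size against the engineering tolerance, not p < 0.05 alone.
```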

Details

Event Type: Training Course
Member Price: £606.48 | $817.89 | €700.00
Non-member Price: £857.74 | $1156.72 | €990.00

Dates

Start Date | End Date | Location
10 Mar 2026 | 10 Mar 2026 | Online

Trainer

Max Kassera (yasAI)
Max Kassera studied mechanical engineering with a minor in economics at the University of Kaiserslautern-Landau, where he first applied machine learning and artificial intelligence to turbocharger design in 2017. After graduating, he was awarded two German government grants to develop AI software for mechanical engineering, which led to the incorporation of yasAI in 2022. Through yasAI, Max trains engineers in applying AI to simulation projects, with a focus on fluid mechanics.


Requirements

  • Basic understanding of engineering simulation data
  • Basic knowledge of neural networks is beneficial for evaluating self-trained models but not strictly required
  • Knowledge of statistics is not required; the course provides a gentle (re-)introduction
  • Programming experience is not required


Duration & Format

  • One-day live online workshop (8 hours, including breaks)
  • Interactive lecture, hands-on validation exercises, and group discussion

Organisation

Duration
9:00 am - 5:00 pm (CET, UTC+1, Berlin)
The login phase opens 30 minutes before the course starts.

Language
English

Login and course material
We will send you login details and course material a few days before the course starts.
Recordings
The course will not be recorded.

Course Fee
Non-NAFEMS members: 990 Euro / person*
NAFEMS members: 700 Euro / person*
Included in the fees are digital course notes and a certificate.
* plus VAT if applicable.
Please note: unpaid registrations will be cancelled one week prior to the event start date unless prior contact has been made with our Accounts Department or the course organiser; otherwise, our cancellation policy will be enforced. The cancellation policy can be viewed on the event page on our website.

NAFEMS membership fees (company)
A standard NAFEMS site membership costs 1,365 Euro per year; academic site and entry memberships each cost 855 Euro per year.

Cancellation Policy

Course cancellation by NAFEMS
NAFEMS reserves the right to cancel the course one week in advance if there are not enough participants. The course may also be cancelled in case of illness of the speaker or force majeure. In these cases, the course fees will be refunded.

Organisation / Contact
NAFEMS
e-mail: roger.oswald@nafems.org

Accreditation Policy

The course is approved by and under the oversight of the NAFEMS Education and Training Working Group (ETWG).