
Reliability Analysis for Expensive Computer Models

The computational cost of industrial-scale models poses a problem for sampling-based reliability analysis: the failure modes of engineering systems typically occupy a small region of the performance space, so relatively large sample sizes are needed to estimate their characteristics accurately.
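To make the cost concrete, here is a minimal sketch of plain Monte Carlo estimation of a small failure probability. The limit-state function `g` is a hypothetical stand-in for an expensive model (failure is the event `g(x) <= 0`); the sample size and distribution are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Toy limit state: failure when the sum of two standard normals
    # exceeds 5 (true failure probability is roughly 2e-4).
    return 5.0 - (x[:, 0] + x[:, 1])

n = 1_000_000
x = rng.standard_normal((n, 2))
pf_hat = np.mean(g(x) <= 0.0)

# Coefficient of variation of the estimator, sqrt((1 - pf) / (n * pf)):
# even with a million samples it is still around 7% here, and each
# sample would be a full run of the expensive model.
cov = np.sqrt((1.0 - pf_hat) / (n * pf_hat))
```

The coefficient of variation scales like `1 / sqrt(n * pf)`, which is why rare failure modes drive the sample-size requirement that the talk addresses.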

This talk explores two methods for reducing the cost of reliability analysis whilst preserving the accuracy of the estimated quantities. The first approach, based on Markov chain Monte Carlo sampling, can be used when several thousand code evaluations are affordable. The second method, built on the ideas of Gaussian process-based optimisation, lowers this requirement to tens or hundreds of evaluations.
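The second idea can be illustrated with a minimal active-learning sketch in the spirit of Gaussian process-based reliability methods (e.g. AK-MCS-type schemes): a cheap surrogate is fitted to a handful of model runs, and each new evaluation is placed where the sign of the limit state is most uncertain. The limit state `g`, kernel, length-scale, and budget below are all illustrative assumptions, not the speaker's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # Hypothetical 1-D limit state standing in for an expensive model;
    # failure is the event g(x) <= 0, i.e. x >= 3.
    return 3.0 - x

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel with unit prior variance.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Monte Carlo candidate pool drawn from the input distribution.
pool = rng.standard_normal(10_000)

# Small initial design; each g evaluation is assumed to be expensive.
X = np.array([-2.0, 0.0, 2.0])
y = g(X)

for _ in range(5):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(pool, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    # U-function: small U means the sign of g is still uncertain there,
    # so that candidate is the most informative point to evaluate next.
    U = np.abs(mu) / sigma
    x_new = pool[np.argmin(U)]
    X = np.append(X, x_new)
    y = np.append(y, g(x_new))

# Surrogate-based failure-probability estimate over the whole pool,
# using only len(X) runs of the (notionally expensive) model.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
mu = rbf(pool, X) @ np.linalg.solve(K, y)
pf_hat = np.mean(mu <= 0.0)
```

The point of the design is that the surrogate, not the expensive model, classifies the ten thousand pool samples; the model itself is only run at the few points where that classification is in doubt.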

Document Details

Author: Hristov, P.
Date: 25th August 2020
Organisation: University of Liverpool

