The computational cost of industrial-scale models poses a challenge for sampling-based reliability analysis, because the failure modes of engineering systems typically occupy a small region of the performance space and therefore require relatively large sample sizes to estimate their characteristics accurately.
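To make the cost argument concrete, here is a minimal crude Monte Carlo sketch (illustrative only, not from the talk): the limit-state function `performance` and the threshold are hypothetical stand-ins for an expensive simulation, and the example shows why a failure probability of order 10⁻³ already demands on the order of 10⁵ model runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(x):
    # Hypothetical limit-state function: failure when performance < 0.
    # For a standard normal input, x > 3 occurs with probability ~0.135%.
    return 3.0 - x

n = 100_000                       # large sample size needed for rare events
samples = rng.standard_normal(n)
failures = performance(samples) < 0
p_f = failures.mean()             # crude Monte Carlo failure probability

# Coefficient of variation of the estimator: sqrt((1 - p_f) / (n * p_f)).
# For p_f ~ 1e-3, roughly 1e5 runs are needed to bring it below ~10%,
# which is infeasible when each run is an expensive industrial model.
cov = np.sqrt((1 - p_f) / (n * p_f))
print(f"estimated p_f = {p_f:.5f}, estimator CoV = {cov:.2f}")
```

The coefficient-of-variation formula makes the scaling explicit: halving the estimator's relative error requires four times as many model evaluations, which is exactly the cost the talk's two methods aim to avoid.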
This talk explores two methods for reducing the cost of reliability analysis whilst preserving the accuracy of the estimated quantities. The first approach, based on Markov chain Monte Carlo sampling, can be used when several thousand code evaluations are affordable. The second method, built on ideas from Gaussian process-based optimisation, lowers this requirement to between tens and hundreds of evaluations.
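As a flavour of the first family of methods, the sketch below implements a simplified subset-simulation-style MCMC estimator (my illustration under stated assumptions, not the talk's implementation): the limit-state function `g`, the level probability `p0`, and the chain settings are all hypothetical. Rather than waiting for rare failures in one huge sample, it estimates a small failure probability as a product of larger conditional probabilities, with Metropolis chains generating samples at each intermediate level.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    # Hypothetical limit-state function: failure when g(x) < 0,
    # i.e. x > 3.5 for a standard normal input (p_f ~ 2.3e-4).
    return 3.5 - x

n, p0 = 1000, 0.1          # samples per level, conditional level probability
x = rng.standard_normal(n) # level 0: crude Monte Carlo
y = g(x)
p_f = 1.0

for level in range(10):
    thresh = np.quantile(y, p0)      # intermediate failure threshold
    if thresh <= 0:
        # Final level: the true failure event is no longer rare here.
        p_f *= (y <= 0).mean()
        break
    p_f *= p0
    # Seeds for the next level: samples already past the threshold.
    seeds = x[y <= thresh]
    x = seeds[rng.integers(0, len(seeds), n)].copy()
    y = g(x)
    # Short Metropolis chains targeting the standard normal
    # restricted to the intermediate failure region g(x) <= thresh.
    for _ in range(10):
        prop = x + 0.5 * rng.standard_normal(n)
        ratio = np.exp(-(prop**2 - x**2) / 2)
        accept = (rng.random(n) < ratio) & (g(prop) <= thresh)
        x = np.where(accept, prop, x)
        y = np.where(accept, g(prop), y)

print(f"MCMC-based estimate of p_f: {p_f:.2e}")
```

Each level needs only enough samples to resolve a probability of about `p0`, so a probability of order 10⁻⁴ is reached with a few thousand evaluations of `g` instead of the millions crude Monte Carlo would need; in practice every call to `g` is an expensive model run, which is what sets the "several thousand evaluations" budget mentioned above.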
| Date | 25th August 2020 |
| Organisation | University of Liverpool |