
Applications of ML and ROM for Robust Optimization

This conference paper was submitted for presentation at the NAFEMS World Congress 2025, held in Salzburg, Austria from May 19–22, 2025.

Abstract

Traditional optimization, reliability analysis (UQ) and Robust Optimization (RO) using finite elements and similar discretization methods are costly due to their iterative nature. This encourages adaptation strategies that oversimplify the design models, which may result in catastrophic failures or a lack of quality. The problem stems from the fact that robustness (which may be considered the inverse of the fragility of a given system) cannot be correctly evaluated if small variations and their consequences are ignored. However, traditional RSM (response surface methods, or more generally surrogate methods) are not capable of capturing small changes and hypersensitivities of the system response. Various ROM (Reduced Order Model) solutions have been reported that create fast and sufficiently precise surrogate models, allowing for the real-time evaluations required by multiple (looped, in the case of RO) iterations. This partially solves the problem of numerous and costly simulations without necessarily degrading the precision and accuracy of the individual model evaluations. A second problem persists: currently available reliability methods are based on simple statistical analysis, mainly the first two statistical moments (mean and standard deviation) of the analysis runs. This proves inefficient both in performance, due to excessive averaging of the problem, and in terms of the number of computation runs. At every iteration of the optimization search for the optimal point, new statistics must be established around the proposed point, requiring a new DOE (design of experiments) in its vicinity. This has the effect of multiplying every single function evaluation during the optimization process by a factor of N, where N is the size of the DOE. We shall remedy this drawback by introducing the concepts of Complexity and Entropy.
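The N-fold cost multiplication described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the response function, the sample size `n_doe`, and the mean-plus-two-sigma robust objective are assumptions chosen only to make the cost structure concrete.

```python
import numpy as np

def response(x, xi):
    # Hypothetical system response: design variable x perturbed by noise xi.
    # A real application would call a costly FE (or ROM surrogate) solver here.
    return (x - 2.0) ** 2 + 0.5 * np.sin(5.0 * (x + xi))

def robust_objective(x, n_doe=16, sigma=0.05, rng=None):
    # Moment-based robust objective: mean + 2 * std over a local DOE of
    # n_doe perturbed evaluations around the candidate point x.
    rng = rng or np.random.default_rng(0)
    xi = rng.normal(0.0, sigma, n_doe)
    y = response(x, xi)
    return y.mean() + 2.0 * y.std()

# Every candidate point costs n_doe solver runs, so a search over M
# candidates costs M * n_doe function evaluations in total (the factor N).
candidates = np.linspace(0.0, 4.0, 41)
best = min(candidates, key=robust_objective)
```

With 41 candidates and a 16-point local DOE, this toy search already performs 656 response evaluations, which is why each of those evaluations must be cheap (hence the ROM surrogate) or the per-point DOE must be avoided altogether.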
In this paper we investigate a novel machine learning methodology and associated solutions based on the concept of the entropy of information.
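As a rough illustration of how the entropy of information can quantify the dispersion, and hence fragility, of a system response, the sketch below computes the Shannon entropy of a histogram of response samples. The binning scheme and the two synthetic response sets are illustrative assumptions, not the method proposed in the paper.

```python
import numpy as np

def shannon_entropy(samples, edges):
    # Shannon entropy (in nats) of a histogram of response samples over
    # fixed bin edges. A narrow, peaked distribution (low entropy) suggests
    # a robust design; a widely spread one (high entropy) a fragile design.
    counts, _ = np.histogram(samples, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(1)
edges = np.linspace(-1.0, 3.0, 41)           # shared bins for comparability
robust_run = rng.normal(1.0, 0.01, 1000)     # tightly clustered responses
fragile_run = rng.normal(1.0, 0.50, 1000)    # widely scattered responses

H_robust = shannon_entropy(robust_run, edges)
H_fragile = shannon_entropy(fragile_run, edges)
```

Here `H_fragile` exceeds `H_robust`, so a single batch of response samples yields a scalar fragility indicator without re-estimating moments around every candidate point.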

Document Details

Reference: NWC25-0007491-Paper
Author: Kayvantash, K.
Language: English
Audience: Analyst
Type: Paper
Date: 19th May 2025
Organisation: Hexagon
Region: Global
