This conference paper was submitted for presentation at the NAFEMS World Congress 2025, held in Salzburg, Austria, May 19–22, 2025.
Abstract
Optimisations in the product development process are computationally and time-intensive and do not always exploit the full potential of a design. To improve and methodically analyse this process, automatable CAx process chains are linked with AI agents in order to improve the optimisation result and to accelerate development by reducing simulation time. For the training of the AI agents, a CAx process chain is used as the environment; it includes a parametric-associative coupling of CAD and CAE in order to realise automated and update-stable iterations. The current object of investigation is the use of Deep Reinforcement Learning (DRL). In particular, the following DRL approaches were used in the research project and examined for their usability in terms of time expenditure and optimisation quality:

- DQN (Deep Q-Network) + Rainbow
- PPO (Proximal Policy Optimisation)
- DDPG (Deep Deterministic Policy Gradient)
- TD3 (Twin Delayed DDPG)
- SAC (Soft Actor-Critic)

However, training AI agents for the optimisation process poses a number of challenges, which will be discussed in the presentation. AI-based optimisations are in direct competition with traditional optimisations and must justify the greater effort required for the training processes. A decisive hurdle is data generation for the training process: it is carried out by the numerical simulation within the CAx process chain and is therefore very time-consuming. For this reason, various options, such as design of experiments, simplification of the simulation models, the use of surrogate models and optimisation of the workflow, were investigated to reduce simulation times and thus make the training processes more efficient.

As is well known, the definition of the reward function has a major influence on the convergence of a training process. In this case, the particular challenge lies in the fact that the trained agents are to be applied to different components. With this in mind, the reward functions must be designed using generic parameters. This offers the advantage that trained AI agents can be used as a tool for similar problems (transfer learning). The findings on these challenges will be discussed in the presentation. The AI-based optimisations will be presented using several structural-mechanical examples from automotive engineering.
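To illustrate the kind of setup described above, the following is a minimal sketch (not taken from the paper) of how a CAx process chain could be wrapped as a Gymnasium environment with a generic mass-versus-stress reward, so that off-the-shelf DRL implementations such as SAC from Stable-Baselines3 can be trained against it. The class, method and parameter names (`CaxOptimisationEnv`, `_simulate`, `stress_limit`) and the dummy responses are illustrative assumptions; in the actual workflow the simulation call would drive the parametric CAD/CAE chain or a surrogate model.

```python
# Minimal sketch (assumptions, not the authors' implementation): a parametric
# CAD/CAE process chain wrapped as a Gymnasium environment with a generic
# reward that trades mass reduction against a stress constraint.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CaxOptimisationEnv(gym.Env):
    """One episode corresponds to one optimisation run over the design variables."""

    def __init__(self, n_params=6, stress_limit=250.0, max_steps=20):
        super().__init__()
        self.n_params = n_params
        self.stress_limit = stress_limit   # assumed admissible stress in MPa
        self.max_steps = max_steps
        # Actions: normalised increments of the parametric CAD design variables.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_params,), dtype=np.float32)
        # Observations: current parameters plus normalised mass and max. stress.
        self.observation_space = spaces.Box(-np.inf, np.inf,
                                            shape=(n_params + 2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.params = np.zeros(self.n_params, dtype=np.float32)
        self.steps = 0
        mass, stress = self._simulate(self.params)
        return self._obs(mass, stress), {}

    def step(self, action):
        self.params = np.clip(self.params + 0.1 * action, -1.0, 1.0)
        self.steps += 1
        mass, stress = self._simulate(self.params)
        # Generic reward: reward mass reduction, penalise stress-constraint violation.
        reward = -mass - 10.0 * max(0.0, stress / self.stress_limit - 1.0)
        truncated = self.steps >= self.max_steps
        return self._obs(mass, stress), float(reward), False, truncated, {}

    def _simulate(self, params):
        # Placeholder for the CAx process chain: CAD update followed by an FE solve.
        # In the real workflow this would call the parametric CAD model and the
        # FE solver (or a surrogate model during training); here it returns
        # dummy responses so that the sketch is runnable.
        mass = 1.0 + 0.1 * float(np.sum(params))
        stress = 200.0 - 20.0 * float(np.mean(params))
        return mass, stress

    def _obs(self, mass, stress):
        return np.concatenate(
            [self.params, [mass, stress / self.stress_limit]]).astype(np.float32)


# Example training call with Stable-Baselines3, one of several libraries that
# implement SAC; a surrogate-based _simulate() keeps training times tractable.
from stable_baselines3 import SAC

model = SAC("MlpPolicy", CaxOptimisationEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```

Because the reward uses only generic quantities (mass and a normalised stress ratio) rather than component-specific values, the same environment interface could in principle be reused for different components, which is the prerequisite for the transfer-learning use of trained agents mentioned in the abstract.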
| Reference | NWC25-0007077-Paper |
|---|---|
| Authors | Libin, M.; Strube, M.; Mathew, M.; Schneider, F.; Mueller, M. |
| Language | English |
| Audience | Analyst |
| Type | Paper |
| Date | 19th May 2025 |
| Organisation | Ostfalia University of Applied Sciences |
| Region | Global |