This conference paper was submitted for presentation at the NAFEMS World Congress 2025, held in Salzburg, Austria from May 19–22, 2025.
Abstract
The rise of Large Language Models (LLMs) has opened transformative opportunities across industries and engineering simulation processes. This session delves into the innovative use of LLMs in post-processing simulation software output logs, addressing a critical challenge for end users and enterprise organisations alike: extracting actionable insights efficiently. Enterprises today generate terabytes of simulation data daily, and there is an increasing need for automation in retrieving simulation results, classifying errors (e.g., mesh inaccuracies, memory limitations, licensing issues), and preparing AI-ready datasets from trusted simulation outputs. LLMs offer a powerful solution, acting as a simulation co-pilot to automate these tasks with precision. This enables enterprises to troubleshoot workflows more efficiently while reducing manual effort.

This paper explores the end-to-end process of deploying local LLMs in simulation workflows while maintaining data privacy and data isolation. It covers key steps, including benchmarking LLM performance, applying continuous integration and deployment (CI/CD) pipelines, and validating customer-specific post-processing use cases. By leveraging these strategies, enterprises can establish a seamless data pipeline that automates error classification and generates insights.

Real-world use cases will illustrate the application of LLM-powered tools on industry-standard HPC simulation software such as (but not limited to) Siemens Star-CCM+, Dassault Systèmes Abaqus, and Ansys Fluent. A live demonstration will further highlight the practical benefits, showcasing how organisations can achieve effective and rapid troubleshooting, reduce reliance on support teams, and increase user autonomy. These improvements not only enhance operational efficiency but also contribute to cost-effective workflows through scalable and structured data strategies.
Beyond the technical implementation, the paper will discuss the broader implications of integrating LLMs into simulation workflows. Enterprises can position themselves for the future by enabling more intelligent data practices while driving innovation and collaboration across multidisciplinary teams. This session is designed for engineering leaders, IT professionals, and simulation experts looking to enhance their HPC operations through cutting-edge technologies. Attendees will leave with actionable insights on harnessing LLMs to transform simulation data management, reduce inefficiencies, and unlock the full potential of AI in engineering.
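To make the error-classification idea concrete, the following is a minimal sketch of the kind of log triage the abstract describes: a cheap rule-based first pass over a solver output log that tags lines matching the error categories named above (mesh, memory, licensing), with unmatched or ambiguous lines left as candidates for a local LLM to classify. The function name, the pattern set, and the sample log lines are all hypothetical illustrations, not the authors' implementation.

```python
import re

# Illustrative patterns for the error taxonomy mentioned in the abstract:
# mesh inaccuracies, memory limitations, licensing issues.
# Real deployments would tune these per solver (Star-CCM+, Abaqus, Fluent).
ERROR_PATTERNS = {
    "mesh": re.compile(r"negative volume|invalid cell|mesh quality", re.I),
    "memory": re.compile(r"out of memory|bad_alloc|insufficient memory", re.I),
    "licensing": re.compile(r"license|flexlm|feature not available", re.I),
}

def triage_log(lines):
    """Rule-based first pass over log lines.

    Returns a list of findings; lines that match no pattern could be
    forwarded to a local LLM for classification (not shown here).
    """
    findings = []
    for n, line in enumerate(lines, start=1):
        for label, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                findings.append(
                    {"line": n, "label": label, "text": line.strip()}
                )
    return findings

# Hypothetical sample log excerpt.
sample_log = [
    "Iteration 1200: residuals converging",
    "ERROR: FlexLM license server not responding",
    "FATAL: out of memory while allocating solver matrix",
]
print(triage_log(sample_log))
```

A hybrid design like this keeps the deterministic, auditable cases out of the LLM entirely, reserving model inference for genuinely ambiguous log content, which also helps with the data-privacy and cost considerations the paper raises.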
| Reference | NWC25-0007169-Paper |
|---|---|
| Author | Imrie, J. |
| Language | English |
| Audience | Analyst |
| Type | Paper |
| Date | 19th May 2025 |
| Organisation | Rescale |
| Region | Global |
© NAFEMS Ltd 2025