Back to Basics for CAE: Demystifying Input Files for and with Generative AI

These slides were presented at the NAFEMS World Congress 2025, held in Salzburg, Austria from May 19–22, 2025.

Abstract

Finite element analysis (FEA) has been a pillar of computer-aided engineering (CAE) since the 1960s. The longevity of this simulation technology presents both opportunities and unique challenges for leveraging generative AI, for new and experienced users alike. While LLM-based assistants for such users are proliferating, many operate at a superficial level, drawing only on training documents and manuals; they are not sufficient for a user intent on understanding, debugging, and quickly resolving issues at the input-file level, the gateway to the FEA solver. In this context, there are three unique challenges to leveraging generative AI. First, the inputs to FEA solvers are text-based, with syntaxes, definitions, and descriptions exposed through keywords that follow a non-intuitive, unique taxonomy and can be documented in manuals thousands of pages long. Second, these input-file formats never evolved beyond their original punch-card implementation, which allows input to be unsorted but creates the burden of ID management; this approach is in direct conflict with modern input decks, which are geared towards HPC and can easily reach gigabytes in size. The ordering of such a file may reflect a specific pre-processor, company best practices, or simply the historical build-up of a model. Third, this unsorted structure directly contradicts the natural flow of human language. In this work, we present an approach that addresses the problem by establishing a graph representation of the input file, creating edges based on cross-referencing ID schemes, and traversing this graph. This approach enables input-file-specific document retrieval and gives users of every skill level the ability to obtain prompts and responses at both a higher level (pure documentation) and a deeper level (the input file itself).
In this proof-of-concept work, we combine a parser, a graph, a graph-traversal method, and graph-based documentation retrieval to deliver an in-context generative AI experience tied to the exact features of the input file a user is interrogating. Ongoing work explores optimizing the developed algorithms to reduce token consumption and help users leverage their compute resources more cost-effectively. In ongoing trials, we expect reduced onboarding times, reduced debugging times, and fewer touches of traditional documentation.
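The graph construction described above can be illustrated with a minimal sketch. The deck format, keyword names, and ID scheme below are hypothetical simplifications (real solver decks use far richer card syntax, and IDs are typically scoped per entity type); the sketch only shows the core idea of turning cross-referenced IDs into edges and traversing them to collect the cards relevant to a query:

```python
from collections import defaultdict, deque

# Toy, hypothetical input deck: each card is "KEYWORD, own_id, referenced_ids..."
# An element references two nodes and a property; the property references a material.
DECK = """\
NODE, 1
NODE, 2
ELEMENT, 10, 1, 2, 100
PROPERTY, 100, 200
MATERIAL, 200
"""

def parse(deck):
    """Return {(keyword, id): [referenced ids]} from the toy deck."""
    cards = {}
    for line in deck.splitlines():
        fields = [f.strip() for f in line.split(",")]
        keyword, ident = fields[0], int(fields[1])
        cards[(keyword, ident)] = [int(f) for f in fields[2:]]
    return cards

def build_graph(cards):
    """Create an edge from each card to every card whose ID it references."""
    by_id = defaultdict(list)
    for (keyword, ident) in cards:
        by_id[ident].append((keyword, ident))
    graph = defaultdict(list)
    for card, refs in cards.items():
        for ref in refs:
            graph[card].extend(by_id[ref])
    return graph

def reachable(graph, start):
    """Breadth-first traversal: every card the starting card depends on."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

cards = parse(DECK)
graph = build_graph(cards)
# Cards relevant to interrogating element 10 (and thus whose documentation
# sections a retrieval step would fetch):
relevant = reachable(graph, ("ELEMENT", 10))
```

In this sketch, the set returned by `reachable` would drive the retrieval step: only the manual sections for the keywords actually reachable from the card under interrogation need to be placed in the LLM's context.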

Document Details

Reference: NWC25-0007493-Pres
Authors: Sett, S.; Kendall, C.
Language: English
Audience: Analyst
Type: Presentation
Date: 19th May 2025
Organisations: Hexagon; Northwestern University
Region: Global
