Findings From 6 Years of Applied Research: A Foundation for AI Use Cases - Illustrated with Two Practical Examples

These slides were presented at the NAFEMS World Congress 2025, held in Salzburg, Austria, from May 19–22, 2025.

Abstract

In engineering, artificial intelligence methods can help break down data silos and process data efficiently in simulation-based design. In addition to implementing comprehensible and efficient data management, identifying and capturing the relationships between the individual engineering data elements provides usable context that sustainably enriches the data. This contextualization makes the data highly structured and therefore well suited for feeding into AI models to gain deeper insights, identify patterns and support informed decision-making.

This is the idea behind the Data Context Hub (DCH), which was developed over six years as a research project at Virtual Vehicle Research (Graz) together with automotive and rail OEMs. The platform brings together information from R&D and manufacturing data sources as well as from telemetry data streams and storage locations such as data lakes. From area-specific data models, the DCH then creates an explorable context map in the form of knowledge graphs. These are essential for streamlining processes, reducing risks and identifying new opportunities in the data-driven development of innovative products. State-of-the-art AI models additionally support developers in gaining deeper insights from the data, predicting trends and automating tasks.

In this joint presentation by Context64.ai and GNS Systems, the experts present findings from applied research on contextual graph databases. Using two practical examples from the automotive and manufacturing industries, they demonstrate how the Data Context Hub helps companies transform complex data into clear, actionable insights.

Use case 1 concerns buildability studies in the automotive industry, where simulation times have been reduced significantly. Because of the high complexity of the product configurations, the exponential number of possible combinations and the resulting slow development cycles, conventional tools and methods proved insufficient for the company. The Data Context Hub's AI-based platform served as a virtual buildability checker by providing a solution specifically for capturing, simplifying and querying complex configuration dependencies. The implemented solution presents the configuration rules to the user as nodes in a directed graph that maps the relationships and dependencies between different components. By consistently capturing the dependencies between assembly constraints, configuration rules and special manufacturing processes, far-reaching insights can be gained; in the production of innovative vehicles, for example, engineers can see how a selected battery pack constrains the choice of rear axle. This approach allows the buildable configurations to be determined faster and more precisely, thereby shortening development times.
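As an illustration of this directed-graph view of configuration rules, the minimal sketch below checks a selected configuration against "requires" and "excludes" dependencies. The component names, rule types and check logic are illustrative assumptions and do not represent the Data Context Hub implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ConfigGraph:
    # Edges of the directed rule graph (illustrative assumption):
    #   requires: selecting A demands that B is also selected
    #   excludes: A and B must not appear in the same configuration
    requires: dict[str, set[str]] = field(default_factory=dict)
    excludes: dict[str, set[str]] = field(default_factory=dict)

    def add_requires(self, a: str, b: str) -> None:
        self.requires.setdefault(a, set()).add(b)

    def add_excludes(self, a: str, b: str) -> None:
        self.excludes.setdefault(a, set()).add(b)
        self.excludes.setdefault(b, set()).add(a)

    def is_buildable(self, selection: set[str]) -> tuple[bool, list[str]]:
        """Check a selected configuration against all captured dependencies."""
        violations: list[str] = []
        for part in sorted(selection):
            for needed in self.requires.get(part, set()):
                if needed not in selection:
                    violations.append(f"{part} requires {needed}")
            for banned in self.excludes.get(part, set()) & selection:
                violations.append(f"{part} excludes {banned}")
        return (not violations, violations)


# Hypothetical rules: a large battery pack constrains the rear-axle choice.
graph = ConfigGraph()
graph.add_requires("battery_pack_XL", "rear_axle_heavy_duty")
graph.add_excludes("battery_pack_XL", "rear_axle_standard")

ok, violations = graph.is_buildable({"battery_pack_XL", "rear_axle_standard"})
print(ok)          # False
print(violations)  # lists the violated requires/excludes rules
```

Traversing such a graph replaces an exhaustive enumeration of the exponentially many combinations with targeted checks of the captured dependencies.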
The second use case shows how the platform transforms data-intensive AI applications by creating a structured, context-aware basis for information retrieval and decision-making. A Large Language Model (LLM) used as a chatbot provides precise, relevant answers to specific user queries about particular data contexts in technical use cases. Like the widely known ChatGPT, the chatbot responds to natural-language queries and independently generates content in the form of correlations between the data. For example, when asked for a summary of all simulations carried out with a specific material in the last month, the chatbot responds with an interpretation of the underlying knowledge graphs. To do this, the trained model uses the internal 'Memory for Your AI' (M4AI) module to search the linked data relationships and find relevant information based on semantic similarity. This works as follows: in the first phase, a defined data model searches a large dataset for relevant information; in the second phase, retrieval-augmented generation (RAG) is used to refine the initial search. The model identifies relevant information by semantic similarity rather than by exact matches, and the information obtained in this way flows into the subsequent generation step. This enables users to better understand the answers provided and to make decisions more quickly. The concept behind the Data Context Hub thus combines the ability to retrieve information from a large database with the generative capabilities of state-of-the-art AI technologies such as LLMs in a pioneering way in the field of simulation-driven design.
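The following sketch illustrates this two-phase retrieval in simplified form. The record layout, the hard-coded "last month" filter, the toy bag-of-words embedding and the prompt text are assumptions made for the example; they do not reflect the actual M4AI module or its interface.

```python
from math import sqrt


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words embedding; a real setup would use an embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def retrieve(query: str, records: list[dict], top_k: int = 2) -> list[dict]:
    # Phase 1: a defined data model narrows the large dataset
    # (here: simulation records from the requested period, hard-coded for the demo).
    candidates = [r for r in records if r["kind"] == "simulation" and r["month"] == "2025-04"]
    # Phase 2: rank the remaining records by semantic similarity, not exact matches.
    q = embed(query)
    return sorted(candidates, key=lambda r: cosine(q, embed(r["text"])), reverse=True)[:top_k]


def answer(query: str, records: list[dict]) -> str:
    # The retrieved context flows into the subsequent generation step
    # (the LLM call is represented by a plain prompt string here).
    context = "\n".join(r["text"] for r in retrieve(query, records))
    return f"Answer the question from this context only:\n{context}\n\nQ: {query}"


records = [
    {"kind": "simulation", "month": "2025-04", "text": "Crash simulation with aluminium alloy AA6061, rear frame."},
    {"kind": "simulation", "month": "2025-04", "text": "Thermal simulation of battery housing, steel variant."},
    {"kind": "report", "month": "2025-03", "text": "Quarterly material cost report."},
]
print(answer("Summarise all simulations run with aluminium last month", records))
```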

Document Details

Reference: NWC25-0007435-Pres
Author: Woll. C
Language: English
Audience: Analyst
Type: Presentation
Date: 19th May 2025
Organisation: GNS Systems
Region: Global
