
Safety of AI Systems in Modeling and Simulation

These slides were presented at the NAFEMS World Congress 2025, held in Salzburg, Austria from May 19–22, 2025.

Abstract

The integration of artificial intelligence (AI) into modeling and simulation systems has significantly expanded their capabilities, enabling improved accuracy, adaptability, and efficiency. These systems are increasingly applied in high-stakes domains, including aerospace, healthcare, and industrial processes, where failure can have severe consequences. While AI-powered modeling and simulation systems offer remarkable opportunities, they also introduce unique safety risks, such as model instability, data biases, and unpredictable behaviors. Addressing these challenges is critical to ensuring the reliability and acceptance of these technologies in safety-critical applications.

This paper specifies safety requirements and provides guidelines for AI-based modeling and simulation systems, focusing on key safety principles: robustness, reliability, quality management, transparency, explainability, data privacy, data management, and lifecycle management. These principles form a comprehensive framework for mitigating risks and fostering trust in AI systems.

Robustness and reliability are foundational to AI safety, ensuring that systems function consistently under both expected and unexpected conditions, producing accurate and dependable results over time. Quality management underpins these principles, emphasizing structured development processes and rigorous testing to minimize systematic errors and ensure adherence to functional requirements. Transparency and explainability address the need to understand how AI systems make decisions and why specific outputs are produced. These attributes are pivotal for building trust among stakeholders, enabling designers, developers, regulators, and end-users to scrutinize and confidently engage with AI systems. Data privacy ensures the responsible collection, storage, use, and sharing of personal information, aligning with regulatory requirements and safeguarding individual and organizational data. Effective data management ensures the secure handling of input and output data while fostering compliance with ethical and regulatory standards. Lastly, lifecycle management maintains the safety, reliability, and compliance of AI models throughout their operational lifespan, adapting to technological, regulatory, and user needs.

By integrating these principles, this framework provides a pathway for developing AI-based modeling and simulation systems that are not only innovative but also safe, reliable, and trustworthy. This paper seeks to engage the modeling and simulation community in adopting structured approaches to AI safety, bridging the gap between technological advancements and safety-critical applications.

Document Details

Reference: NWC25-0006974-Pres
Author: Young, L.
Language: English
Audience: Analyst
Type: Presentation
Date: 19th May 2025
Organisation: UL Solutions
Region: Global
