
Rimac Technology is at the forefront of designing and building advanced electric vehicle components for global OEMs, including battery systems, e-axles, control electronics, and software. In this talk, Ivan Krajinović will delve into how an integrated modelling approach, spanning initial requirements through comprehensive feedback loops from testing, with models at the centre, drives the development of next-generation battery packs at Rimac Technology.
The presentation will highlight the team's position as a vertically integrated, cross-functional department of Rimac Technology, encompassing both modelling and testing capabilities under one roof. Additionally, it will explore the close integration with other development aspects, such as Verification and Validation (V&V) and the manufacturing of high-volume products. While the primary focus will be on next-generation battery packs, the discussion will also touch upon the development of powertrain components.

AI has been a constant presence in the headlines for several years now, with daily articles detailing how many people will lose their jobs and which professions are at risk from AI development. What exactly are AI methods, what opportunities do they offer, and how can they support the work of engineers and scientists?
This lecture will attempt to answer these questions, moving from the theoretical perspective to practical guidance. AI methods developed over the past 70 years are already present in many tools used daily, but the widespread availability of LLM-based models (ChatGPT, DeepSeek, LLAMA, Gemini) has both fascinated and frightened the public. The limitations of AI methods, together with ideas for their application in engineering practice, will be presented during the lecture. Scientists and engineers working in collaboration with AI methods open up new areas that simplify design and computational processes. It might seem that the use of AI methods would reduce the demand for engineers; in fact, it allows companies to create better products. It does, however, require engineers to familiarize themselves with new methods, with ways of using and verifying AI-based results, and even with the employment of AI specialists.
In the presentation we introduce tools related to global optimization methods, machine learning, neural networks, large language models, and data-driven modeling. We then discuss recent papers on the application of AI in the design and monitoring of structures and materials. AI will enable a shift in tools similar to the replacement of the slide rule (which every engineer used in the 1970s) by the calculator. The most important aspect of this shift is the fundamental engineering knowledge that allows the correctness of results to be assessed regardless of the tool used. The ability to spot AI hallucinations and errors is an engineer's most valuable skill today.

The aim of this presentation is to discuss the link between Non-Destructive Testing (NDT) techniques and the fatigue limit of metallic materials. Each NDT method has its own limitations. Ideally, we would like to know the size, location, type, and morphology of the defect behind each indication. The idea here is to understand how uncertain information from NDT can be enriched with modelling based on a large experimental database.

In today's fast-paced product development landscape, simulations have become an indispensable tool for design development. With numerous simulations being conducted daily, organizations face significant challenges in managing and deploying them. A crucial question arises: how are these results used to inform product design and development decisions? Often, businesses ask: do we really benefit from simulations, and what level of simulation capacity or capability do we need to succeed in the commercial market with our product? How can we measure simulation efficiency, and which key performance indicators (KPIs) should be implemented to evaluate simulation effectiveness?
As simulations become increasingly integral to daily product development and validation, quality takes center stage. The trend toward reducing prototype testing and relying on simulation-based validation necessitates a system that maintains quality, ensures repeatability, and makes results trustworthy, transparent, and easy to analyze.
Furthermore, the incorporation of Industry 4.0 components, such as big data, Digital Twin, and the Internet of Things (IoT), can enable the creation of a data-driven engineering framework. To achieve this, simulation management must adhere to the core principle of Industry 4.0: connectivity. By leveraging these technologies, organizations can unlock the full potential of simulations, drive business success, and stay competitive in today's fast-paced market.
Ultimately, the effective management of simulation processes and data is critical to ensuring that simulations deliver tangible benefits. Implementing a robust system that integrates Industry 4.0 components, automates routine tasks, and prioritizes quality is essential.
Some organizations have implemented large-scale commercial platforms. This strategic decision is driven by the recognition that simulation management is not an isolated function, but rather an integral part of a broader digital transformation strategy. By investing in a comprehensive platform, companies can integrate simulation data and processes with other business functions, such as product lifecycle management, data analytics, and artificial intelligence. This enterprise-wide approach enables organizations to maximize their return on investment, improve collaboration across departments, and drive innovation throughout the entire product development cycle. The goal is to create a unified digital thread that weaves together simulation, design, testing, and manufacturing, ultimately leading to faster time-to-market, reduced costs, and improved product quality.
As the industry embarks on its next revolution, Artificial Intelligence (AI) is poised to play a pivotal role in transforming data-driven engineering. The expectation is that AI algorithms will be leveraged to support decision-making processes, enhancing the efficiency and accuracy of simulation-based design and validation. To prepare for this future, it is essential to develop a system that can seamlessly integrate AI capabilities into simulation management. This requires a strategic approach to designing a framework that can harness the power of AI, enabling real-time data analysis, predictive modeling, and informed decision-making.

The space sector is booming with opportunities, as access to space has increased significantly over the past few years. Private astronaut missions, originally aimed at space tourism, have become an opportunity for accelerated growth in the space sciences. This session will explore the role of HUNOR – Hungarian to Orbit and provide insights into the science behind the experiments.

Numerical simulation is at a turning point. From the early days of finite difference methods to the finite element and finite volume methods of today, engineers have built a quiet revolution on matrices and meshes. These “classical” methods already encode an extraordinary amount of structure: conservation laws, compatibility, and geometry are all woven into sparse algebraic systems. In this keynote, we revisit that legacy through a modern lens, showing that an assembled finite element model can be read as a deep, highly constrained computational graph - very close in spirit to a neural network, but one whose architecture is dictated by physics rather than statistics.
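As a rough illustration of this reading (a minimal sketch, not material from the keynote), consider how a small assembled finite element system behaves like a fixed, sparse "layer" in a computational graph: the mesh connectivity, not training, dictates the sparsity pattern, and applying the operator is simply a sparse matrix–vector product. The 1D toy problem and all names below are assumptions chosen for illustration only.

```python
# Illustrative sketch only: an assembled FE stiffness operator read as a
# fixed, sparse computational-graph "layer" (1D toy problem, unit property).
import numpy as np
import scipy.sparse as sp

n_elems = 8                      # uniform 1D mesh
n_nodes = n_elems + 1
h = 1.0 / n_elems                # element length
k_e = (1.0 / h) * np.array([[1.0, -1.0],
                            [-1.0, 1.0]])   # 2x2 element stiffness

rows, cols, vals = [], [], []
for e in range(n_elems):         # assembly: mesh topology fixes the sparsity
    dofs = (e, e + 1)
    for a in range(2):
        for b in range(2):
            rows.append(dofs[a]); cols.append(dofs[b]); vals.append(k_e[a, b])
K = sp.csr_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes))

u = np.linspace(0.0, 1.0, n_nodes)   # a nodal field
f = K @ u                            # "forward pass" through the
                                     # physics-dictated graph: a sparse matvec
print(f)
```

Every nonzero in K corresponds to a connection in the mesh; nothing about that structure is learned, which is exactly what distinguishes it from a statistically trained network.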
This reinterpretation invites us to reflect on the commonalities and differences between first principles-based analysis methods and those emerging from the rich field of Machine Learning. We reveal the fundamental challenge in physics simulation: properly representing structures embedded in the fabric of natural phenomena.
Such a realization reveals a powerful guiding concept that leads to Tonti's cell-based formulation of physics. By organizing physical laws on primal–dual cell complexes and cleanly separating topology, metric, and constitutive behavior, the so-called Cell Method reveals a common infrastructure underlying classical methods such as finite elements, finite volumes, finite differences, and spectral schemes. This separation distinguishes the scaffold that must remain exact (discrete balance laws and incidence relations) from what can be safely learned (constitutive closures, effective properties, and multiscale corrections). Instead of expecting neural networks to rediscover geometry and conservation laws from scratch, we can embed compact learning modules inside a rigorous discrete field theory, preserving structure while gaining flexibility.
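To make that separation concrete, here is a minimal, hypothetical sketch (assumed for illustration, not the speaker's formulation) of a 1D cell-method-style diffusion operator written as K = G^T M G: the incidence matrix G encodes topology exactly and is never touched, while the diagonal constitutive matrix M is the only ingredient one might replace with a learned closure.

```python
# Hypothetical sketch: exact topology (incidence matrix) versus a learnable
# constitutive closure in a 1D cell-method-style diffusion operator.
import numpy as np

n_nodes = 6
n_edges = n_nodes - 1

# Incidence matrix G (edges x nodes): pure topology, exact by construction.
G = np.zeros((n_edges, n_nodes))
for e in range(n_edges):
    G[e, e], G[e, e + 1] = -1.0, 1.0

# Constitutive (Hodge-like) matrix M: metric and material data; this is the
# part a learning module could supply without breaking conservation.
edge_lengths = np.full(n_edges, 1.0 / n_edges)
conductivity = np.ones(n_edges)          # stand-in for a learned closure
M = np.diag(conductivity / edge_lengths)

K = G.T @ M @ G                          # discrete balance law, K = G^T M G

# Constant potentials lie in the null space (discrete conservation) for any
# positive M, i.e. regardless of what the closure learns.
print(np.allclose(K @ np.ones(n_nodes), 0.0))    # True
```

The design point of the sketch is that conservation is guaranteed by G alone, so a learned M can be swapped in without any risk of violating the discrete balance law.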
Viewed this way, the future of simulation is not a choice between matrix solvers and machine learning, but a synthesis: intelligent platforms that understand meshes and models, that can propose and test new closures, which help us navigate uncertainty in complex designs. In this vision, artificial intelligence (AI) becomes a force multiplier, enabling a new role, that of an engineer–scientist. This synthetic intelligence running inside a machine becomes a co-scientist, working within the same cell complexes and variational principles we already trust.
The keynote will explore how this shift - from “physics-informed neural nets” to “cell-structured learning” and operator-centric thinking - could transform our practice in analysis, design, and certification, and why now is the right time to shape that transformation rather than just react to it.