The term “Digital Twin” was defined by Dr. Michael Grieves at the University of Michigan around 2001. He defines a Digital Twin as a digital replica of manufactured physical assets, processes, and systems. He promoted the idea of comparing a Digital Twin to its engineering design to better understand what was produced versus what was designed, tightening the feedback loop between design and execution.
A Digital Twin begins with the existence of a physical part, as early as the first prototype, and lasts until the product's end of life. Its life can be divided into three major phases: twins for design, twins for manufacturing, and twins for operations.

Physics-based models built on numerical methods such as Finite Element Analysis (FEA) have served as digital twins for the design phase since the 1960s, and today they are ubiquitous design tools used to find optimal designs faster. The same tools are also used to predict a part's response to a manufacturing process, allowing engineers to account for manufacturing effects during design and avoid issues later in the product lifecycle.

Concepts behind twins for operations, such as machine learning, have likewise existed since the 1960s, but their use became widespread only recently thanks to advances in data pipelining and data science. These data-based predictive models are used for diagnostics (anomaly detection and root-cause analysis) and prognostics (remaining-useful-life prediction) of engineering systems, reducing scheduled maintenance and eliminating costly failures.

The two types of predictive models also complement each other: physics-based models can be improved with information from real-time data, such as the identification of critical load cases that occur in the operating environment but were missed during design. Conversely, real-time data can be supplemented with data from physics-based models for conditions that the measured data does not cover.
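To make the diagnostics idea concrete, the following is a minimal sketch of statistical anomaly detection on operational sensor data: readings are compared against a baseline window recorded under healthy conditions, and any reading whose deviation exceeds a z-score threshold is flagged. The function name, the example values, and the 3-sigma threshold are all illustrative assumptions, not part of the original text; production diagnostics for a digital twin would use far richer models.

```python
# Illustrative sketch only: flag sensor readings that deviate strongly
# from a healthy-baseline window using a simple z-score test.
from statistics import mean, stdev

def detect_anomalies(baseline, readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the baseline mean (hypothetical helper)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, x in enumerate(readings)
            if abs(x - mu) > threshold * sigma]

# Baseline recorded while the system is known to be healthy.
baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]
# Live readings with one injected fault (14.5).
readings = [10.1, 9.9, 14.5, 10.0]
print(detect_anomalies(baseline, readings))  # [2]
```

A flagged index would then feed root-cause analysis; the same baseline-versus-live comparison is also where physics-based model output can stand in for operating conditions that the measured data does not cover.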