
Verification & Validation – An Introduction

7-minute read
NAFEMS SGMWG - August 26th 2021

 

In the first of a series, the NAFEMS Simulation Governance & Management Working Group hosted a discussion introducing the concept of Verification and Validation. As well as reading the conversation here, you can watch the video on YouTube or listen to the audio on the NAFEMS Podcast.

The discussion was hosted by Chris Rogers, Chair of the NAFEMS Simulation Governance & Management Working Group, alongside members of the group, William Oberkampf (WO), Gregory Westwater (GW), Keith Meintjes (KM), and Alexander Karl (AK).


How would you define the terms 'Verification' and 'Validation'?

Q (GW) A common issue that I've seen is that there's a lot of confusion about what the terms verification and validation mean, and a lot of times they're used interchangeably. Can you give a brief definition of what those two terms mean for simulation governance?

A (WO) Okay, that's a good place to start, because I agree that these terms mean different things in different settings. In the computational simulation world, Verification deals with numerical solution reliability and accuracy, everything to do with the solution of the mathematical model. Validation deals with how well the model results compare to experimental measurements or observations. So those are two completely different issues. Verification has two parts: code verification, which deals with the testing of the software itself, and solution verification, which deals with the actual quantification of the numerical solution error. In validation we always try to compare the simulation data with the physical measurements; a comparison between data and reality.

 

Why worry about verification when you are using commercial software?

Q (GW) Let's say my company is using commercial software. We're paying thousands of dollars or euros per seat for that software, so isn't verification part of their responsibility? Why do I need to worry about it?

A (WO) That's a very common question, and I think every organization is asking it, because licenses for commercial software are relatively expensive, so they need to produce reliable results. The software companies do a lot of verification testing, with an emphasis on two parts. The first is called regression testing, which covers testing on different operating systems, comparison with regression suites, and version control. The second has to do with what we call code verification, in the sense of comparing numerical solutions with very accurate independent solutions. These are usually analytical solutions to the mathematical model, and our mathematical models are usually differential equations, either ordinary or partial, though they can be integral equations.

Regression and code verification testing are the primary responsibility of the software company. But we have found that as simulation has become more widely used, the code verification testing, in terms of accuracy compared to analytical solutions, has not been as robust as it should be in many cases. That's an area that needs to be improved on the software side. Also, sometimes it's appropriate for the organization that buys the license to do its own code verification testing. How much should be done depends on what the goals of simulation are in your own company.
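To make that concrete, here is a minimal sketch, in Python, of the kind of code verification test being described: a second-order finite difference solver for a simple 1D Poisson problem, compared against a known analytical solution on successively refined grids. The problem, the solver, and the grid sizes are illustrative choices, not anything taken from the discussion itself.

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0,
    using second-order central differences on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)
    # Tridiagonal matrix for (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f[i]
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return x, np.linalg.solve(A, f)

# Code verification: compare against the exact solution u = sin(pi x)
# on successively refined grids.
hs, errors = [], []
for n in (32, 64, 128):
    x, u = solve_poisson(n)
    hs.append(1.0 / (n + 1))
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

# The observed order of accuracy should approach 2 for this scheme.
for i in range(1, len(errors)):
    p = np.log(errors[i - 1] / errors[i]) / np.log(hs[i - 1] / hs[i])
    print(f"observed order of accuracy: {p:.2f}")
```

If the observed order of accuracy matches the theoretical order of the scheme (two, in this case), the code is solving its mathematical model correctly for this class of problems.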

(AK) An additional comment: it really comes down to the fact that you, not the simulation software provider, are still responsible for your simulation result. That's why it's good to understand what your software is actually doing and to do the verification yourself.

(GW) So if I understood correctly, some of the code verification is making sure that the code tells me two plus two equals four, and the software companies are generally handling that well, but the responsibility around the solution verification is where additional effort is needed, particularly by the end user? Is that what you mean?

(WO) I don't think that's quite right. There are two aspects of code verification. One of them is usually called regression testing: consistency checks, testing on different operating systems, and that sort of thing. But the other is the accuracy of the numerical solution to the mathematical model. You used the example of two plus two, but in the real world that we work in, these mathematical models are extremely complicated. That is, the mathematics. It's a calculus problem; it's always a calculus problem. These mathematical models are extraordinarily powerful, and the solutions for different boundary conditions, initial conditions, material properties, and so on also have to be tested by the software company.

And, as Alex said, you have to be certain that the accuracy of those solutions, judged against analytical solutions, is adequate. That testing is separate from solution verification, which is an error estimation problem; that's a different issue. In solution verification, you don't have analytical solutions to the problems you're working on. Those are typically what we call the application of interest, or the intended use.
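As an illustration of what solution verification can look like when no analytical solution exists, the sketch below applies Richardson extrapolation, one common error estimation technique, to results from three systematically refined grids. All of the numbers are hypothetical.

```python
import math

# Hypothetical results for a quantity of interest (e.g., a peak stress)
# computed on three systematically refined grids, coarse to fine.
f_coarse, f_medium, f_fine = 102.8, 100.9, 100.2
r = 2.0  # grid refinement ratio between successive grids

# Observed order of convergence from the three solutions
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Richardson-extrapolated estimate of the grid-converged value
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Estimated relative discretization error in the fine-grid solution
err_est = abs(f_fine - f_exact_est) / abs(f_exact_est)

print(f"observed order p         : {p:.2f}")
print(f"extrapolated value       : {f_exact_est:.2f}")
print(f"estimated fine-grid error: {100 * err_est:.2f}%")
```

The extrapolated value is not the true solution, but it gives a defensible estimate of how much discretization error remains in the fine-grid result.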

Why Worry about Validation?

Q (GW) On the topic of validation, I can understand why an end user needs to validate. I think that ties into the old adage: garbage in, garbage out. But say, in my work, all of my design work is subject to final approval via physical testing. In that case, is validation a necessity for me? Wouldn't it really just be a risk reduction activity?

A (WO) It can be thought of in terms of risk reduction, but people need to appreciate the capability of modeling and simulation, and of the computers and software that we have. For example, take linear elasticity theory. When you are first learning linear elasticity, you don't really appreciate the power of those equations. It is just unbelievable that there is actually an infinite number of solutions to linear elasticity problems. So, for every linear elasticity problem that has ever existed or will ever exist in the future, that software is supposed to be able to compute the solution accurately.

And so, what we want to do with the validation activity is to compare those simulation results, whether for linear elasticity or fluid mechanics (laminar flow, turbulent flow, reacting flow), with the experimental measurements that we have, because every mathematical model has approximations and assumptions built into it. For example, if you go from the general field of continuum mechanics down to linear elasticity, a tremendous number of assumptions get you to that point, and maybe those assumptions are not valid for your actual application. That's why we have to do comparison with experiment.
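As a hypothetical illustration of that comparison, the sketch below places a single simulation prediction alongside repeated experimental measurements of the same quantity. The numbers and the simple error metric are made up for illustration; real validation metrics would also account for experimental and simulation uncertainties.

```python
import statistics as stats

# All numbers are hypothetical: a single simulation prediction versus
# repeated measurements of the same quantity of interest.
simulation_prediction = 48.3                   # e.g., deflection in mm
measurements = [50.1, 49.6, 50.8, 49.9, 50.4]  # repeat experiments, mm

mean = stats.mean(measurements)
spread = stats.stdev(measurements)

# A simple validation comparison: model error relative to the
# experimental mean, reported alongside the experimental scatter.
error = simulation_prediction - mean
print(f"experimental mean: {mean:.2f} +/- {spread:.2f} mm")
print(f"model error      : {error:+.2f} mm ({100 * error / mean:+.1f}%)")
```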

(GW) So when you're talking about the assumptions there, you're talking not only about the assumption I'm making that a specific support type or constraint is appropriate, but about assumptions inherent to the numerical implementation of how we've chosen to represent the system.

(WO) Yes, and there are actually two parts to the assumptions. One is the assumptions in the formulation itself; staying with linear elasticity, for example, homogeneous, uniform materials. All of those assumptions are built into the equations themselves. But then, as you say, there are also completely different sets of equations that deal with the boundary conditions and the characterization of the system itself, the material properties; those are all additional pieces of information.

(GW) I think some of that is about cognizance. When I click on a boundary condition, I'm making an assumption there, but sometimes I forget about the assumptions that were inherent in choosing a linear elastic model over some other model, and so forth.

(WO) That's a really good example. A lot of young people have grown up around modern software packages. I grew up where we had to build our own software, essentially, for every simulation we ran. Now, switching from, let's say, a linear elasticity model to a plasticity model is just a click of a button, but with that click you have completely changed your mathematical model and the assumptions and approximations that you make. People don't really think about how powerful these software packages are.

How can I justify the effort and cost of a formal V&V system?

Q (GW) Say my organization has been using simulation successfully for decades without a formal V&V system. How do I justify to my boss that we need to go through that level of effort? Doesn't our past success speak for itself?

A (WO) That's a reasonable argument, but with simulation we keep moving to more complex problems: say, from laminar flow to turbulent flow, or from turbulent non-reacting flow to reacting or multi-phase flow. We continually expand the application space where we apply these models, and as we apply them across many different types of situations, we can't assume that our new solutions are as reliable as the old ones. Even if you stay with a given mathematical model, let's say linear elasticity, every time you change the loading or the boundary conditions, say from uniaxial to triaxial loading, you get a different load structure, a different stress and strain structure. Every one of those cases is different, so we always have to be testing, against validation experiments, how well our simulations compare with the real world.

(KM) You also need to know how good you are. You're not sitting in a static space; through Moore's law, for example, we know that in five years' time your computers are going to be ten times faster and cost half as much. And when your organization comes along and says, can we take prototypes out of our development process? That's when you really need to know how good your simulation is. You can't just rely on the fact that 'we've used it in the past'. So verification, and validation especially, will allow you to make those kinds of decisions and improve your processes.