As mentioned in the last Knowledge Base article, conventional analysis techniques involve the use of safety factors as a way of accounting for variation in analysis input parameters. This can often result in overly conservative designs. By contrast, probabilistic analysis describes a process where the variation in input parameters can be directly represented in a model.
Traditionally, engineering analysis models have used specific numerical input values: material properties have discrete values, and nominal or minimum material dimensions are used. This is deterministic analysis. However, the validity, or degree of conservatism, of the results from such analyses depends on the real-life variability or uncertainty of the input values. In some situations, accounting for this variability within the analysis can be critical, and is often more cost-effective than over-designing products with expensive materials or manufacturing processes.
In reality, every aspect of an analysis model is subject to scatter (in other words, is uncertain in some way). Material property values, for example, have inherent scatter, which itself differs between material types and properties: the scatter in Young's modulus for many engineering materials could be described by a Gaussian distribution with a standard deviation of around ±3%. Similarly, component dimensions can only be reproduced within certain manufacturing tolerances. The same variation can also apply to finite element model loads.
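As an illustration of how such scatter might be represented, the sketch below samples a Gaussian distribution for Young's modulus. The nominal value of 210 GPa is an assumed figure for a generic steel, not taken from this article:

```python
import random
import statistics

random.seed(0)

# Assumed nominal Young's modulus for a generic steel, in GPa
E_nominal = 210.0
# The article quotes scatter of around 3%; model it as one standard deviation
E_sigma = 0.03 * E_nominal

# Draw sampled modulus values from a Gaussian (normal) distribution
samples = [random.gauss(E_nominal, E_sigma) for _ in range(100_000)]

mean_E = statistics.mean(samples)
std_E = statistics.stdev(samples)
print(f"mean = {mean_E:.1f} GPa, standard deviation = {std_E:.1f} GPa")
```

Each sampled value could then be fed into a model run, rather than using the single nominal figure a deterministic analysis would assume.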
Although it is possible to account for one distributed variable in a deterministic analysis by using some basic statistics, it is when a number of input variables each have a well-understood distribution that probabilistic analysis is most useful. Multiple distributed inputs can interact in unpredictable ways, in some cases giving higher than expected probabilities of undesirable outcomes such as structural failure. Only probabilistic analysis can represent this.
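A minimal Monte Carlo sketch of this interaction, using a hypothetical plate in tension; all nominal values and scatters here are illustrative assumptions, not figures from the article:

```python
import random

random.seed(1)

N = 200_000
failures = 0

# Hypothetical plate in tension: stress = load / (width * thickness).
# Each input gets its own Gaussian scatter; all values are illustrative.
for _ in range(N):
    load = random.gauss(100e3, 8e3)          # applied load, N
    width = random.gauss(0.050, 0.001)       # plate width, m
    thickness = random.gauss(0.010, 0.0004)  # plate thickness, m
    strength = random.gauss(250e6, 20e6)     # material strength, Pa

    stress = load / (width * thickness)
    if stress > strength:
        failures += 1

p_fail = failures / N
print(f"estimated failure probability: {p_fail:.3f}")
```

With every input held at its nominal value the stress is 200 MPa against a 250 MPa strength, so a deterministic check passes comfortably; only when the scatters of all four inputs are combined does a non-trivial failure probability emerge.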
Variability is represented in a probabilistic analysis by using statistical distribution functions, rather than single values, to describe each input. A probabilistic analysis yields the probability of a critical result exceeding a certain value, but also the sensitivity of that result to variations in each input. This can be useful in determining, for example, the effect of loosening tolerances and/or changing material quality on product returns or warranty claims. Given that many analyses are already complicated enough, there may be resistance to the idea of introducing the greater complexity of a probabilistic analysis into the design cycle.
A commercial justification can be made if the extra cost of probabilistic analysis, which might allow a wider range of scatter in an input variable to be accepted, is less than the process cost of trying to reduce that scatter.
Inherent in this justification, and in the use of probabilistic analyses in general, is the concept of an acceptable level of product failure, however small. This may not be appropriate for some products or applications, and may be effectively prohibited if the applicable standards or regulatory bodies specify failure criteria based on a deterministic approach. Coming to terms with the concept of acceptable levels of failure may also deter managers from accepting this approach, even though it is ultimately more relevant to safe product design than the unreal absolutism of a deterministic analysis. The public may also (perhaps rightly) be concerned that, for example, aircraft components designed for a fixed life should never fail prematurely; yet the laws of probability mean that some inevitably will (component failure is not the same as a crash), even while air travel remains a very safe mode of transport.
Many design standards now appear simplistic or crude, but this simplicity was a necessary feature for safe structural design before computers able to solve large systems of simultaneous equations became widely available. These standards nonetheless often have a long history of success, instilling confidence in their ability to produce safe designs. However, continually extending their use to more complex structures may eventually overstretch their remit.
It is often straightforward to account for one (and only one) distributed variable in a deterministic analysis by using some basic statistics. Various standards for weld fatigue design in steel and aluminium (such as BS 7608, BS 5400 and BS 8118), for example, allow welds to be designed with a specified failure probability, and these can be used in conjunction with a normal deterministic FEA. The probability distribution of a single input parameter (in this case weld strength) can be considered by simply comparing predicted stresses from a deterministic analysis with a single value of maximum allowable stress taken from the standard, as a post-processing step. The allowable stress equates to a known failure probability for the weld configuration in question. This is different to a probabilistic analysis, because it considers the probability distribution of only one variable (weld strength), while variations in loads, material properties and so on are not considered. An issue often neglected in this approach is accounting for the number of welds (or, more specifically, the number of critical welds) in a structure: if each of a hundred critical welds is allowed a 1% probability of failure, the probability that at least one of them fails is far higher than 1%.
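The scale of the problem can be seen from the standard series-system result for independent failures. Assuming independence between welds (itself an assumption, and often a generous one):

```python
# Probability that at least one of n independent welds fails, when each
# weld is individually allowed a failure probability p
p = 0.01   # per-weld failure probability permitted by the standard
n = 100    # number of critical welds in the structure

p_structure = 1 - (1 - p) ** n
print(f"probability of at least one weld failure: {p_structure:.3f}")
```

For a hundred welds this comes to roughly 63%, which is why a per-weld allowable failure probability cannot simply be carried over to the structure as a whole.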