
Questions & Answers


The following questions were raised during the Q&A session but could not be answered live.

1) What are the pitfalls to avoid when running a design of experiments?

As briefly said during the session, most of the errors come from the human steps. The preparation and exploration steps are where the pitfalls lie. Above all, it is crucial to carry them out within a collaborative team. The mix of skills and knowledge is the best guarantee for justifying the choice of objectives, steps, design spaces, resources and planning. Otherwise, one can run into a dead end or, even worse, into a situation where a hypothesis is taken as an a priori fact.

The other source of pitfalls appears during the exploration of results. Again, a collaborative team can avoid rejecting a factor whose main effect is negligible but which interacts strongly with another one.

Let’s stay critical throughout these crucial steps.

2) There are many software packages that include DOE, e.g. Minitab, SPSS, SAS. What do you think about using software compared to manual design?

I also gave some elements of an answer to this point. What is important is to understand what a piece of software provides in a specific situation. The best way to do so is to go through the process “by hand” once. Commercial software tends to make the user lazy, but at the same time provides an ergonomic GUI.

As when using a formula, where a sense of scale helps greatly to validate a result, it is highly advisable to compare the results with the team’s intuitions, whatever the software used. In addition, these tools generally offer ways to browse the results through different views. Useful software is not only DOE-oriented: Principal Component Analysis (PCA), for example, gives immediate leads to follow.
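
As a minimal sketch of that last point in Python (the data here is random placeholder data, and the column meanings are invented), scikit-learn's PCA gives such leads right away:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder response data: 20 experiments, 5 measured outputs.
rng = np.random.default_rng(seed=0)
results = rng.normal(size=(20, 5))

# Project the experiments onto their principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(results)

# The explained variance ratio tells which directions dominate,
# a first lead on which combinations of outputs matter most.
print(pca.explained_variance_ratio_)
print(scores[:3])  # coordinates of the first experiments in PC space
```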

3) Did you also work with adaptive DoE methods? What is your experience?

Within a collaborative project funded by the French research ministry, I experienced the benefits that this method can bring compared to a classical a priori DOE. I would say that these benefits (about 30% better) are potential and not always realised, since the adaptive approach needs parameters to be set up within a function to optimize (the Kriging method). This setup is not stable, so it is difficult to establish a procedure that is valid every time. But it is worth persevering in this direction.
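
For illustration only, and not the exact procedure used in that project, here is a minimal adaptive loop with a Gaussian-process (Kriging-type) surrogate from scikit-learn: fit the surrogate on the runs done so far, then pick the candidate with the largest predictive uncertainty as the next experiment. The kernel and its parameters are assumptions, and they are precisely the unstable setup mentioned above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_experiment(x):
    # Stand-in for a real experiment or simulation.
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(4, 1))        # initial a priori runs
y = expensive_experiment(X).ravel()
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

for _ in range(5):
    # The kernel hyperparameters are re-estimated at every fit;
    # this is the parameter setup that is hard to stabilise.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]   # most uncertain point
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_experiment(x_next))
```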


4) In your opinion, what key improvements have been made over the last two decades to extend the applicability and usability of design of experiments?

Progress is evident in the academic world, both strictly in the fundamentals (statistics and mathematics to optimize design-space screening) and in tools, which integrate more and more facilities.

But in industry, progress in applying these methods will come from better integration into the organisation. In short: the earlier in the development process, the better!

5) Should we always consider that all experiments are equally important? In engineering, datasets are not homogeneous (e.g. experimental/numerical, low/high accuracy).

ANOVA gives some useful information about possible singularities. In particular, Cook’s distance says how much an experiment disturbs the whole model. If accuracy is not constant over the design space, then some assumptions required by ANOVA do not apply; see homoscedasticity and question 11.
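
For instance, with Python’s statsmodels (the data below is invented, with one deliberately corrupted run), Cook’s distance per experiment is readily available from a fitted model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up two-factor experiment; one run is deliberately corrupted.
rng = np.random.default_rng(2)
df = pd.DataFrame({"A": [-1, -1, 1, 1] * 2, "B": [-1, 1, -1, 1] * 2})
df["y"] = 3 + 2 * df.A - df.B + rng.normal(scale=0.1, size=8)
df.loc[0, "y"] += 5                      # a disturbing experiment

model = smf.ols("y ~ A * B", data=df).fit()
cooks_d = model.get_influence().cooks_distance[0]
print(cooks_d.round(2))                  # run 0 stands out
```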

Regarding the importance of some experiments compared to others, consider that the formal criterion for ranking experiments is the gain in power, and this criterion is reached with different matrix optimality properties (A-optimal, D-optimal, trace, eigenvalues…).
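
These optimality criteria are scalar summaries of the information matrix X'X of the design. As a hedged sketch with two toy candidate designs, the D criterion is simply the determinant of that matrix:

```python
import numpy as np

def d_criterion(X):
    """D-optimality: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

# Two toy candidate designs (columns: intercept, A, B).
full_factorial = np.array([[1, a, b] for a in (-1, 1) for b in (-1, 1)])
clustered = np.array([[1, -1, -1], [1, -1, 1], [1, -0.9, -1], [1, 1, 1]])

# The factorial spreads information evenly and wins on the D criterion;
# A-optimality would use the trace of the inverse instead.
print(d_criterion(full_factorial), d_criterion(clustered))
```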


6) I hope we get the cookie recipe at the end

The best thing you can do is to build yourself a simple DOE to optimize the taste or flavour, for example, with just three ingredients. As I tried to present, recipes are highly representative of all the points to manage in DOE methods. The course I will give under the NAFEMS organization next fall is very “recipe” oriented.
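
As a hedged starting point (the ingredients and levels below are pure invention), a two-level full factorial for three ingredients takes only a few lines in Python:

```python
from itertools import product

# Hypothetical ingredients, each at a low/high level.
levels = {"sugar_g": (80, 120), "butter_g": (100, 150), "choc_g": (50, 100)}

# 2^3 = 8 runs covering every combination: bake, taste, record a score.
runs = list(product(*levels.values()))
for i, run in enumerate(runs, start=1):
    print(i, dict(zip(levels, run)))
```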

7) Is there a best practice to follow when it comes to validating a DoE with another one? For example, performing a screening DoE such as a full factorial and a space-filling DoE such as Latin Hypercube, and validating them against each other?

Let’s take care to avoid a possible confusion. The validation of a DOE, that is, of the steps and results obtained, follows some strict rules belonging to ANOVA. Once that is done, the comparison with a previous DOE, performed with the same rigour but potentially with another way of navigating the design space, might be very useful; not to exclude one of them, but rather to let them complement each other.

If the objectives and the factor levels are not compatible with a factorial design, other DOEs (LHS…) may be used, taking into account the loss of knowledge. The results may lead to extending this first reduced DOE towards the factorial one, even partially; there are methods for that, for example D-optimal augmentation. In other words, DOE may be understood as an iterative process using different methods to converge to the initial objective.
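
For illustration, both kinds of design can be generated in a few lines with Python (coded units and the run budget here are arbitrary choices):

```python
from itertools import product
import numpy as np
from scipy.stats import qmc

# 2^3 full factorial in coded units (-1/+1): good for screening effects.
factorial = np.array(list(product((-1, 1), repeat=3)))

# Latin Hypercube with the same budget: good for filling the space.
lhs = qmc.LatinHypercube(d=3, seed=0).random(n=8)
lhs = qmc.scale(lhs, l_bounds=[-1, -1, -1], u_bounds=[1, 1, 1])

# The two explore the same cube differently; compare the fitted
# models from each rather than expecting identical point sets.
print(factorial)
print(lhs.round(2))
```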

8) How do I determine the form of the polynomial?

The polynomial may be either an objective or the result of a blind application of the DOE process.

In the first case, the collaborative team has established, for example, that there are only 3 factors, only two interactions, and one quadratic term concerning one of the factors. The symbolic polynomial would look like this:

Y = I + A + B + C + AC + BC + C²
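
For the first case, a minimal sketch with Python’s statsmodels (the data frame below is invented for illustration) would write and fit exactly this model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up runs; in practice df holds the real design and responses.
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.uniform(-1, 1, size=(15, 3)), columns=list("ABC"))
df["Y"] = 1 + df.A + 2 * df.B - df.C + df.A * df.C + df.C**2 \
          + rng.normal(scale=0.1, size=15)

# The intercept I is implicit; I(C**2) adds the quadratic term.
model = smf.ols("Y ~ A + B + C + A:C + B:C + I(C**2)", data=df).fit()
print(model.params.round(2))
```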

In the second case, when nothing is known and the factorial design is used, ANOVA will give the terms of the polynomial together with their significance. So there is no hypothesis, and the polynomial could contain all the main terms + interactions + quadratic and possibly higher-order terms if more levels are used.

9) What is the value of the F-test for a term to be significant?

To appreciate this, the whole definition and calculation of this value in the ANOVA method has to be understood. Roughly, the threshold considered is the probability of observing an event due to pure error instead of a factor effect. Typically, this threshold is set to 5%; this is the so-called “p-value”. Below 5% the event, or term, is significant; otherwise it is not!
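
Numerically, the p-value follows from the F statistic and its two degrees of freedom; a small sketch with SciPy, using invented values:

```python
from scipy.stats import f

# Invented example: F statistic of a term with 1 and 12 degrees
# of freedom (numerator: the term, denominator: the error).
F_value, dfn, dfd = 9.3, 1, 12
p_value = f.sf(F_value, dfn, dfd)   # survival function = P(F >= F_value)

# Significant at the usual 5 % threshold when p < 0.05.
print(round(p_value, 4), p_value < 0.05)
```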

10) To reduce the number of experiments, why not use Box-Behnken, Taguchi or other methods?

These methods are proposed for specific situations. For example, a Plackett-Burman design samples the design space in the best way when no interactions are expected, and the Central Composite Design (CCD) targets non-linear behaviour. Again, many less costly DOEs than the factorial are possible, as long as a priori knowledge justifies them.
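
For reference, a sketch assuming the third-party pyDOE2 package (other DOE libraries offer equivalent generators):

```python
# pip install pyDOE2  (third-party package; an assumption here)
from pyDOE2 import pbdesign, ccdesign

# Plackett-Burman: screens the main effects of 5 factors in few runs,
# valid when no interactions are expected.
pb = pbdesign(5)

# Central Composite Design: adds axial and centre points to capture
# non-linear (quadratic) behaviour for 3 factors.
ccd = ccdesign(3, center=(2, 2))

print(pb.shape, ccd.shape)
```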

11) What about the 'quality' of each experiment? How repeatable is the experiment? How accurately does the measured value reflect the values of interest?

The quality of the DOE is globally quantified by the power value, which stands for the probability of detecting a real effect or of rejecting a spurious one.

The quality of each experiment is under the experimenter’s responsibility, and it is part of what the collaborative team has to deal with.

Repeating the experiments first allows the variance at a point of the design space to be reduced, but also, and perhaps especially, allows the pure error to be evaluated, which is the basis of the F-test for significance.

One of the benefits of DOE methods is to obtain quantitative and qualitative results that are relatively insensitive to the accuracy of each experiment, as long as the lack of accuracy is roughly constant across them. This is the assumption of homoscedasticity.
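
As a small illustration with invented replicate data, the pure-error variance is estimated from the spread of each repeated run around its own mean:

```python
import numpy as np

# Invented responses: each design point replicated three times.
replicates = {
    "run_1": [10.2, 10.5, 10.3],
    "run_2": [14.9, 15.1, 15.0],
    "run_3": [12.0, 12.4, 12.2],
}

# Pure-error sum of squares: spread of each run around its own mean.
ss_pe = sum(np.sum((np.array(y) - np.mean(y)) ** 2)
            for y in replicates.values())
df_pe = sum(len(y) - 1 for y in replicates.values())
print("pure-error variance:", round(ss_pe / df_pe, 4))

# Roughly similar within-run variances support homoscedasticity.
print([round(np.var(y, ddof=1), 4) for y in replicates.values()])
```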

12) Please share the open source tool list for DOE

The only open-source tool I am using for these DOE applications is R. R is not a software package as such, but a language dedicated to statistics and, more generally, to data analysis.

It is so powerful and so rich in packages that the question is not “how do I do this?” but rather “where do I find what has already been done?”. In addition, the user community is extremely large.

R has the capability to exchange and cooperate with Python, and it is just a question of time to dress the scripts up in order to make them readable and deployable with a comfortable GUI.
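
For example, a minimal sketch assuming the rpy2 package and a local R installation: a Python script runs a small R fit and pulls the coefficients back.

```python
# pip install rpy2  (requires a local R installation; an assumption here)
import rpy2.robjects as ro

# Run a tiny linear fit in R and pull the coefficients back to Python.
ro.r("""
x <- c(1, 2, 3, 4)
y <- c(2.1, 3.9, 6.2, 8.1)
fit <- lm(y ~ x)
""")
coefs = list(ro.r("coef(fit)"))
print(coefs)   # intercept and slope, now ordinary Python floats
```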