
Q&A Session

Democratizing CAE 

Webinar Q&A Session

Due to the high number of questions asked during the "Democratizing CAE" webinar, our presenters took time afterwards to provide written responses to the questions that went unanswered during the live event due to time constraints. Questions asked and answered during the live event were captured in the webinar replay.

JB = Juan Betts

MP = Malcolm Panthaki

MT = Mike Tiller

GV = Glenn Valine


Q: I basically agree with everything that was said, but my question is: how do we teach the 8M engineers enough to detect when they "crash" in the simulation, before they "crash" in production?

JB: These Apps have controls to prevent users from getting into trouble. We’ve also implemented controls that detect endless loops or “crashing” of a given code. Over time, as you continually improve the process and determine failure modes, you make the application more robust to prevent these crashes.

MP: Expert analysts are able to review CAE results and use a set of rules to check their validity and accuracy. These rules can be “programmed” into the Simulation Apps and feedback provided to the non-expert engineers so they know when the analysis results are not accurate enough. In these cases, the non-experts will need to consult the experts for their advice and guidance. 
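As a rough illustration of the kind of run-time control described above, the sketch below wraps a solver invocation with a wall-clock limit and reports failures in plain language instead of a raw crash. The solver command, time limit, and messages are hypothetical and not taken from any of the tools discussed.

```python
import subprocess

# Hypothetical wall-clock limit chosen by the App author for this template.
MAX_SOLVER_SECONDS = 600

def run_solver(command):
    """Run a solver command, trapping hangs and crashes for the non-expert user."""
    try:
        result = subprocess.run(
            command,
            capture_output=True,
            text=True,
            timeout=MAX_SOLVER_SECONDS,  # stops runs that appear to loop endlessly
        )
    except subprocess.TimeoutExpired:
        return "The analysis did not finish in the expected time; an expert has been notified."
    if result.returncode != 0:
        # A non-zero exit code is treated as a "crash" and reported in plain language.
        return "The analysis stopped unexpectedly; an expert has been notified."
    return "Analysis completed successfully."

# Hypothetical usage with a made-up solver executable and input deck:
# print(run_solver(["my_solver", "bracket_model.inp"]))
```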

________________________________________________________________

Q: For which country (or is it worldwide) did you get the numbers of 750k experts, 8M engineers, etc.?

JB: These numbers are worldwide and focus on MCAE (rather than ECAE). They came out of a study by Cambashi, Beyond CAE and Front End Analytics last year; you can contact us if you would like more detail on the specifics behind them.

________________________________________________________________

Q: Will the October presentations be EASA customers?

JB: The October presentation will show end user cases using a variety of vendor tools. For example, GKN Driveline’s presentation during the intro used MSC SimManager.

MP: During each of the next three webinars in this series, there will be two simulation end-user presentations describing their experiences “democratizing CAE”. A variety of automation template authoring tools and web GUI development tools will be demonstrated, EASA amongst them. 

________________________________________________________________

Q: What is the role of VVUQ in CAE democratization?

MP: I cannot overemphasize the role of VV&UQ in making CAE democratization safe and robust. While this is critical when experts are performing simulation, it is even more so for non-experts. It is the responsibility of the experts who are embedding their rules in the automation templates to define the usable bounds of the Simulation Apps and to ensure that “analysis quality indicators” are exposed to the non-experts. All models that are embedded in these templates should be verified and validated, as always, if the results are going to be meaningful. 
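As a minimal sketch of what such a verification/validation gate might look like in practice, the example below compares an embedded model's predictions against reference test measurements before the template is released. The test data, model, and tolerance are purely illustrative and are not drawn from the webinar.

```python
# Illustrative validation gate: the template is only released if the embedded
# model reproduces a set of reference test points within a stated tolerance.
# The test data, model, and tolerance below are all hypothetical.
reference_tests = [
    # (load in N, measured deflection in mm)
    (1000.0, 0.42),
    (2000.0, 0.85),
    (3000.0, 1.31),
]

def predicted_deflection(load_n):
    """Stand-in for the embedded simulation model (hypothetical linear response)."""
    return 4.3e-4 * load_n

def validate(tolerance=0.05):
    """Return the test points where the model misses the measurement by more than the tolerance."""
    failures = []
    for load, measured in reference_tests:
        rel_error = abs(predicted_deflection(load) - measured) / abs(measured)
        if rel_error > tolerance:
            failures.append((load, rel_error))
    return failures

if __name__ == "__main__":
    bad = validate()
    print("Template validated" if not bad else f"Validation failed at: {bad}")
```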

________________________________________________________________

Q: This is great news for the OEMs. How do you think it will help the engineering services organizations?

JB: Absolutely! The margins of service organizations are being squeezed and productivity improvement requirements are now the norm. Intelligent Apps allow OEMs to more easily trust results, to have traceability into what was actually done by engineering services providers, and to ensure they share only the IP that service providers need. 

MT: One thing that democratization will help with is serving the needs of small and medium-sized organizations (either distinct companies or sub-organizations) by creating a distinction between producers and consumers. I see a future where engineering services organizations and others are able to create applications with engineering content (models, etc.) that can be distributed easily to those small and medium-sized organizations that don't have the resources to develop such things in house. 

MP: We are currently working with a few engineering services organizations here in the US and a few in China. They are excited about this new ability to bring a higher level of ROI to their customers by rapidly creating automation templates and web-deployed Simulation Apps for them. This level of service increases their value in the eyes of their customers. Also, these CAE service providers can create SimApps for multiple manufacturing companies in a given market segment, if they have expertise in that segment. We are currently working with a service provider that has a high level of expertise in gearbox, transmission and axle design. They are now providing a more complete solution to their customers – deep expertise in this domain, as well as a software solution that provides process automation and Simulation Apps. 

________________________________________________________________

Q: Can the speakers identify the specific branded software tools used in the cases presented here?

JB: One of the seven principles of Intelligent Apps is that they are agnostic in nature. This means that they can work with a variety of software tools in the market. There are specific SPDM tools and other Appification software produced by several vendors that make creation and deployment easier.

MT: Speaking for my presentation, the applications I presented as "Real-World" use Xogeny's Xenarius Generator product which generates applications that provide data management and job scheduling capabilities behind a modern web-based user interface. The underlying models shown were simply models that support FMI export (see http://fmi-standard.org). 

MP: The template authoring frameworks are most effective when they can work seamlessly across a variety of commercial and homegrown simulation & CAD tools, integrating simulation processes and data. The Comet Template Authoring environment is vendor-agnostic and has adaptors to a variety of commercial CAD, CAE and general math tools. The web (front-end) GUIs that were demonstrated were rapidly created using the EASA web GUI development framework. 

________________________________________________________________

Q: Do you think use of Apps will be initially focused on the general user or within a specific company?

JB: We believe both of these groups benefit from these Apps. Of course, as with any new technology, there are “early adopter” companies with executive support who make it their mission to gain strategic competitive advantage using these tools.

MT: I'm not sure I understand the question totally, but I would say that Apps will initially be used to support non-expert users.  They will provide a way to deploy engineering expertise to those who need analysis capabilities but are not able to create those analysis capabilities themselves. 

MP: The initial wave of Apps will be focused on the needs of particular companies – these Apps will adhere to a company’s best-practices and simulation standards. More general Apps will be customized to suit the particular needs of each company. IMO, after the first wave of Apps have become broadly adopted within companies, a second wave of Apps that are more “general-purpose” (i.e., usable by users from various companies) will emerge. I suspect that these users will be ones that have never run simulations directly within their organizations in the past. The October 27th webinar in this series will highlight such a user. 

________________________________________________________________

Q: Aren't you afraid that by giving "anyone" the ability to do simulation, you might induce errors when analyzing results? How can you prevent this?

JB: These apps are built with controls that prevent a non-expert user from creating invalid designs and analyses. The complexity of these controls can range from simply bounding the inputs to complex feedback/feed-forward controls, depending on the use case and user experience.  

MT: The model developers are not only responsible for developing the simulations but also for formulating criteria for their use.  There are many ways to address such errors.  First is prevention.  If the software is able to identify poorly formulated designs or simulation conditions, it can prevent a simulation from taking place.  If a simulation is performed, then there can be criteria on the results themselves to make sure none of the results are suspicious.  It is also possible for the simulations that are being performed to be inspected or "certified" in some way by an expert when people wish to take actions based on those results. 

MP: It is the responsibility of the experts who are creating the automation templates (and rules) to ensure that the Simulation Apps are as safe to use as possible, and to expose “analysis quality indicators” to non-expert users. Ultimately, there is no substitute for good engineering intuition – every user should be trained to recognize invalid results when they are “obviously wrong”. 

________________________________________________________________

Q: How do you ensure that users are entering data within the validity range (basic hypothesis) originally intended by the App developer? For me this checking activity takes more time and effort than the modelling itself.

JB: These apps are not meant to replace general purpose FEA tools. There will always be a need for experts to use general purpose FEA tools in product development. However, these apps may expand an expert’s reach across the enterprise for repetitive tasks or for non-experts to benefit from analysis.

MT: The applications I showed already include the ability to set limits on parameters.  These limits can either be "soft" (warning the user that a value is likely outside the acceptable range but allowing simulation anyway) or "hard" (values outside the range are strictly not allowed).  Furthermore, the underlying models themselves can detect if they ever "stray" into invalid regions and terminate the simulation with a diagnostic message indicating that some fundamental assumption has been violated. 
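A minimal sketch of the soft/hard limit idea follows; the parameter name, bounds, and return values are hypothetical and are not taken from the applications that were presented.

```python
# Hypothetical limits an App author might attach to a single input parameter.
LIMITS = {
    "plate_thickness_mm": {"hard": (0.5, 50.0), "soft": (1.0, 25.0)},
}

def check_input(name, value):
    """Classify a user-entered value as 'ok', 'warn' (soft limit), or 'reject' (hard limit)."""
    hard_lo, hard_hi = LIMITS[name]["hard"]
    soft_lo, soft_hi = LIMITS[name]["soft"]
    if not hard_lo <= value <= hard_hi:
        return "reject"  # hard limit: the simulation is not allowed to run
    if not soft_lo <= value <= soft_hi:
        return "warn"    # soft limit: warn the user but allow the run anyway
    return "ok"

# Examples:
# check_input("plate_thickness_mm", 10.0)  -> "ok"
# check_input("plate_thickness_mm", 0.8)   -> "warn"
# check_input("plate_thickness_mm", 100.0) -> "reject"
```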

________________________________________________________________

Q: What will it take to democratize CAE so that the 80M potential users as well as the 8M engineers can take advantage of it?

JB: Like the introduction of many technologies, there is the typical S-curve adoption. Early adopters take the lead and experience the benefits of this paradigm shift. As others see this benefit, they start adopting these technologies as well. Throughout this process there will be successes and failures (lessons learned) that make this technology and its implementation more robust for the early majority.

________________________________________________________________

Q: What tools are available for creating these apps?  Our simulation experts are not generally experts in HTML, XML, etc. programming.

JB: There are tools out there, like EASA, with codeless authoring environments that create all the HTML and XML coding.

MT: The applications I showed were produced using Xogeny's Xenarius Generator.  It does not require any knowledge of HTML, XML, Javascript or CSS.  The applications are produced by transforming existing engineering information (in the demos, using FMU - http://fmi-standard.org).  This information is "captured" in the native model development environment tools.  The engineers only need to be familiar with the tools they are already using. 
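The Xenarius-generated applications themselves were not shown in code, but as a rough, purely illustrative sketch of how an FMI-compliant model (FMU) can be driven programmatically, the example below uses the open-source FMPy library. FMPy, the FMU file name, and the parameter are assumptions and were not part of the webinar.

```python
# Illustrative only: run an exported FMU with the open-source FMPy library
# (pip install fmpy). The FMU file name and parameter are hypothetical.
from fmpy import simulate_fmu

result = simulate_fmu(
    "heat_exchanger.fmu",                 # FMU exported from the modeling tool
    start_values={"massFlowRate": 0.25},  # hypothetical model parameter
    stop_time=100.0,
)

# 'result' is a structured NumPy array holding the simulated output trajectories.
print(result.dtype.names)
```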

MP: I believe that it is critical that the Simulation Template authoring environment and the Web GUI authoring environment be as graphical as possible, minimizing the need for programming and scripting. Unfortunately, we have found that scripting cannot be completely avoided as our customers wish to do things we do not support graphically. However, with these criteria met, we have shown that complex Simulation Templates and Apps can be developed in a few days and then easily maintained, as they evolve. 

________________________________________________________________

Q: Yes, but an expert can rapidly check whether the results are reasonable; a non-expert does not necessarily know how, so he cannot know if this or that app is valid.

JB: Yes, some of these apps automatically alert experts when results don’t make sense. This is built into the platform by the expert and is not for the non-expert user to determine.

MT: The applications I showed already include the ability to set limits on parameters.  These limits can either be "soft" (warning the user that a value is likely outside the acceptable range but allowing simulation anyway) or "hard" (values outside the range are strictly not allowed).  Furthermore, the underlying models themselves can detect if they ever "stray" into invalid regions and terminate the simulation with a diagnostic message indicating that some fundamental assumption has been violated. 

MP: By codifying their analysis checks into the template and by exposing “analysis quality indicators” to the web GUI, App developers can ensure that non-experts are given guidance on evaluating the quality of the analysis results. By definition, they will never be as good as the experts in making this evaluation, but that is not a requirement for making Apps robust and safe to use. As Juan Betts suggested in his presentation, if we needed to be experts in the mechanics and physics of automobiles to drive them, then few people would drive!

________________________________________________________________

Q: Since FEA is used in many of the Apps shown in the presentation, what are the objective metrics used for assessing the errors of approximation?

JB: These metrics are built into the App itself. They can include the typical FEA error metrics such as element distortion, smoothness, etc.

MP: I would turn the question back to the experts: “What metrics do you use to assess errors of approximation in your analyses?” It is exactly these metrics that must be embedded in the Apps to make them safe to use by non-experts. The metrics, while often deeply mathematical in nature, should be exposed in a manner that is easily understood and digested by non-experts. 
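As a small illustration of how one such metric could be computed and turned into non-expert feedback, the sketch below flags a distorted triangular element using a simple edge-length aspect ratio. The element data and threshold are hypothetical, and real Apps would rely on the metrics built into the underlying FEA code.

```python
import math

def edge_lengths(nodes):
    """Edge lengths of a triangular element given three (x, y) node coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = nodes
    return [
        math.hypot(x2 - x1, y2 - y1),
        math.hypot(x3 - x2, y3 - y2),
        math.hypot(x1 - x3, y1 - y3),
    ]

def aspect_ratio(nodes):
    """Longest edge divided by shortest edge: a simple element-distortion indicator."""
    lengths = edge_lengths(nodes)
    return max(lengths) / min(lengths)

# Hypothetical threshold the App author might choose for this template.
MAX_ASPECT_RATIO = 5.0

element = [(0.0, 0.0), (10.0, 0.0), (0.5, 0.4)]  # a sliver-shaped triangle
if aspect_ratio(element) > MAX_ASPECT_RATIO:
    print("Mesh quality warning: results near this region may be inaccurate.")
```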

________________________________________________________________

Q: How would a general user interpret the validity of the solution produced by the web app?

JB: Non-expert users do not interpret the validity of results. These apps need to go through a validation process to ensure failure modes are prevented from occurring. This means building controls that prevent non-expert users from creating invalid designs or analyses. The complexity of these controls can range from simply bounding the inputs to complex feedback/feed-forward controls, depending on the use case and user experience.

________________________________________________________________

Q: I believe my preferred FEM tool (COMSOL Multiphysics) has all of what you are talking about here, including broad CAD interfaces and multiphysics modelling capability, with a new App platform.

JB: Yes, there are many tools out there that are developing this capability and more are likely to come as this industry grows.

MP: Then you have found an excellent tool that meets your needs! As with any major set of problems, multiple commercial (and in-house) solutions often emerge, each with their own focus and differentiating characteristics. 

________________________________________________________________

Q: Is it possible to have a demo of the web apps shown in the last presentation?

All: Yes, please contact the speakers directly. Each name in the speaker list at the top of this page is hyperlinked to an email address.