
Extended Reality (XR) - The Future of AI System Training?

This paper was produced for the 2019 NAFEMS World Congress in Quebec, Canada.

Resource Abstract

Artificial Intelligence (AI) is making a huge impact on our lives: personal assistants that use natural language processing to understand us, facial recognition that sorts our photos, and soon cars that will drive us autonomously. Machine learning, a subset of general AI, utilizes multiple architectures and methods, including deep neural networks, reinforcement learning, and imitation learning, to solve complex problems. Reinforcement learning enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences. In contrast, a method called imitation learning follows the actions of an outside teacher (usually a human) to learn a task. Imitation learning is often utilized as a precursor to reinforcement learning.
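To make the trial-and-error versus teacher-following distinction concrete, the sketch below shows a minimal tabular Q-learning agent on a toy corridor environment, optionally warm-started from teacher demonstrations in the spirit of imitation learning. This is an illustrative sketch, not the paper's implementation; all names (Corridor, q_learn) and the reward values are hypothetical.

```python
import random

# Toy 1-D corridor: agent starts at 0, goal at length-1; actions: 0 = left, 1 = right.
class Corridor:
    def __init__(self, length=8):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = max(0, min(self.length - 1, self.state + (1 if action == 1 else -1)))
        done = self.state == self.length - 1
        reward = 1.0 if done else -0.01  # feedback from the agent's own actions
        return self.state, reward, done

def q_learn(env, episodes=200, alpha=0.5, gamma=0.95, epsilon=0.1, demos=None):
    q = [[0.0, 0.0] for _ in range(env.length)]
    # Imitation-learning-style warm start: seed Q-values from teacher demonstrations.
    if demos:
        for state, action in demos:
            q[state][action] += 1.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
            s2, r, done = env.step(a)
            # Trial-and-error update: move Q(s, a) toward reward + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

env = Corridor()
teacher = [(s, 1) for s in range(env.length - 1)]  # teacher always moves right
q = q_learn(env, demos=teacher)
print([row.index(max(row)) for row in q])  # learned policy: 1 (right) at every non-goal state
```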



This paper examines how extended reality (XR) can be utilized to create synthetic training data for deep neural networks, and virtual environments for imitation and reinforcement learning. Unlike standard virtual reality, which includes spatial environments, user navigation and basic object interactions, XR environments can include virtual machinery, cameras and sensors, control software, human avatars and much more. Validated XR environments create a true digital twin of a corresponding physical system. Products and systems, containing both hardware and software, can be developed within the digital twin at a much faster pace and lower cost than with physical prototypes. Machine learning training data created synthetically in these systems replaces physical data that is difficult, costly or even impossible to acquire. In addition, use of synthetic data allows for domain randomization (DR), a technique that has been shown to improve results over physical data alone, as it helps the neural network ignore spurious features in the training dataset [1].
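Domain randomization itself is simple to express: each synthetic frame is rendered under randomly perturbed nuisance parameters (lighting, textures, camera pose, clutter), so the only signal stable across the dataset is the task-relevant one. The fragment below is a minimal sketch assuming a hypothetical render(params) callable standing in for the XR environment's renderer; the parameter names and ranges are illustrative, not taken from the paper.

```python
import random

def randomized_scene_params():
    """Sample nuisance parameters for one synthetic frame.
    Ranges are illustrative placeholders, not values from the paper."""
    return {
        "light_intensity": random.uniform(0.2, 2.0),
        "light_color": [random.uniform(0.7, 1.0) for _ in range(3)],
        "camera_jitter_deg": random.uniform(-10.0, 10.0),
        "texture_id": random.randrange(50),         # swap surface textures
        "distractor_count": random.randrange(0, 6), # add clutter objects
    }

def generate_training_set(render, n_frames=10_000):
    """render(params) is a stand-in for the XR environment's renderer and
    returns (image, label). Randomizing the nuisance parameters forces the
    network to ignore spurious features and latch onto task-relevant ones."""
    dataset = []
    for _ in range(n_frames):
        params = randomized_scene_params()
        image, label = render(params)
        dataset.append((image, label))
    return dataset
```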



Two XR environment examples are presented: a specialized manufacturing robot and a “cashierless” retail store simulation. These simulations integrate 3D environments with camera systems, multiple sensor modalities, manufacturing robots, and human avatars. The results were digital twins that were used for system development, to gain better insight, and to train and test AI systems.



While the use cases vary widely and occur in different industries, the benefits are the same: faster development time, lower cost, and greater collaboration among stakeholders. For example, manufacturing processes developed in XR environments would involve not only the R&D teams but also facility management, safety, training, and maintenance representatives, even if those stakeholders were in different physical locations. With a shared XR environment and online PLM data, every stakeholder has equivalent knowledge and understanding of the development status at all times. They can virtually attend every “onsite” meeting, no matter “where” the meeting occurs. This collaboration not only greatly improves team communication, but also enables the creation of intelligent AI systems that understand and integrate the stakeholders' individual needs and objectives. The author proposes that in the near future, all products and systems will be developed this way, with the transfer to physical space occurring much later in the process.

Document Details

Reference: NWC_19_260
Author: Jarrett, J.
Language: English
Type: Paper
Date: 18th June 2019
Organisation: Kinetic Vision
Region: Global
