CAE Goes Mainstream with GPU-Accelerated Computing

This presentation was made at CAASE18, The Conference on Advancing Analysis & Simulation in Engineering. CAASE18 brought together the leading visionaries, developers, and practitioners of CAE-related technologies in an open forum, to share experiences, discuss relevant trends, discover common themes, and explore future issues.

Resource Abstract

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate deep learning, analytics, and engineering applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient data centers in government labs, universities, enterprises, and small-and-medium businesses around the world.

GPU-accelerated computing, sometimes referred to as general-purpose GPU computing (GPGPU), offloads compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user's perspective, applications simply run much faster.
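As a minimal illustration of this offload model (a sketch, not code from the presentation), the CUDA program below runs a compute-intensive SAXPY loop as a GPU kernel while data setup, control flow, and output stay on the CPU. It assumes a CUDA-capable GPU and the CUDA toolkit (compiled with nvcc).

    // Sketch of the GPGPU offload model: the hot loop runs on the GPU,
    // everything else remains ordinary CPU code.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // GPU kernel: each thread updates one element (y = a*x + y).
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                     // 1M elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) side: allocate and initialize data as usual.
        float *hx = (float*)malloc(bytes), *hy = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Device (GPU) side: allocate memory and copy the inputs over.
        float *dx, *dy;
        cudaMalloc(&dx, bytes);  cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Offload the compute-intensive portion to the GPU.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        // Copy the result back; the rest of the application continues on the CPU.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);              // expect 4.0

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

The application structure is unchanged; only the inner loop moves to the GPU, which is why, from a user's perspective, the code "simply runs faster".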

GPUs are increasingly used for CAE visualization and computing thanks to the 2-10x wall-clock speedups they provide over CPU-only systems. All major hardware manufacturers, from workstation and server vendors to cloud providers, have adopted NVIDIA GPUs in their systems and data centers, and over 500 applications are GPU-accelerated today. In the CAE domain specifically, over 50 FEA, CFD, and CEM applications run on NVIDIA GPUs.

NVIDIA provides several ways to port applications to the GPU. These include SDKs, libraries, compiler directives, and CUDA®, a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs.
New computational methods that demand high computing power have become popular as they are developed to run on GPUs. These include the Lattice-Boltzmann Method (LBM) and Smoothed-Particle Hydrodynamics (SPH) for CFD, Discrete Element Modeling (DEM) for particle simulation, and ray tracing for radiation, among others. Design methods and tools such as generative design, topology optimization, and interactive design-simulation, as well as simulation of downstream manufacturing processes such as additive manufacturing and 3D printing, also benefit from GPUs, which dramatically improve productivity.
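To illustrate why such particle-based methods map naturally onto GPUs, the sketch below (hypothetical, not from the presentation) assigns one CUDA thread to each particle for an explicit time-integration step of the kind used in DEM and SPH codes; neighbor search and force models are left out.

    #include <cuda_runtime.h>

    // Illustrative per-particle time-integration kernel: one GPU thread
    // advances one particle, which is why DEM/SPH-style methods scale so
    // well on GPUs. Real codes add neighbor search and contact/force models.
    __global__ void integrate(int n, float dt, float inv_mass,
                              float3 *pos, float3 *vel, const float3 *force)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        // Semi-implicit Euler step: update velocity from the force, then position.
        vel[i].x += dt * inv_mass * force[i].x;
        vel[i].y += dt * inv_mass * force[i].y;
        vel[i].z += dt * inv_mass * force[i].z;

        pos[i].x += dt * vel[i].x;
        pos[i].y += dt * vel[i].y;
        pos[i].z += dt * vel[i].z;
    }

    // Host-side launch for one time step (device arrays d_pos, d_vel, d_force are
    // hypothetical names, allocated and filled as in the previous sketch):
    //   integrate<<<(n + 255) / 256, 256>>>(n, dt, 1.0f / mass, d_pos, d_vel, d_force);

Because every particle is independent within a step, the same kernel scales from thousands to millions of particles simply by launching more threads, which is what makes these compute-hungry methods practical on GPU hardware.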

This session will review the acceleration techniques for some of the major FEA, CFD, and CEM applications along with representative benchmarks using the latest NVIDIA GPU hardware. Attendees will learn ways to optimize hardware and software resources for the maximum return on investment.

Document Details

Reference: CAASE_Jun_18_60
Author: Rajagopalan, B.
Language: English
Type: Presentation
Date: 6th June 2018
Organisation: NVIDIA
Region: Americas
