
Standardised Benchmarks for Increasingly Complex Computational Frameworks



Abstract


Increases in the complexity of mathematical models and computing devices demand benchmarks that verify the accuracy and correctness of solutions. The quality of a simulation depends on a chain of factors that includes i) the suitability of the mathematical model, ii) the quality of the computational model's realisation, iii) the reliability of the computing device, and iv) the practitioner's application thereof. As the quality of a simulation depends on executing each of these aspects well, the role of benchmarking in ensuring the integrity of these steps cannot be overstated. NAFEMS plays a significant role in standardising benchmarking, as exemplified by the publication of the Computational Fluid Dynamics Benchmarks - Volume 1 in 2017. Unfortunately, benchmarks for computational granular dynamics have been limited to individual academic studies of particle systems with small particle counts, idealised particle shapes and simplified interactions. Recent increases in the complexity of particle systems, which now include varied shape representations, path-dependent contact laws and coupled multi-physics interactions, demand standardised benchmarks that address these complexities. Designing appropriate standardised benchmarks is challenging, and careful consideration is required to avoid misinterpretation and false confidence when a benchmark is passed. Ideally, each benchmark targets specific aspects of a computational framework with purpose, meaning and clarity, and benchmarks complement each other so that, together, they explore the targeted aspects of a framework from diverse angles. A benchmark set needs to clearly define the implications of passing it and to state explicitly which parts of a computational framework are excluded or insufficiently assessed by the set. The latter would guide the development of complementary and diverse benchmark sets instead of duplicating the assessment goals of existing ones. These ideals are easily stated but challenging to realise: Which results should serve as the basis for comparison? Which quantities within those results are the most informative for the benchmark's aims? How many quantities are sufficient for the comparison? These questions are further complicated when energy and time efficiency are assessed during benchmarking. Assessing the energy required per solution plays an increasingly important role as simulation codes become more accessible and more widely used globally. This paper explores potential approaches and considerations that may encourage the development of standardised computational granular dynamics benchmarks that embrace the complexities that modern computing has normalised.
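To make these questions concrete, the following is a minimal sketch, in Python, of the shape a benchmark harness addressing them might take: it runs a simulation case, compares a chosen set of scalar quantities against reference values within a relative tolerance, and records wall-clock time alongside a crude energy-per-solution estimate derived from an assumed average power draw. All names (run_benchmark, angle_of_repose_deg, avg_power_w) and the tolerance and power figures are illustrative assumptions, not definitions taken from the abstract.

    import time
    from dataclasses import dataclass, field
    from typing import Callable, Mapping

    @dataclass
    class BenchmarkResult:
        passed: bool                 # all quantities within tolerance?
        wall_time_s: float           # time-to-solution
        energy_est_j: float          # crude energy-per-solution estimate
        rel_errors: dict = field(default_factory=dict)

    def run_benchmark(simulate: Callable[[], Mapping[str, float]],
                      reference: Mapping[str, float],
                      rel_tol: float = 1e-3,
                      avg_power_w: float = 0.0) -> BenchmarkResult:
        # Time the solution and collect the scalar quantities it reports.
        t0 = time.perf_counter()
        metrics = simulate()
        wall_time = time.perf_counter() - t0

        # Compare each selected quantity against its reference value.
        rel_errors = {}
        passed = True
        for name, ref in reference.items():
            rel_err = abs(metrics[name] - ref) / max(abs(ref), 1e-30)
            rel_errors[name] = rel_err
            passed = passed and rel_err <= rel_tol

        # Energy estimate from an assumed average device power draw;
        # real studies would read hardware energy counters instead.
        return BenchmarkResult(passed, wall_time,
                               avg_power_w * wall_time, rel_errors)

    # Hypothetical usage: a repose-angle case with one reference quantity.
    result = run_benchmark(lambda: {"angle_of_repose_deg": 27.1},
                           reference={"angle_of_repose_deg": 27.5},
                           rel_tol=0.05, avg_power_w=250.0)

One design note: restricting the comparison to a small, named set of scalar quantities forces a benchmark author to state which numbers matter and why, which is exactly the clarity of purpose the abstract argues for.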

Document Details

Reference: NWC21-492-b
Author: Wilke, N.
Language: English
Type: Presentation
Date: 26th October 2021
Organisation: Pretoria University
Region: Global


