Webinar: Peeking into the black box of GROMACS performance (2026-03-17)
Molecular dynamics workloads are notoriously hungry for compute, and every percentage point of efficiency translates into hours saved and more science delivered. This webinar opens up the “black box” of GROMACS performance, showing you how to read what the engine is telling you and how to turn that insight into faster, more reliable simulations.
On March 17, 2026, core GROMACS developers and HPC researchers will walk through practical strategies to boost throughput, demystify performance logs, and connect simulation setup choices with outcomes on real hardware. If you’ve ever wondered why one run flies while another crawls, this session will equip you with the tools to find out—and fix it.
Why this matters
Modern hardware is heterogeneous, with CPUs, GPUs, and complex memory and interconnect hierarchies. GROMACS can exploit these resources brilliantly—but only if your configuration, system properties, and parallelization strategy are aligned. The webinar focuses on the decisions that most influence performance and the evidence in the GROMACS logs that reveals what’s happening during a simulation.
Inside the session
- Interpreting mdrun logs: Learn how to extract the signal from the noise in GROMACS output, identify bottlenecks, and understand the impact of your settings.
- System setup and properties: See how system size, constraints, cutoffs, and electrostatics choices affect scaling and time-to-solution.
- Hardware–software interplay: Understand how compilers, MPI, threading, GPU drivers, and runtime parameters interact with the GROMACS execution model.
- Parallelization best practices: Get guidance on domain decomposition, thread/MPI ranks, GPU offloading choices, and load balancing (a minimal launch sketch follows this list).
- Configuration hygiene: Practical tips for pinning, affinity, precision settings, and I/O to keep your runs stable and fast.
- New GROMACS benchmark suite: An introduction to a standardized framework for assessing and comparing performance across diverse systems.
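To make the parallelization and pinning items above concrete, here is a minimal Python sketch that launches mdrun with explicit rank, thread, offload, and pinning settings. The flags (-ntmpi, -ntomp, -nb, -pme, -bonded, -update, -pin) are standard gmx mdrun options, but the specific counts and the file name are placeholder assumptions; choosing the right values for your hardware and system is exactly the kind of judgment the session aims to build.

```python
import shlex
import subprocess

# Hypothetical single-node launch: 4 thread-MPI ranks with 8 OpenMP threads each,
# non-bonded, PME, bonded, and update work offloaded to the GPU, and threads
# pinned to cores. Rank/thread counts and the "benchmark" name are illustrative.
cmd = (
    "gmx mdrun -deffnm benchmark "
    "-ntmpi 4 -ntomp 8 "
    "-nb gpu -pme gpu -bonded gpu -update gpu "
    "-pin on"
)
subprocess.run(shlex.split(cmd), check=True)
```

The log this run writes (benchmark.log) is the primary evidence the webinar teaches you to read.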
What you’ll learn
- How to use GROMACS log data to diagnose CPU/GPU utilization issues and communication overheads (a small log-parsing sketch follows this list).
- Which parameters to tweak first for the largest performance gains and how to verify their impact.
- How system characteristics influence strong/weak scaling behavior.
- How to run fair, reproducible benchmarks and compare results across clusters and workstations.
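As a starting point for working with log data, the sketch below pulls the closing throughput figures out of an mdrun log: mdrun ends the log with a "Performance:" line reporting ns/day and hour/ns. The helper name read_performance is an illustrative assumption; deeper diagnosis (cycle accounting, load imbalance, PME/PP balance) is what the session itself covers.

```python
import re
from pathlib import Path

# Minimal sketch: extract the final throughput figures from a GROMACS log.
# mdrun closes the log with a "Performance:" line giving ns/day and hour/ns.
# The default log name "md.log" changes if you use -deffnm or -g.
def read_performance(log_path: str = "md.log") -> tuple[float, float]:
    text = Path(log_path).read_text()
    match = re.search(r"^Performance:\s+([\d.]+)\s+([\d.]+)", text, re.MULTILINE)
    if match is None:
        raise ValueError(f"No Performance line found in {log_path}")
    ns_per_day, hours_per_ns = map(float, match.groups())
    return ns_per_day, hours_per_ns

if __name__ == "__main__":
    ns_day, h_ns = read_performance()
    print(f"Throughput: {ns_day:.1f} ns/day ({h_ns:.3f} hour/ns)")
```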
Who should attend
- MD practitioners running GROMACS on workstations or clusters who want faster time-to-science.
- HPC support staff and research software engineers who tune user workloads.
- Students and researchers aiming to build a robust mental model of GROMACS performance.
Speakers
Andrey Alekseenko
Andrey Alekseenko is a researcher at the KTH Center for Scientific Computing (KCSC) at KTH Royal Institute of Technology in Stockholm and a core developer of the GROMACS molecular dynamics engine. His work focuses on performance optimizations across heterogeneous architectures, helping GROMACS make the most of modern CPUs and GPUs while maintaining scientific fidelity.
Szilárd Páll
Szilárd Páll is a researcher at the KTH Center for Scientific Computing (KCSC) at KTH Royal Institute of Technology in Stockholm. He helped reformulate key parallel algorithms in molecular dynamics for modern processor architectures and co-authored the first heterogeneous CPU–GPU parallelization of GROMACS. His recent work centers on efficient asynchronous task scheduling and strong-scaling MD on exascale heterogeneous systems.
Key takeaways
- A clear, repeatable process for analyzing GROMACS performance using mdrun logs (sketched after this list).
- Actionable configuration and parallelization techniques that deliver measurable speedups.
- An understanding of how system and algorithmic choices affect scaling on different hardware.
- Access to a new, standardized benchmark suite to evaluate and compare performance.
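Until the benchmark suite mentioned above is in your hands, a simple repeat-and-report loop already goes a long way toward fair comparisons: run the same .tpr several times with identical settings, exclude the start-up phase from the timing, and report the median. The sketch below combines the launch and log-parsing ideas from the earlier examples; the repeat count, flag values, and file names are assumptions for illustration, not prescriptions from the speakers.

```python
import re
import shlex
import statistics
import subprocess
from pathlib import Path

# Hedged sketch of a repeatable benchmark loop: identical mdrun settings,
# several repeats, median ns/day reported so one-off noise does not skew results.
REPEATS = 3
TPR = "benchmark.tpr"  # illustrative input file

def run_once(tag: str) -> float:
    cmd = (
        f"gmx mdrun -s {TPR} -deffnm {tag} "
        "-ntmpi 4 -ntomp 8 -nb gpu -pme gpu -pin on "
        "-nsteps 20000 -resetstep 10000"  # reset timers so start-up is excluded
    )
    subprocess.run(shlex.split(cmd), check=True)
    text = Path(f"{tag}.log").read_text()
    match = re.search(r"^Performance:\s+([\d.]+)", text, re.MULTILINE)
    if match is None:
        raise ValueError(f"No Performance line found in {tag}.log")
    return float(match.group(1))

results = [run_once(f"bench_rep{i}") for i in range(REPEATS)]
print(f"ns/day per repeat: {results}")
print(f"median ns/day:     {statistics.median(results):.1f}")
```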
Event details
Date: March 17, 2026
Format: Live webinar with expert presentations and practical guidance
Whether you run single-node jobs or push the limits of multi-GPU clusters, this session will help you turn GROMACS logs into performance insight—and performance insight into speed.