Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs
Published in
2025 48th MIPRO ICT and Electronics Convention
Date Issued
2025-06-02
Author(s)
Petrovski, Nikola
DOI
10.1109/mipro65660.2025.11131711
Abstract
Training large language models requires substantial computational power, typically provided by high-performance computing (HPC) resources. This study compares multi-node and multi-GPU environments for training large language models on electrocardiogram (ECG) data. It provides a detailed mapping of current frameworks for distributed deep learning in multi-node and multi-GPU settings, including Uber's Horovod, Microsoft's DeepSpeed, and the built-in distributed capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups across different dataset configurations, utilizing multiple HPC nodes independently and focusing on scalability, speedup, efficiency, and overhead. The analysis leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers, CUDA, PyTorch, and shell scripts to support training workflows and automation. We achieved sub-linear speedup when scaling the number of GPUs: 1.6x for two GPUs and 1.9x for four GPUs.
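As a quick illustration of the metrics the abstract reports, the sketch below computes speedup and parallel efficiency from wall-clock training times. This is a minimal sketch, not the paper's evaluation code: the timing values are hypothetical placeholders chosen only so that the resulting speedups match the reported 1.6x (two GPUs) and 1.9x (four GPUs) figures, and "overhead" here uses one common convention (the fraction of ideal throughput lost).

```python
# Minimal sketch: speedup and parallel efficiency for multi-GPU scaling.
# The wall-clock times below are HYPOTHETICAL placeholders, scaled so the
# speedups match the abstract's figures (1.6x, 1.9x); the paper's actual
# measurements are not reproduced here.

# Wall-clock training time (seconds) per GPU count.
wall_clock = {1: 1000.0, 2: 625.0, 4: 526.3}

t1 = wall_clock[1]  # single-GPU baseline
for n, tn in sorted(wall_clock.items()):
    speedup = t1 / tn            # S(n) = T(1) / T(n)
    efficiency = speedup / n     # E(n) = S(n) / n; 1.0 means linear scaling
    overhead = 1.0 - efficiency  # one convention: ideal throughput lost to
                                 # communication/synchronization costs
    print(f"{n} GPU(s): speedup {speedup:.2f}x, "
          f"efficiency {efficiency:.0%}, overhead {overhead:.0%}")
```

Under these placeholder timings, the two-GPU case yields 80% efficiency and the four-GPU case roughly 48%, which is what "sub-linear speedup" means in concrete terms: each added GPU contributes progressively less than its ideal share.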
