
Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs

Published in
2025 MIPRO 48th ICT and Electronics Convention
Date Issued
2025-06-02
Author(s)
Petrovski, Nikola
DOI
10.1109/mipro65660.2025.11131711
Abstract
Training large language models requires extensive computation, made feasible by high-performance computing (HPC) resources. This study compares multi-node and multi-GPU environments for training large language models on electrocardiogram (ECG) data. It provides a detailed mapping of current frameworks for distributed deep learning in multi-node and multi-GPU settings, including Uber's Horovod, Microsoft's DeepSpeed, and the built-in distributed capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups across different dataset configurations, utilizing multiple HPC nodes independently and focusing on scalability, speedup, efficiency, and overhead. The analysis leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers, CUDA, PyTorch, and shell scripts to automate the training workflows. We achieved sub-linear speedup when scaling the number of GPUs: 1.6x on two GPUs and 1.9x on four.
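The scaling figures reported in the abstract follow the standard definitions of speedup (S_n = T_1 / T_n) and parallel efficiency (E_n = S_n / n). As a minimal sketch, the Python snippet below applies those definitions to the reported numbers; the function names are illustrative and not taken from the paper.

```python
def speedup(t_single: float, t_parallel: float) -> float:
    """Speedup S_n = T_1 / T_n relative to a single-GPU run."""
    return t_single / t_parallel

def efficiency(s: float, n_gpus: int) -> float:
    """Parallel efficiency E_n = S_n / n."""
    return s / n_gpus

# Figures reported in the abstract: 1.6x on 2 GPUs, 1.9x on 4 GPUs.
for n, s in [(2, 1.6), (4, 1.9)]:
    print(f"{n} GPUs: speedup {s:.1f}x, efficiency {efficiency(s, n):.1%}")
```

An efficiency well below 100% (here roughly 80% on two GPUs and under 50% on four) is the sub-linear scaling the abstract describes, typically attributable to communication and synchronization overhead.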
Subjects
  • Multi-GPU HPC
  • Multi-node HPC
  • ECG
  • Large Language Models
  • Distributed Deep Learning
