Faculty of Computer Science and Engineering

Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/5

The Faculty of Computer Science and Engineering (FCSE) within UKIM is the largest and most prestigious faculty in the field of computer science and technologies in Macedonia, and among the largest faculties in that field in the region. The FCSE teaching staff consists of 50 professors and 30 associates, including some of the most cited scientists in Macedonia and some of the most influential professors in the country's ICT industry.

Browse

Search Results

Now showing 1 - 2 of 2
  • Item type: Publication
    Effect of Bézier Interpolation on Similarity of Heartbeats in Electrocardiograms
    (IEEE, 2022-11-15)
    Tonkovikj, Lucija; Petrovski, Nikola
    Sampling frequency and bit resolution affect how heartbeats are represented in electrocardiograms. One of the most challenging tasks in automated medical interpretation is determining the heartbeat type: whether the beat was initiated by the normal conduction sequence or by the ventricle. The usual way to detect the beat class is to analyze morphological features, including the shape, width, and height of the heartbeat. In this paper, we tackle this problem by checking the similarity of heartbeats, under the research hypothesis that Bézier interpolation can improve the correctness of the similarity check. We found that the similarity check of two heartbeats can be substantially improved by applying Bézier interpolation to the samples around the detected peak.
    (See the Bézier interpolation sketch after these search results.)
  • Item type: Publication
    Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs
    (IEEE, 2025-06-02)
    Petrovski, Nikola
    Training large language models requires extensive processing, made feasible by high-performance computing (HPC) resources. This study compares multi-node and multi-GPU environments for training large language models on electrocardiogram data. It provides a detailed mapping of current frameworks for distributed deep learning in multi-node and multi-GPU settings, including Horovod from Uber, DeepSpeed from Microsoft, and the built-in distributed capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups for different dataset configurations, utilizing multiple HPC nodes independently and focusing on scalability, speedup, efficiency, and overhead. The analysis leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers, CUDA, PyTorch, and shell scripts to automate the training workflows. We achieved sub-linear speedup when scaling the number of GPUs: 1.6x for two GPUs and 1.9x for four.
    (See the data-parallel training sketch after these search results.)
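
The first abstract does not give implementation details, so the following is a minimal Python sketch of the idea as described: one common reading treats the raw samples around a detected R peak as control points of a Bézier curve, resamples the curve, and compares two beats with a similarity measure. The window size, curve resolution, and the use of Pearson correlation as the similarity score are illustrative assumptions, not the paper's parameters.

import numpy as np

def bezier_curve(control_points, n_samples=200):
    # Evaluate a Bézier curve at n_samples parameter values using
    # De Casteljau's algorithm (numerically stable for these degrees).
    t = np.linspace(0.0, 1.0, n_samples)
    layers = np.repeat(np.asarray(control_points, float)[:, None, :],
                       n_samples, axis=1)
    while layers.shape[0] > 1:
        # Interpolate between consecutive control points at every t.
        layers = (1 - t)[None, :, None] * layers[:-1] \
                 + t[None, :, None] * layers[1:]
    return layers[0]  # shape (n_samples, 2): points along the curve

def smooth_beat(signal, peak_idx, half_window=40, n_samples=200):
    # Use the raw samples around the detected peak as control points;
    # half_window is a hypothetical choice, not from the paper, and
    # the peak is assumed not to lie near the signal's edges.
    window = signal[peak_idx - half_window : peak_idx + half_window + 1]
    ctrl = np.column_stack([np.arange(len(window)), window])
    return bezier_curve(ctrl, n_samples)[:, 1]  # interpolated amplitudes

def beat_similarity(sig_a, peak_a, sig_b, peak_b):
    # Pearson correlation of the two smoothed beats, in [-1, 1];
    # the paper may use a different similarity measure.
    a = smooth_beat(sig_a, peak_a)
    b = smooth_beat(sig_b, peak_b)
    return float(np.corrcoef(a, b)[0, 1])

Because both beats are resampled to the same number of points on a smooth curve, the comparison becomes less sensitive to differences in sampling frequency and quantization, which is one plausible reason the interpolation improves the similarity check.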
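
The second abstract names several distributed-training frameworks; as a concrete illustration of one of them, here is a minimal sketch of PyTorch's built-in data parallelism (DistributedDataParallel) of the kind such a study would benchmark. The model, dataset, and hyperparameters are placeholders, not the paper's ECG-based LLM, and the script assumes launch via torchrun, which sets the RANK, LOCAL_RANK, and WORLD_SIZE environment variables.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun populates the env vars needed by init_process_group.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder data: random token IDs standing in for tokenized ECG
    # sequences (the real data pipeline is not described in the abstract).
    data = torch.randint(0, 1000, (4096, 128))
    dataset = TensorDataset(data)
    sampler = DistributedSampler(dataset)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    # Tiny stand-in model; a real LLM would replace this.
    model = torch.nn.Sequential(
        torch.nn.Embedding(1000, 256),
        torch.nn.Flatten(),
        torch.nn.Linear(256 * 128, 1000),
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # all-reduces gradients

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for (batch,) in loader:
            batch = batch.cuda(local_rank, non_blocking=True)
            # Toy objective: predict each sequence's first token.
            loss = loss_fn(model(batch), batch[:, 0])
            opt.zero_grad()
            loss.backward()  # DDP overlaps all-reduce with backward
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

On a SLURM cluster this would typically be launched per node with something like srun torchrun --nnodes=$SLURM_NNODES --nproc_per_node=4 train.py (the exact launch command is an assumption). The reported numbers also give the parallel efficiency directly: speedup divided by GPU count falls from 1.6/2 = 80% at two GPUs to 1.9/4 = 47.5% at four, consistent with growing communication and synchronization overhead.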