Overview of Interpretable and Explainable Artificial Intelligence for Atrial Fibrillation
Published in
2025 33rd Telecommunications Forum (TELFOR)
Date Issued
2025-11-25
Author(s)
Tudjarski, Stojancho
Angjelevska, Ana
Madevska Bogdanova, Ana
DOI
10.1109/telfor67910.2025.11314451
Abstract
Accurate and interpretable detection of irregular cardiac activity, such as atrial fibrillation, from electrocardiogram (ECG) signals is crucial for timely diagnosis and effective patient management. While machine learning (ML) models, particularly deep learning architectures, have achieved state-of-the-art performance in ECG arrhythmia classification, their black-box nature limits clinical adoption. This paper surveys explainable artificial intelligence (XAI) techniques applicable to ML models trained on ECG data, covering both global and local interpretability approaches. We provide an overview of post-hoc methods, including SHAP, LIME, permutation feature importance (PFI), and layer integrated gradients (LIG), among others, as applied to various types of ECG recordings. As a practical case study, we analyze the results of the PFI and LIG methods applied to a transformer-based model fine-tuned for atrial fibrillation detection and explain its decision process. Our findings underscore the value of integrating XAI into ECG analysis pipelines to enhance transparency, foster clinician trust, and support more informed decision-making in cardiovascular diagnostics.
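Of the post-hoc methods the abstract names, permutation feature importance (PFI) is the simplest to illustrate: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below is a generic toy example on tabular data, not the paper's actual ECG pipeline or transformer model; the function and variable names are illustrative assumptions.

```python
import numpy as np

def permutation_feature_importance(model_predict, X, y, n_repeats=5, seed=0):
    """Estimate PFI: the mean drop in accuracy when a single feature
    column is shuffled, breaking its association with the label."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute feature j only, in place
            drops.append(baseline - np.mean(model_predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy demo: the label depends only on feature 0, so only feature 0
# should receive a nonzero importance score.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)  # stand-in "model"
imp = permutation_feature_importance(predict, X, y)
```

Shuffling feature 0 roughly halves the toy model's accuracy, while shuffling the ignored features 1 and 2 leaves it unchanged, so `imp` ranks feature 0 far above the others. The same logic extends to ECG models by permuting leads or signal segments instead of tabular columns.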
