Now showing 1 - 2 of 2
  • Item type: Publication
    MAKEDONKA: Applied Deep Learning Model for Text-to-Speech Synthesis in Macedonian Language
    (MDPI AG, 2020)
    Eftimov, Tome
    This paper presents MAKEDONKA, the first open-source Macedonian-language speech synthesizer based on a Deep Learning approach. The paper provides an overview of the numerous previous attempts to achieve human-like, reproducible speech, which have unfortunately proven unsuccessful due to the invisibility of that work and the lack of integration examples with real software tools. Recent advances in Machine Learning, particularly Deep Learning-based methodologies, provide novel feature-engineering methods that allow for smooth transitions in the synthesized speech, making it sound natural and human-like. This paper presents a methodology for end-to-end speech synthesis based on a fully-convolutional sequence-to-sequence acoustic model with a position-augmented attention mechanism, Deep Voice 3. Our model synthesizes Macedonian speech directly from characters. We created a dataset containing approximately 20 h of speech from a native Macedonian female speaker and used it to train the text-to-speech (TTS) model. The achieved MOS of 3.93 makes our model appropriate for any kind of software that needs a text-to-speech service in the Macedonian language. Our TTS platform is publicly available for use and ready for integration.
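The reported MOS of 3.93 is a mean opinion score: listeners rate synthesized utterances on a 1-5 quality scale, and the ratings are averaged. A minimal sketch of that computation, with invented example ratings (the paper's actual listening-test data is not reproduced here):

```python
def mean_opinion_score(ratings):
    """Average a list of 1-5 listener ratings into a MOS value."""
    if not ratings:
        raise ValueError("need at least one rating")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("MOS ratings must lie in [1, 5]")
    return sum(ratings) / len(ratings)

# Hypothetical ratings from a small listening test (for illustration only).
example = [4, 4, 3, 5, 4, 4, 3, 4]
mos = mean_opinion_score(example)
```

A MOS near 4 ("good" on the standard scale) is generally considered acceptable for deployed TTS systems, which is the basis of the abstract's suitability claim.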
  • Item type: Publication
    Exploring the Potential of Topological Data Analysis for Explainable Large Language Models: A Scoping Review
    (Zenodo, 2026)
    Sekuloski, Petar;
    Kitanovski, Dimitar
    Large language models (LLMs) have become central to modern artificial intelligence, yet their internal decision-making processes remain difficult to interpret. As interest grows in making these models more transparent and reliable, topological data analysis (TDA) has emerged as a promising mathematical approach for exploring their structure. This scoping review maps the current landscape of research where TDA tools—such as persistent homology and Mapper—are used to examine LLM components like attention patterns, latent representations, and training dynamics. By analyzing topological features across layers and tasks, these methods provide new ways to understand how language models generalize, respond to unfamiliar inputs, and shift under fine-tuning. The review also considers how TDA-based techniques contribute to broader goals in interpretability and robustness, especially in detecting hallucinations, out-of-distribution behavior, and representational collapse. Overall, the findings suggest that TDA offers a rigorous and versatile framework for studying LLMs, helping researchers uncover deeper patterns in how these models learn and reason.
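As a minimal, self-contained illustration of the persistent homology the abstract refers to, the sketch below computes the 0-dimensional barcode (connected components) of a small Euclidean point cloud via Kruskal's algorithm with union-find. This is a toy example on hand-picked 2-D points; the TDA-on-LLM studies surveyed by the review apply library implementations (e.g. Ripser-style tools) to high-dimensional attention matrices and latent representations, and also track higher-dimensional features.

```python
import math
from itertools import combinations

def h0_barcode(points):
    """Death times of the finite H0 bars of a Euclidean point cloud.

    All 0-dimensional bars are born at filtration value 0; a bar dies
    at the edge length that merges its component with another in the
    Vietoris-Rips filtration. The single infinite bar is omitted.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairwise edges, sorted by length (the Rips filtration order).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj      # merging two components kills one bar
            deaths.append(d)
    return deaths
```

Two tight pairs of points far apart, for instance, yield two short bars (within-pair merges) and one long bar (the merge across pairs), which is how a barcode exposes cluster structure at multiple scales.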