Faculty of Computer Science and Engineering
Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/5
The Faculty of Computer Science and Engineering (FCSE) within UKIM is the largest and most prestigious faculty in the field of computer science and technology in Macedonia, and among the largest faculties in that field in the region.
The FCSE teaching staff consists of 50 professors and 30 associates, including many leading figures in the field, such as the most referenced scientists in Macedonia and the most influential professors in the ICT industry in the Republic of Macedonia.
Search Results
Item type: Publication
Enhancing LLMs with LoRA Fine-Tuning Using Medical Data and Knowledge Graph Enrichment for Improved Healthcare Outcomes (IEEE, 2025-06-02)
Jankov, A.
This research paper investigates the enhancement of large language models (LLMs) within the medical domain, focusing on members of the Llama family of LLMs. While LLMs have demonstrated remarkable success across various general-purpose natural language processing tasks, their application in specialized domains like medicine is often hindered by limited training on domain-specific data, resulting in suboptimal accuracy and contextual relevance. To address these limitations, this research employs low-rank adaptation (LoRA) to fine-tune LLMs on real-world patient-physician dialogues, effectively capturing the intricacies of medical discourse. Additionally, the knowledge of the LLM is enriched with the SPOKE knowledge graph, a structured repository of medical domain information, allowing the model to generate outputs that are both contextually and scientifically grounded. The experimental results underscore the transformative impact of this dual approach, demonstrating significant advancements in tasks such as automatic diagnosis generation and personalized drug recommendation. However, this research should be viewed as an exploratory proof of concept. Significant limitations, including the constrained evaluation scope and the critical need for expert clinical validation and thorough ethical review, must be addressed in future work before considering real-world applicability.
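The low-rank adaptation mentioned in the abstract above adapts a frozen pretrained weight matrix W by adding a scaled low-rank correction (alpha/r)·BA and training only the small matrices A and B. The following NumPy sketch illustrates that update in general terms; it is not the paper's implementation, and all names, shapes, and hyperparameters are illustrative assumptions:

```python
import numpy as np

# Illustrative LoRA update: the pretrained weight stays frozen and only two
# small matrices (A: r x d_in, B: d_out x r) are trained. Shapes and the
# rank/alpha values below are hypothetical, not taken from the paper.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (init 0)

def lora_forward(x):
    """Forward pass: base output plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

Because B is initialized to zero, the adapted model initially reproduces the base model exactly, while only 512 of the 4,096 weight-equivalent parameters are trainable in this toy configuration.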
Item type: Publication
Exploring Large Language Models for Data Augmentation: A Case Study for Text Style Transfer (IEEE, 2025-06-02)
Text style transfer is the task of modifying a sentence to adopt a desired target style while preserving its original meaning. It often requires high-quality parallel datasets that are not always available. This paper explores data augmentation techniques for text style transfer, leveraging large language models (LLMs) to address the challenge of dataset scarcity. Our approach generates synthetic parallel data by prompting LLMs to paraphrase and/or rewrite sentences in diverse styles, enabling the creation of larger and more varied datasets. We demonstrate the applicability of this approach across three tasks: formality transfer with the GYAFC dataset, sentiment transfer with the Yelp dataset, and personal style transfer with the Shakespeare dataset. This work introduces an approach to enhance dataset availability, aiming to foster further research in the field and support a broader application of LLMs. The experiments were performed only with English-language datasets.
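The augmentation idea described above, prompting an LLM to rewrite a sentence in a target style so that the original and the rewrite form a synthetic parallel pair, can be sketched as follows. The prompt template, style names, and the stubbed LLM call are illustrative assumptions, not the paper's actual prompts or API:

```python
# Sketch of the prompting pattern for generating synthetic parallel pairs.
# Templates and style names here are hypothetical stand-ins.

STYLE_TASKS = {
    "formality": "Rewrite the sentence in a formal register.",
    "sentiment": "Rewrite the sentence with positive sentiment.",
    "shakespeare": "Rewrite the sentence in Shakespearean English.",
}

def build_prompt(sentence: str, style: str) -> str:
    """Compose an instruction prompt asking an LLM to restyle a sentence
    while preserving its meaning."""
    return (
        f"{STYLE_TASKS[style]}\n"
        "Keep the original meaning unchanged.\n"
        f"Sentence: {sentence}\n"
        "Rewrite:"
    )

def augment(sentences, style, llm_call):
    """Pair each source sentence with an LLM rewrite, yielding parallel data."""
    return [(s, llm_call(build_prompt(s, style))) for s in sentences]

# A stub stands in for the real LLM API so the sketch runs offline.
fake_llm = lambda p: "<synthetic rewrite of: " + p.splitlines()[2][10:] + ">"
pairs = augment(["gonna be late, sorry!"], "formality", fake_llm)
print(pairs[0])
```

In practice `fake_llm` would be replaced by a call to an actual model, and the resulting (source, rewrite) pairs would be filtered for meaning preservation before training.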
Item type: Publication
Preserving Macedonian Culinary Heritage: Fine-Tuning a Large Language Model for Recipe Generation in a Low-Resource Language (IEEE, 2025-12-08)
Peshevski, Dimitar; Sasanski, Darko
We introduce the first fine-tuned large language model for recipe instruction generation in Macedonian. Building on VezilkaLLM-Instruct, a 4-billion parameter model, we fine-tune it using a curated dataset of 36,000 recipes with detailed cooking instructions. Our key contributions include: (1) the development of a domain-adapted language model for a low-resource language; (2) the demonstration that relatively small LLMs can be effectively adapted to specialized culinary tasks; and (3) the proposal of a dual evaluation framework that combines semantic similarity and verb overlap analyses to assess both content generalization and procedural accuracy. Fine-tuning results in a mean cosine similarity of 0.90 and significantly increases the overlap of domain-specific cooking verbs, indicating improved generation quality. These results highlight the potential of targeted fine-tuning approaches for domain-specific applications in underrepresented languages and provide a foundation for further research in computational culinary heritage.
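A toy version of the dual evaluation framework described above might pair a semantic-similarity score with a domain-verb overlap score. The sketch below uses bag-of-words cosine similarity and Jaccard overlap as simple stand-ins; the paper's actual metrics (embedding-based similarity and its Macedonian verb inventory) are not reproduced here, and the English verb list is purely hypothetical:

```python
import math
from collections import Counter

# Illustrative verb inventory; the paper's domain-specific verb set differs.
COOKING_VERBS = {"chop", "mix", "bake", "fry", "simmer", "stir"}

def cosine(a: str, b: str) -> float:
    """Cosine similarity between simple bag-of-words term vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def verb_overlap(reference: str, generated: str) -> float:
    """Jaccard overlap of cooking verbs appearing in the two texts."""
    rv = COOKING_VERBS & set(reference.lower().split())
    gv = COOKING_VERBS & set(generated.lower().split())
    return len(rv & gv) / len(rv | gv) if rv | gv else 1.0

ref = "chop the onions and fry them then simmer the sauce"
gen = "fry the onions then simmer the sauce and stir"
print(round(cosine(ref, gen), 2), verb_overlap(ref, gen))
```

The two scores capture complementary failure modes: a generation can be lexically close to the reference yet miss the procedural verbs, or cover the right actions in very different wording.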
Item type: Publication
Advancing AI in Higher Education: A Comparative Study of Large Language Model-Based Agents for Exam Question Generation, Improvement, and Evaluation (MDPI AG, 2025-03-04)
Nikolovski, Vlatko
The transformative capabilities of large language models (LLMs) are reshaping educational assessment and question design in higher education. This study proposes a systematic framework for leveraging LLMs to enhance question-centric tasks: aligning exam questions with course objectives, improving clarity and difficulty, and generating new items guided by learning goals. The research spans four university courses—two theory-focused and two application-focused—covering diverse cognitive levels according to Bloom’s taxonomy. A balanced dataset ensures representation of question categories and structures. Three LLM-based agents—VectorRAG, VectorGraphRAG, and a fine-tuned LLM—are developed and evaluated against a meta-evaluator, supervised by human experts, to assess alignment accuracy and explanation quality. Robust analytical methods, including mixed-effects modeling, yield actionable insights for integrating generative AI into university assessment processes. Beyond exam-specific applications, this methodology provides a foundational approach for the broader adoption of AI in post-secondary education, emphasizing fairness, contextual relevance, and collaboration. The findings offer a comprehensive framework for aligning AI-generated content with learning objectives, detailing effective integration strategies, and addressing challenges such as bias and contextual limitations. Overall, this work underscores the potential of generative AI to enhance educational assessment while identifying pathways for responsible implementation.
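The retrieval step behind a VectorRAG-style agent like those compared above can be sketched as nearest-neighbour search over vectorized course material: a question or learning objective retrieves its closest chunks, which then ground the LLM prompt. This is a generic bag-of-words illustration under assumed data, not the authors' implementation, which would use a real embedding model:

```python
import math

def tokens(text: str) -> list[str]:
    return text.lower().split()

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query under cosine similarity
    over bag-of-words vectors (a stand-in for learned embeddings)."""
    vocab = sorted({t for text in chunks + [query] for t in tokens(text)})
    vec = lambda text: [tokens(text).count(w) for w in vocab]
    qv = vec(query)
    return sorted(chunks, key=lambda c: -cosine(vec(c), qv))[:k]

# Hypothetical course-material chunks for illustration.
corpus = [
    "Bloom's taxonomy orders cognitive levels from remembering to creating.",
    "Mixed-effects models account for both fixed and random effects.",
]
best = retrieve("which cognitive levels does Bloom's taxonomy define", corpus)
print(best[0])
```

A graph-augmented variant (VectorGraphRAG-style) would additionally follow relations between retrieved chunks before building the prompt.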
