Enhancing LLMs with LoRA Fine-Tuning Using Medical Data and Knowledge Graph Enrichment for Improved Healthcare Outcomes
Published in
2025 MIPRO 48th ICT and Electronics Convention
Date Issued
2025-06-02
Author(s)
Jankov, A.
DOI
10.1109/mipro65660.2025.11131859
Abstract
This research paper investigates the enhancement of large language models (LLMs) in the medical domain, focusing on models from the Llama family. While LLMs have demonstrated remarkable success across a range of general-purpose natural language processing tasks, their application in specialized domains such as medicine is often hindered by limited training on domain-specific data, resulting in suboptimal accuracy and contextual relevance. To address these limitations, this research employs low-rank adaptation (LoRA) to fine-tune LLMs on real-world patient-physician dialogues, effectively capturing the intricacies of medical discourse. Additionally, the LLM's knowledge is enriched with the SPOKE knowledge graph, a structured repository of medical-domain information, allowing the model to generate outputs that are both contextually and scientifically grounded. The experimental results underscore the transformative impact of this dual approach, demonstrating significant advancements in tasks such as automatic diagnosis generation and personalized drug recommendation. However, this research should be viewed as an exploratory proof of concept: significant limitations, including the constrained evaluation scope and the critical need for expert clinical validation and thorough ethical review, must be addressed in future work before real-world applicability can be considered.
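
Illustration
For readers unfamiliar with the technique, the following is a minimal sketch of how a Llama-family model could be fine-tuned with LoRA on patient-physician dialogue transcripts using the Hugging Face transformers and peft libraries. It illustrates the general approach only; the base model name, dataset file, adapter rank, target modules, and training hyperparameters are assumptions chosen for the example and are not taken from the paper.

```python
# Illustrative LoRA fine-tuning sketch (not the paper's actual configuration).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed Llama variant
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers have no pad token by default

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Low-rank adapters are injected into the attention projections; only these
# small adapter matrices are trained while the base model weights stay frozen.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # assumed target layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical JSONL file of dialogue transcripts with one "text" field per example.
dataset = load_dataset("json", data_files="dialogues.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-medical-lora",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-medical-lora")  # saves only the adapter weights
```

Because only the low-rank adapter matrices are updated, the memory and compute cost of adaptation is a small fraction of full fine-tuning, which is what makes LoRA attractive for specializing large models to a domain such as medicine.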
