Faculty of Electrical Engineering and Information Technologies
Permanent URI for this community: https://repository.ukim.mk/handle/20.500.12188/10
Search Results
3 results
Item type: Publication
Bias in vital signs? Machine learning models can learn patients’ race or ethnicity from the values of vital signs alone (BMJ, 2025-07-10)
Mullan, Irene Dankwa

Objectives: To investigate whether machine learning (ML) algorithms can learn racial or ethnic information from vital signs alone.

Methods: A retrospective cohort study of critically ill patients between 2014 and 2015 from the multicentre eICU-CRD critical care database, involving 335 intensive care units in 208 US hospitals and containing 200 859 admissions. We extracted 10 763 critical care admissions of patients aged 18 and over, alive during the first 24 hours after admission, with recorded race or ethnicity as well as at least two measurements of heart rate, oxygen saturation, respiratory rate and blood pressure. Pairs of subgroups were matched on age, gender, admission diagnosis and disease severity. XGBoost, Random Forest and Logistic Regression algorithms were used to predict recorded race or ethnicity from the values of the vital signs.

Results: Models derived from only four vital signs predicted patients’ recorded race or ethnicity with an area under the curve (AUC) of 0.74 (±0.030) between White and Black patients, 0.74 (±0.030) between Hispanic and Black patients, and 0.67 (±0.072) between Hispanic and White patients, even when controlling for known factors. There were very small but statistically significant differences in heart rate, oxygen saturation and blood pressure, but not in respiratory rate or invasively measured oxygen saturation.

Discussion: ML algorithms can extract racial or ethnic information from vital signs alone across diverse patient populations, even when controlling for known biases such as pulse oximetry variations and comorbidities. The models correctly classified race or ethnicity in two out of three patients, indicating that this outcome is not random.

Conclusion: Vital signs embed racial information that can be learnt by ML algorithms, posing a significant risk to equitable clinical decision-making. Mitigating measures may be challenging, given the fundamental role of vital signs in clinical decision-making.
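The classification setup described in the abstract above (vital-sign features, a binary group label, AUC scoring) can be sketched with one of the three listed model families, Logistic Regression. This is a minimal illustrative sketch only: the data below is synthetic, and the feature means, spreads and group effect sizes are invented assumptions, not values from the eICU-CRD study.

```python
# Minimal sketch (NOT the authors' code): fit a classifier on synthetic
# vital-sign features and score it with AUC, mirroring the pipeline shape
# described in the abstract. All distributions here are assumed for
# illustration; the real study used matched eICU-CRD cohorts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)  # stand-in binary group label

# Four synthetic vitals with small, invented between-group shifts.
X = np.column_stack([
    rng.normal(80 + 2.0 * group, 12, n),   # heart rate (bpm)
    rng.normal(97 - 0.5 * group, 2, n),    # oxygen saturation (%)
    rng.normal(18, 4, n),                  # respiratory rate (no shift)
    rng.normal(85 + 1.5 * group, 10, n),   # mean arterial pressure (mmHg)
])

X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")  # above 0.5: the label is partly recoverable
```

Even with very small per-feature shifts, the combined signal pushes the AUC above chance, which is the qualitative point the study makes about vital signs.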
Item type: Publication
A Survey of Bias in Healthcare: Pitfalls of Using Biased Datasets and Applications (Springer, Cham, 2023-07-09)
Velichkovska, Bojana; Gjoreski, Hristijan; Osmani, Venet

Artificial intelligence (AI) is widely used in medical applications to support outcome prediction and treatment optimisation based on collected patient data. With the increasing use of AI in medical applications, there is a need to identify and address potential sources of bias that may lead to unfair decisions. There have been many reported cases of bias in healthcare professionals, medical equipment, medical datasets, and actively used medical applications. These cases have severely impacted the quality of patients’ healthcare, and despite awareness campaigns, bias has persisted or, in certain cases, even been exacerbated. In this paper, we survey reported cases of different forms of bias in medical practice, medical technology, medical datasets, and medical applications, and analyse the impact these reports have on the access to and quality of care provided for certain patient groups. Finally, we discuss possible pitfalls of using biased datasets and applications, and thus provide the reasoning behind the need for robust and equitable medical technologies.
Item type: Publication
Investigating Presence of Ethnoracial Bias in Clinical Data using Machine Learning (2021-09)
Velichkovska, Bojana; Gjoreski, Hristijan; Celi, Leo Anthony

An important target for machine learning research is obtaining unbiased results, which requires addressing bias that might be present in the data as well as in the methodology. This is of utmost importance in medical applications of machine learning, where trained models should be unbiased so as to result in systems that are widely applicable, reliable and fair. Since bias can sometimes be introduced through the data itself, in this paper we investigate the presence of ethnoracial bias in patients’ clinical data. We focus primarily on vital signs and demographic information and classify patients’ ethnorace in pairwise comparisons of three ethnoracial groups (African Americans, Caucasians, and Hispanics). Our results show that ethnorace can be identified in two out of three patients, setting the initial basis for further investigation of the complex issue of ethnoracial bias.
