Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/27905
DC Field: Value (Language)
dc.contributor.author: Rizinski, Maryan (en_US)
dc.contributor.author: Peshov, Hristijan (en_US)
dc.contributor.author: Mishev, Kostadin (en_US)
dc.contributor.author: Chitkushev, Ljubomir (en_US)
dc.contributor.author: Vodenska, Irena (en_US)
dc.contributor.author: Trajanov, Dimitar (en_US)
dc.date.accessioned: 2023-09-11T11:35:59Z
dc.date.available: 2023-09-11T11:35:59Z
dc.date.issued: 2022-08-29
dc.identifier.uri: http://hdl.handle.net/20.500.12188/27905
dc.description.abstract: Rapid technological developments in the last decade have contributed to using machine learning (ML) in various economic sectors. Financial institutions have embraced technology and have applied ML algorithms in trading, portfolio management, and investment advising. Large-scale automation capabilities and cost savings make ML algorithms attractive for personal and corporate finance applications. Using ML applications in finance raises ethical issues that need to be carefully examined. We engage a group of experts in finance and ethics to evaluate the relationship between ethical principles of finance and ML. The paper compares the experts’ findings with the results obtained using natural language processing (NLP) transformer models, given their ability to capture semantic text similarity. The results reveal that the finance principles of integrity and fairness have the most significant relationships with ML ethics. The study includes a use case with SHapley Additive exPlanations (SHAP) and Microsoft Responsible AI Widgets explainability tools for error analysis and visualization of ML models. It analyzes credit card approval data and demonstrates that the explainability tools can address ethical issues in fintech and improve transparency, thereby increasing the overall trustworthiness of ML models. The results show that both humans and machines could err in approving credit card requests despite using their best judgment based on the available information. Hence, human-machine collaboration could contribute to improved decision-making in finance. We propose a conceptual framework for addressing ethical challenges in fintech, such as bias, discrimination, differential pricing, conflict of interest, and data protection. (en_US)
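The use case described in the abstract applies SHAP to a credit card approval classifier. The following is a minimal sketch of that kind of analysis, assuming synthetic data, illustrative feature names, and a scikit-learn gradient-boosting model; it is not the authors' actual dataset or pipeline, only a demonstration of how SHAP surfaces which features drive approval decisions.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for credit card approval data (the paper analyzes a real dataset).
rng = np.random.default_rng(seed=42)
n = 500
X = pd.DataFrame({
    "annual_income": rng.normal(50_000, 15_000, n),
    "debt_ratio": rng.uniform(0.0, 1.0, n),
    "credit_history_years": rng.integers(0, 30, n).astype(float),
})
# Label: approve when income is high and debt is low, plus some noise.
y = ((X["annual_income"] / 60_000) - X["debt_ratio"] + rng.normal(0, 0.2, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer yields per-feature contributions (in log-odds) for each prediction:
# positive values push toward approval, negative values toward denial.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features dominate decisions across the test set. Inspecting this
# is the kind of transparency check used for error analysis and bias detection.
shap.summary_plot(shap_values, X_test)

# Local view: per-feature contributions behind a single applicant's decision.
print(dict(zip(X_test.columns, shap_values[0].round(3))))

A similar per-decision breakdown is what allows a human reviewer to question or override an automated denial, which is the human-machine collaboration the abstract argues for.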
dc.publisher: IEEE (en_US)
dc.relation.ispartof: IEEE Access (en_US)
dc.subject: Ethics, machine learning, explainability, finance, fintech, financial services (en_US)
dc.title: Ethically Responsible Machine Learning in Fintech (en_US)
dc.type: Journal Article (en_US)
item.fulltext: With Fulltext
item.grantfulltext: open
crisitem.author.dept: Faculty of Computer Science and Engineering
Appears in Collections: Faculty of Computer Science and Engineering: Journal Articles
Files in This Item:
File: Ethically_Responsible_Machine_Learning_in_Fintech.pdf (3.65 MB, Adobe PDF)