Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/34729
DC Field: Value (Language)
dc.contributor.author: Domazetovska Markovska, Simona (en_US)
dc.contributor.author: Gavriloski, Viktor (en_US)
dc.contributor.author: Pecioski, Damjan (en_US)
dc.contributor.author: Anachkova, Maja (en_US)
dc.contributor.author: Shishkovski, Dejan (en_US)
dc.contributor.author: Angjusheva Ignjatovska, Anastasija (en_US)
dc.date.accessioned: 2026-01-31T22:20:53Z
dc.date.available: 2026-01-31T22:20:53Z
dc.date.issued: 2025-12-05
dc.identifier.uri: http://hdl.handle.net/20.500.12188/34729
dc.description.abstract: <jats:p>Urban noise is a major environmental concern that affects public health and quality of life, demanding new approaches beyond conventional noise level monitoring. This study investigates the development of an AI-driven Acoustic Event Detection and Classification (AED/C) system designed for urban sound recognition and its integration into smart city applications. Using the UrbanSound8K dataset, five acoustic parameters—Mel Frequency Cepstral Coefficients (MFCC), Mel Spectrogram (MS), Spectral Contrast (SC), Tonal Centroid (TC), and Chromagram (Ch)—were mathematically modeled and applied for feature extraction. Their combinations were tested with three classical machine learning algorithms: Support Vector Machines (SVM), Random Forest (RF), and Naive Bayes (NB); and with a deep learning approach, Convolutional Neural Networks (CNN). A total of 52 models using the three ML algorithms were analyzed, along with 4 CNN models. The MFCC-based CNN models showed the highest accuracy, achieving up to 92.68% on test data, an improvement of approximately 2% over prior CNN-based approaches reported in similar studies. Additionally, the number of trained models, 56 in total, exceeds that of comparable research, ensuring more robust performance validation and statistical reliability. Real-time validation confirmed applicability to IoT devices, and a low-cost wireless sensor unit (WSU) was developed with fog and cloud computing for scalable data processing. The constructed WSU costs at least four times less than previously developed units while maintaining good performance, enabling broader deployment in smart city applications. The findings demonstrate the potential of AI-based AED/C systems for continuous, source-specific noise classification, supporting sustainable urban planning and improved environmental management in smart cities.</jats:p> (en_US)
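The abstract names MFCC as the best-performing feature for the CNN models. As an illustration only (the paper's actual extraction pipeline and library choices are not given in this record), the sketch below computes MFCC-style features from raw audio with plain NumPy: framing, windowed magnitude spectra, a triangular mel filterbank, log compression, and a DCT-II. All function names and parameter defaults here are assumptions, not the authors' implementation.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=22050, n_fft=2048, hop=512, n_mels=40, n_mfcc=13):
    # Frame the signal, window it, take power spectra per frame
    starts = range(0, len(signal) - n_fft + 1, hop)
    frames = np.array([signal[s:s + n_fft] for s in starts]) * np.hanning(n_fft)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Mel filterbank energies, log-compressed
    mel_energy = np.log(spec @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)
    # DCT-II along the mel axis; keep the first n_mfcc coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n + 0.5)[None, :] * np.arange(n_mfcc)[:, None])
    return mel_energy @ dct.T  # shape: (n_frames, n_mfcc)

# Demo on one second of synthetic audio (a 440 Hz tone)
sr = 22050
t = np.arange(sr) / sr
features = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(features.shape)  # (40, 13): 40 frames, 13 coefficients each
```

In practice, a matrix of such per-frame coefficient vectors (or its image-like stacking) is what a CNN classifier would consume; established libraries (e.g. librosa) provide equivalent, better-optimized extraction.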
dc.language.iso: en (en_US)
dc.publisher: MDPI AG (en_US)
dc.relation.ispartof: Urban Science (en_US)
dc.subject: acoustic event detection and classification; urban sound classes; machine learning; convolutional neural networks; feature extraction; smart cities; IoT (en_US)
dc.title: Urban Sound Classification for IoT Devices in Smart City Infrastructures (en_US)
dc.type: Article (en_US)
dc.identifier.doi: 10.3390/urbansci9120517
dc.identifier.url: https://www.mdpi.com/2413-8851/9/12/517/pdf
dc.identifier.volume: 9
dc.identifier.issue: 12
dc.identifier.fpage: 517
item.fulltext: With Fulltext
item.grantfulltext: open
crisitem.author.dept: Faculty of Mechanical Engineering
Appears in Collections:Faculty of Mechanical Engineering: Journal Articles
Files in This Item:
File: Urban_Sound_Classification_for_IoT_Devices_in_Smar.pdf | Size: 3.21 MB | Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.