Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12188/31549
DC Field | Value | Language
dc.contributor.author | Konak, Orhan | en_US
dc.contributor.author | van de Water, Robin | en_US
dc.contributor.author | Döring, Valentin | en_US
dc.contributor.author | Fiedler, Tobias | en_US
dc.contributor.author | Liebe, Lucas | en_US
dc.contributor.author | Masopust, Leander | en_US
dc.contributor.author | Postnov, Kirill | en_US
dc.contributor.author | Sauerwald, Franz | en_US
dc.contributor.author | Treykorn, Felix | en_US
dc.contributor.author | Wischmann, Alexander | en_US
dc.contributor.author | Gjoreski, Hristijan | en_US
dc.contributor.author | Luštrek, Mitja | en_US
dc.contributor.author | Arnrich, Bert | en_US
dc.date.accessioned | 2024-10-08T13:17:12Z | -
dc.date.available | 2024-10-08T13:17:12Z | -
dc.date.issued | 2023-12-02 | -
dc.identifier.uri | http://hdl.handle.net/20.500.12188/31549 | -
dc.description.abstract | Sensor-based human activity recognition is becoming ever more prevalent. The increasing importance of distinguishing human movements, particularly in healthcare, coincides with the advent of increasingly compact sensors. A complex sequence of individual steps currently characterizes the activity recognition pipeline. It involves separate data collection, preparation, and processing steps, resulting in a heterogeneous and fragmented process. To address these challenges, we present a comprehensive framework, HARE, which seamlessly integrates all necessary steps. HARE offers synchronized data collection and labeling, integrated pose estimation for data anonymization, a multimodal classification approach, and a novel method for determining optimal sensor placement to enhance classification results. Additionally, our framework incorporates real-time activity recognition with on-device model adaptation capabilities. To validate the effectiveness of our framework, we conducted extensive evaluations using diverse datasets, including our own collected dataset focusing on nursing activities. Our results show that HARE’s multimodal and on-device trained model outperforms conventional single-modal and offline variants. Furthermore, our vision-based approach for optimal sensor placement yields comparable results to the trained model. Our work advances the field of sensor-based human activity recognition by introducing a comprehensive framework that streamlines data collection and classification while offering a novel method for determining optimal sensor placement. | en_US
dc.publisher | MDPI AG | en_US
dc.relation.ispartof | Sensors | en_US
dc.title | HARE: Unifying the Human Activity Recognition Engineering Workflow | en_US
dc.identifier.doi | 10.3390/s23239571 | -
dc.identifier.url | https://www.mdpi.com/1424-8220/23/23/9571/pdf | -
dc.identifier.volume | 23 | -
dc.identifier.issue | 23 | -
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
Appears in Collections: Faculty of Electrical Engineering and Information Technologies: Journal Articles
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.