Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.12188/31549

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Konak, Orhan | en_US |
| dc.contributor.author | van de Water, Robin | en_US |
| dc.contributor.author | Döring, Valentin | en_US |
| dc.contributor.author | Fiedler, Tobias | en_US |
| dc.contributor.author | Liebe, Lucas | en_US |
| dc.contributor.author | Masopust, Leander | en_US |
| dc.contributor.author | Postnov, Kirill | en_US |
| dc.contributor.author | Sauerwald, Franz | en_US |
| dc.contributor.author | Treykorn, Felix | en_US |
| dc.contributor.author | Wischmann, Alexander | en_US |
| dc.contributor.author | Gjoreski, Hristijan | en_US |
| dc.contributor.author | Luštrek, Mitja | en_US |
| dc.contributor.author | Arnrich, Bert | en_US |
| dc.date.accessioned | 2024-10-08T13:17:12Z | - |
| dc.date.available | 2024-10-08T13:17:12Z | - |
| dc.date.issued | 2023-12-02 | - |
| dc.identifier.uri | http://hdl.handle.net/20.500.12188/31549 | - |
| dc.description.abstract | Sensor-based human activity recognition is becoming ever more prevalent. The increasing importance of distinguishing human movements, particularly in healthcare, coincides with the advent of increasingly compact sensors. A complex sequence of individual steps currently characterizes the activity recognition pipeline. It involves separate data collection, preparation, and processing steps, resulting in a heterogeneous and fragmented process. To address these challenges, we present a comprehensive framework, HARE, which seamlessly integrates all necessary steps. HARE offers synchronized data collection and labeling, integrated pose estimation for data anonymization, a multimodal classification approach, and a novel method for determining optimal sensor placement to enhance classification results. Additionally, our framework incorporates real-time activity recognition with on-device model adaptation capabilities. To validate the effectiveness of our framework, we conducted extensive evaluations using diverse datasets, including our own collected dataset focusing on nursing activities. Our results show that HARE’s multimodal and on-device trained model outperforms conventional single-modal and offline variants. Furthermore, our vision-based approach for optimal sensor placement yields comparable results to the trained model. Our work advances the field of sensor-based human activity recognition by introducing a comprehensive framework that streamlines data collection and classification while offering a novel method for determining optimal sensor placement. | en_US |
| dc.publisher | MDPI AG | en_US |
| dc.relation.ispartof | Sensors | en_US |
| dc.title | HARE: Unifying the Human Activity Recognition Engineering Workflow | en_US |
| dc.identifier.doi | 10.3390/s23239571 | - |
| dc.identifier.url | https://www.mdpi.com/1424-8220/23/23/9571/pdf | - |
| dc.identifier.volume | 23 | - |
| dc.identifier.issue | 23 | - |
| item.fulltext | No Fulltext | - |
| item.grantfulltext | none | - |

Appears in Collections: Faculty of Electrical Engineering and Information Technologies: Journal Articles
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.