Learning Human Activity From Visual Data Using Deep Learning
Advances in wearable technologies have the potential to revolutionize and improve people's lives. The gains go beyond the personal sphere, encompassing business and, by extension, the global economy. These technologies are incorporated in electronic devices that collect data from consumers' bodies and their immediate environment. Human activity recognition, which uses various body sensors and modalities either separately or simultaneously, is one of the most important areas of wearable technology development. In real-life scenarios, the number of sensors deployed is dictated by practical and financial considerations. In the research for this article, we built on our earlier work and reduced the number of required sensors, limiting ourselves to first-person vision data for activity recognition. Nonetheless, our results outperform the state of the art by more than 4% in F1 score.
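The article reports its improvement in terms of F1 score. For reference only, the sketch below shows one common way such a score is computed for multi-class activity recognition (macro-averaged F1); the activity labels and predictions are hypothetical examples and are not taken from the article or its dataset.

```python
# Illustrative macro-averaged F1 for multi-class activity recognition.
# This is a generic reference implementation, not the authors' evaluation code.

def macro_f1(y_true, y_pred):
    """Average the per-class F1 scores over all classes present in the data."""
    classes = set(y_true) | set(y_pred)
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical activity labels for a handful of first-person video clips.
true_labels = ["walking", "eating", "walking", "cooking", "eating"]
pred_labels = ["walking", "eating", "cooking", "cooking", "eating"]
print(f"Macro F1: {macro_f1(true_labels, pred_labels):.3f}")
```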
Other Information
Published in: IEEE Access
License: https://creativecommons.org/licenses/by/4.0/
See article on publisher's website: https://dx.doi.org/10.1109/access.2021.3099567
Funding
Open Access funding provided by the Qatar National Library.
Language
- English
Publisher
- IEEE
Publication Year
- 2021
License statement
This Item is licensed under the Creative Commons Attribution 4.0 International License
Institution affiliated with
- Hamad Bin Khalifa University
- College of Science and Engineering - HBKU