The recognition and understanding of human activities remains an active research topic in computer vision, with wide application in human-computer interaction, human monitoring systems, human behavior analysis, and other fields. The problem is challenging due to the variety of interactions between persons and the motion dynamics between humans and objects. Many algorithms based on deep neural network models have been proposed to recognize human activity. Deep neural network classifiers have been built around human action targets, but they sometimes fail to classify an action that should have been recognized. To overcome this problem, this paper proposes a model that combines a deep convolutional network with long short-term memory (LSTM) to recognize human motion poses. The proposed model extracts key-point features automatically with OpenPose and classifies activities from the trajectories of the human body. The approach consists of three main steps: (1) data pre-processing and feature extraction, (2) deep learning with a convolutional neural network and long short-term memory, and (3) efficiency measurement and evaluation of the experimental results. We tested our approach on activities of daily living using the URFall dataset. The experimental results indicate that our framework offers performance improvements for activity classification, achieving maximum success rates of 81% and 76.9% with 256 and 128 hidden nodes, respectively, on data set II.
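The pre-processing step above (extracting OpenPose key points before feeding them to the CNN-LSTM) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes the COCO 18-keypoint layout that OpenPose can emit (index 1 = neck, index 8 = right hip) and shows one common way to make a per-frame pose translation- and scale-invariant:

```python
# Hypothetical sketch of per-frame key-point normalization, assuming the
# COCO 18-keypoint OpenPose layout (index 1 = neck, index 8 = right hip).
# The actual paper's pre-processing may differ.

def normalize_pose(keypoints, neck_idx=1, hip_idx=8):
    """keypoints: list of (x, y) tuples for one frame.

    Returns the key points centered on the neck and scaled by the
    neck-to-hip (torso) distance, so poses from different positions
    and camera distances become comparable."""
    nx, ny = keypoints[neck_idx]
    hx, hy = keypoints[hip_idx]
    # Torso length as the scale factor; fall back to 1.0 to avoid
    # division by zero when the two joints coincide.
    torso = ((hx - nx) ** 2 + (hy - ny) ** 2) ** 0.5 or 1.0
    return [((x - nx) / torso, (y - ny) / torso) for x, y in keypoints]
```

A sequence of such normalized frames would then form the trajectory input to the CNN-LSTM classifier.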
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.