Machine learning and uLBP histograms for posture recognition of dependent people via Big Data Hadoop and Spark platform
DOI: https://doi.org/10.15837/ijccc.2023.1.4981
Keywords: Local Binary Pattern, Hadoop, Spark, Random Forest, surveillance system at home
Abstract
For the dependent population, fall accidents are a serious health issue, particularly when a pandemic saturates health structures. It is therefore highly desirable to quarantine patients at home in order to avoid the spread of contagious diseases. A dedicated home surveillance system may become an urgent need to improve patients' living autonomy and significantly reduce assistance costs while preserving their privacy and intimacy. A domestic fall accident can be regarded as an abrupt pose transition; accordingly, normal human postures have to be recognized first. To this end, we propose a novel, scalable big data method for posture recognition that uses uniform local binary pattern (uLBP) histograms for pattern extraction. Instead of saving the pixels of the entire image, only the patterns are kept for the identification of human postures. By doing so, we preserve people's intimacy, which is very important in e-health. To our knowledge, our work is the first to use this approach in a big data platform context for fall event detection while relying on Random Forest instead of complex deep learning methods. The results of our approach are very encouraging in comparison with complex architectures such as convolutional neural networks (CNN) and deep feedforward neural networks (DFFNN).
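For illustration, the following minimal Python sketch (not the authors' code) shows how such a pipeline can be assembled with scikit-image and Spark MLlib: each frame is reduced to a 10-bin uLBP histogram, and only these histograms are handed to a Random Forest classifier. The neighbourhood parameters (P = 8, R = 1), tree count, posture labels, and synthetic frames are illustrative assumptions, not values reported in the paper.

import numpy as np
from skimage.feature import local_binary_pattern
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import RandomForestClassifier

P, R = 8, 1  # 8 neighbours on a radius-1 circle -> 10 uniform-LBP bins (assumed values)

def ulbp_histogram(gray_image):
    """Replace the raw pixels of a posture image by its normalized uLBP histogram."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist  # 10-dimensional descriptor; the image itself is discarded

spark = SparkSession.builder.appName("posture-ulbp-rf").getOrCreate()

# Placeholder data: random 64x64 "frames" with hypothetical posture labels
# (0 = standing, 1 = sitting, 2 = lying); a real system would use segmented
# silhouette images captured by the home surveillance camera.
rng = np.random.default_rng(0)
training_frames = [(rng.integers(0, 3), rng.integers(0, 256, (64, 64), dtype=np.uint8))
                   for _ in range(30)]

rows = [(float(label), Vectors.dense(ulbp_histogram(frame).tolist()))
        for label, frame in training_frames]
train_df = spark.createDataFrame(rows, ["label", "features"])

rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100)
model = rf.fit(train_df)           # distributed training on the Spark cluster
predictions = model.transform(train_df)  # posture predictions per histogram

In a real deployment the histograms, not the frames, would be stored and distributed over the Hadoop/Spark platform, which is what keeps the approach both scalable and privacy-preserving.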
License
Copyright (c) 2023 Fayez AlFayez, Heni Bouhamed
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
ONLINE OPEN ACCESS: Access to the full text of each article and each issue is allowed for free under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
You are free to:
-Share: copy and redistribute the material in any medium or format;
-Adapt: remix, transform, and build upon the material.
The licensor cannot revoke these freedoms as long as you follow the license terms.
DISCLAIMER: The author(s) of each article appearing in International Journal of Computers Communications & Control is/are solely responsible for the content thereof; the publication of an article shall not constitute or be deemed to constitute any representation by the Editors or Agora University Press that the data presented therein are original, correct or sufficient to support the conclusions reached or that the experiment design or methodology is adequate.