Predicting Human Activity – State of the Art

Article in English, DOI: 10.14313/PAR_248/31

Ekemeyong Esther, Teresa Zielińska, Faculty of Power and Aeronautical Engineering, Warsaw University of Technology


Abstract

Predicting human actions is a highly topical research field, in which artificial intelligence methods are commonly applied. These methods enable early recognition and classification of human activities. Such knowledge is essential in work on robots and other interactive systems that communicate and cooperate with people, as it allows such devices to react early and to plan their future actions properly. However, due to the complexity of human actions, predicting them is a difficult task. In this article, we review state-of-the-art methods and summarize recent advances in predicting human activity. We focus in particular on four approaches based on machine learning, namely methods using artificial neural networks, support vector machines, probabilistic models, and decision trees. We discuss the advantages and disadvantages of these approaches, as well as the current challenges in predicting human activity. In addition, we describe the types of sensors and the datasets commonly used in research on predicting and recognizing human actions. We analyze the quality of the methods based on the prediction accuracy reported in scientific articles, and we discuss the importance of the data type and of the parameters of the machine learning models. Finally, we summarize the latest research trends. The article is intended to help in choosing the right method of predicting human activity, along with an indication of the tools and resources necessary to achieve this goal effectively.

Keywords

activity prediction, inferring human action, robot-human interaction
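By way of illustration only (this sketch is not taken from the article or any of the cited works): the decision-tree family surveyed here is often applied to simple statistical features extracted from wearable accelerometer windows. The thresholds, feature names, and activity labels below are hypothetical, chosen merely to show the feature-extraction-then-rule-based-classification pattern.

```python
# Illustrative sketch of decision-tree-style activity classification
# from toy accelerometer data. All thresholds are hypothetical.
from statistics import mean, variance

def extract_features(window):
    """window: list of acceleration magnitudes (m/s^2) from one time window."""
    return {"mean": mean(window), "var": variance(window)}

def predict_activity(features):
    # Hand-coded decision-tree rules (illustrative, not from any cited work).
    if features["var"] > 1.0:       # high motion energy -> dynamic activity
        return "walking"
    if features["mean"] > 10.5:     # sustained offset from gravity (hypothetical)
        return "standing"
    return "sitting"                # low energy, magnitude near gravity

walking_window = [9.8, 11.2, 8.1, 12.0, 9.0, 10.9]   # fluctuating magnitude
sitting_window = [9.8, 9.81, 9.79, 9.8, 9.8, 9.81]   # near-constant gravity

print(predict_activity(extract_features(walking_window)))  # walking
print(predict_activity(extract_features(sitting_window)))  # sitting
```

In practice such rules are learned from labeled data (e.g. with CART-style induction) rather than hand-written, but the pipeline shape, windowing, feature extraction, and threshold tests, is the same.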


Bibliography

  1. Chiu H.-K., Adeli E., Wang B., Huang D.-A., Niebles J.C., Action-Agnostic Human Pose Forecasting, [In:] 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, 2019, 1423–1432, DOI: 10.1109/WACV.2019.00156.
  2. Coppola C., Faria D.R., Nunes U., Bellotto N., Social Activity Recognition Based on Probabilistic Merging of Skeleton Features with Proximity Priors from RGB-D Data, [In:] 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2016, 5055–5061, DOI: 10.1109/IROS.2016.7759742.
  3. Ravichandar H.C., Trombetta D., Dani A.P., Human Intention-Driven Learning Control for Trajectory Synchronization in Human-Robot Collaborative Tasks, „IFAC-PapersOnLine”, Vol. 51, No. 34, 2019, 1–7, DOI: 10.1016/j.ifacol.2019.01.001.
  4. Razali H., Mordan T., Alahi A., Pedestrian Intention Prediction: A Convolutional Bottom-Up Multi-Task Approach, „Transportation Research Part C: Emerging Technologies”, Vol. 130, 2021, 103259, DOI: 10.1016/j.trc.2021.103259.
  5. Bibi S., Anjum N., Amjad T., McRobbie G., Ramzan N., Human Interaction Anticipation by Combining Deep Features and Transformed Optical Flow Components, „IEEE Access”, Vol. 8, 2020, 137646–137657, DOI: 10.1109/ACCESS.2020.3012557.
  6. Manns M., Tuli T.B., Schreiber F., Identifying Human Intention During Assembly Operations using Wearable Motion Capturing Systems Including Eye Focus, „Procedia CIRP”, Vol. 104, 2021, 924–929, DOI: 10.1016/j.procir.2021.11.155.
  7. Ryoo M.S., Grauman K., Aggarwal J.K., A task-Driven Intelligent Workspace System to Provide Guidance Feedback, „Computer Vision and Image Understanding”, Vol. 114, No. 5, 2010, 520–534, DOI: 10.1016/j.cviu.2009.12.009.
  8. Casalino A., Massarenti N., Zanchettin A.M., Rocco P., Predicting the Human Behaviour in Human-Robot Co-Assemblies: an Approach Based on Suffix Trees, [In:] 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, 11108–11114, DOI: 10.1109/IROS45743.2020.9341301.
  9. Ding H., Shangguan L., Yang Z., Han J., Zhou Z., Yang P., Xi W., Zhao J., FEMO: A Platform for Free-Weight Exercise Monitoring with RFIDs, [In:] Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, 2015, 141–154, DOI: 10.1145/2809695.2809708.
  10. Kulic D., Croft E.A., Affective State Estimation for Human-Robot Interaction, „IEEE Transactions on Robotics”, Vol. 23, No. 5, 2007, 991–1000, DOI: 10.1109/TRO.2007.904899.
  11. Vaniya S.M., Bharathi B., Exploring Object Segmentation Methods in Visual Surveillance for Human Activity Recognition, [In:] 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), IEEE, 2016, 520–525, DOI: 10.1109/ICGTSPICC.2016.7955356.
  12. Abobakr A., Hossny M., Nahavandi S., A Skeleton-Free Fall Detection System from Depth Images using Random Decision Forest, „IEEE Systems Journal”, Vol. 12, No. 3, 2017, 2994–3005, DOI: 10.1109/JSYST.2017.2780260.
  13. Bandi C., Thomas U., Skeleton-Based Action Recognition for Human-Robot Interaction using Self-Attention Mechanism, [In:] 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), IEEE, 2021, DOI: 10.1109/FG52635.2021.9666948.
  14. Li B., Tian J., Zhang Z., Feng H., Li X., Multitask Non-Autoregressive Model for Human Motion Prediction, „IEEE Transactions on Image Processing”, Vol. 30, 2020, 2562–2574, DOI: 10.1109/TIP.2020.3038362.
  15. Li M., Chen S., Chen X., Zhang Y., Wang Y., Tian Q., Symbiotic Graph Neural Networks for 3D Skeleton-Based Human Action Recognition and Motion Prediction, „IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 44, No. 6, 2022, 3316–3333, DOI: 10.1109/TPAMI.2021.3053765.
  16. Pham C., Nguyen-Thai S., Tran-Quang H., Tran S., Vu H., Tran T.-H., Le T.-L., SensCapsNet: Deep Neural Network for Non-Obtrusive Sensing Based Human Activity Recognition, „IEEE Access”, Vol. 8, 2020, 86934–86946, DOI: 10.1109/ACCESS.2020.2991731.
  17. Zhu R., Xiao Z., Li Y., Yang M., Tan Y., Zhou L., Lin S., Wen H., Efficient Human Activity Recognition Solving the Confusing Activities via Deep Ensemble Learning, „IEEE Access”, Vol. 7, 2019, 75490–75499, DOI: 10.1109/ACCESS.2019.2922104.
  18. Wang K., He J., Zhang L., Attention-Based Convolutional Neural Network for Weakly Labeled Human Activities’ Recognition with Wearable Sensors, „IEEE Sensors Journal”, Vol. 19, No. 17, 2019, 7598–7604, DOI: 10.1109/JSEN.2019.2917225.
  19. Gupta N., Gupta S.K., Pathak R.K., Jain V., Rashidi P., Suri J.S., Human Activity Recognition in Artificial Intelligence Framework: A Narrative Review, „Artificial Intelligence Review”, Vol. 55, 2022, 4755–4808, DOI: 10.1007/s10462-021-10116-x.
  20. Abdussami S., Nagendraprasad S., Shivarajakumara K., Singh S., Thyagarajamurthy A., A Review on Action Recognition and Action Prediction of Human(s) using Deep Learning Approaches, „International Journal of Computer Applications”, Vol. 177, No. 20, 2019, DOI: 10.5120/ijca2019919605.
  21. Razin Y.S., Pluckter K., Ueda J., Feigh K., Predicting Task Intent from Surface Electromyography using Layered Hidden Markov Models, „IEEE Robotics and Automation Letters”, Vol. 2, No. 2, 2017, 1180–1185, DOI: 10.1109/LRA.2017.2662741.
  22. Yao B., Fei-Fei L., Action Recognition with Exemplar Based 2.5D Graph Matching, [In:] European Conference on Computer Vision, Springer, 2012, 173–186, DOI: 10.1007/978-3-642-33765-9_13.
  23. Diba A., Fayyaz M., Sharma V., Paluri M., Gall J., Stiefelhagen R., Gool L.V., Large Scale Holistic Video Understanding, [In:] European Conference on Computer Vision, Springer, 2020, 593–610, DOI: 10.48550/arXiv.1904.11451.
  24. Du Y., Lim Y., Tan Y., A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction, „Sensors”, Vol. 19, No. 20, 2019, DOI: 10.3390/s19204474.
  25. Phyo C.N., Zin T.T., Tin P., Deep Learning for Recognizing Human Activities using Motions of Skeletal Joints, „IEEE Transactions on Consumer Electronics”, Vol. 65, No. 2, 2019, 243–252, DOI: 10.1109/TCE.2019.2908986.
  26. Mici L., Parisi G.I., Wermter S., Recognition and Prediction of Human-Object Interactions with a Self-Organizing Architecture, [In:] 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, DOI: 10.1109/IJCNN.2018.8489178.
  27. Schydlo P., Rakovic M., Jamone L., Santos-Victor J., Anticipation in Human-Robot Cooperation: A Recurrent Neural Network Approach for Multiple Action Sequences Prediction, [In:] 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2018, DOI: 10.1109/ICRA.2018.8460924.
  28. Dixon S., Hansen R., Deneke W., Probabilistic Grammar Induction for Long Term Human Activity Parsing, [In:] 2019 International Conference on Computational Science and Computational Intelligence (CSCI), IEEE, 2019, 306–311, DOI: 10.1109/CSCI49370.2019.00061.
  29. Duckworth P., Hogg D.C., Cohn A.G., Unsupervised Human Activity Analysis for Intelligent Mobile Robots, „Artificial Intelligence”, Vol. 270, 2019, 67–92, DOI: 10.1016/j.artint.2018.12.005.
  30. Fang L., Liu X., Liu L., Xu H., Kang W., JGR-P2O: Joint Graph Reasoning Based Pixel-to-Offset Prediction Network for 3D Hand Pose Estimation from a Single Depth Image, [In:] European Conference on Computer Vision, Springer, 2020, 120–137, DOI: 10.1007/978-3-030-58539-6_8.
  31. Hu J.-F., Zheng W.-S., Ma L., Wang G., Lai J., Zhang J., Early Action Prediction by Soft Regression, „IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 41, No. 11, 2018, 2568–2583, DOI: 10.1109/TPAMI.2018.2863279.
  32. Li M., Chen S., Liu Z., Zhang Z., Xie L., Tian Q., Zhang Y., Skeleton Graph Scattering Networks for 3D Skeleton-Based Human Motion Prediction, [In:] Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, 854–864, DOI: 10.1109/ICCVW54120.2021.00101.
  33. Ranasinghe S., Al Machot F., Mayr H.C., A Review on Applications of Activity Recognition Systems with Regard to Performance and Evaluation, „International Journal of Distributed Sensor Networks”, Vol. 12, No. 8, 2016, DOI: 10.1177/1550147716665520.
  34. Xia K., Huang J., Wang H., LSTM-CNN Architecture for Human Activity Recognition, „IEEE Access”, Vol. 8, 2020, 56855–56866, DOI: 10.1109/ACCESS.2020.2982225.
  35. Saha J., Ghosh D., Chowdhury C., Bandyopadhyay S., Smart Handheld Based Human Activity recognition using Multiple Instance Multiple Label Learning, „Wireless Personal Communications”, Vol. 117, No. 2, 2021, 923–943, DOI: 10.1007/s11277-020-07903-0.
  36. Zhou X., Liang W., Kevin I., Wang K., Wang H., Yang L.T., Jin Q., Deep-Learning-Enhanced Human Activity Recognition for Internet of Healthcare Things, „IEEE Internet of Things Journal”, Vol. 7, No. 7, 2020, 6429–6438, DOI: 10.1109/JIOT.2020.2985082.
  37. Chen J., Sun Y., Sun S., Improving Human Activity Recognition Performance by Data Fusion and Feature Engineering, „Sensors”, Vol. 21, No. 3, 2021, DOI: 10.3390/s21030692.
  38. Tian Y., Zhang J., Chen L., Geng Y., Wang X., Single Wearable Accelerometer-Based Human Activity Recognition via Kernel Discriminant Analysis and QPSO-KELM Classifier, „IEEE Access”, Vol. 7, 2019, 109216–109227, DOI: 10.1109/ACCESS.2019.2933852.
  39. Yao S., Zhao Y., Zhang A., Hu S., Shao H., Zhang C., Su L., Abdelzaher T., Deep Learning for the Internet of Things, „Computer”, Vol. 51, No. 5, 2018, 32–41, DOI: 10.1109/MC.2018.2381131.
  40. Garcia-Gonzalez D., Rivero D., Fernandez-Blanco E., Luaces M.R., A Public Domain Dataset for Real-Life Human Activity Recognition using Smartphone Sensors, „Sensors”, Vol. 20, No. 8, 2020, DOI: 10.3390/s20082200.
  41. Lawal I.A., Bano S., Deep Human Activity Recognition using Wearable Sensors, [In:] Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, 2019, 45–48, DOI: 10.1145/3316782.3321538.
  42. Bashar S.K., Al Fahim A., Chon K.H., Smartphone Based Human Activity Recognition with Feature Selection and Dense Neural Network, [In:] 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), IEEE, 2020, 5888–5891, DOI: 10.1109/EMBC44109.2020.9176239.
  43. Yang C., Wang X., Mao S., RFID-Based 3D Human Pose Tracking: A Subject Generalization Approach, „Digital Communications and Networks”, Vol. 8, No. 3, 2022, 278–288, DOI: 10.1016/j.dcan.2021.09.002.
  44. Rohei M.S., Salwana E., Shah N.B.A.K., Kakar A.S., Design and Testing of an Epidermal RFID Mechanism in a Smart Indoor Human Tracking System, „IEEE Sensors Journal”, Vol. 21, No. 4, 2021, 5476–5486, DOI: 10.1109/JSEN.2020.3036233.
  45. Li X., Zhang Y., Marsic I., Sarcevic A., Burd R.S., Deep Learning for RFID-Based Activity Recognition, [In:] Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems, 2016, 164–175, DOI: 10.1145/2994551.2994569.
  46. Lin G., Jiang W., Xu S., Zhou X., Guo X., Zhu Y., He X., Human Activity Recognition using Smartphones With WiFi Signals, „IEEE Transactions on Human-Machine Systems”, 2022, 1–12, DOI: 10.1109/THMS.2022.3188726.
  47. Yang J., Chen X., Wang D., Zou H., Lu C.X., Sun S., Xie L., Deep Learning and Its Applications to WiFi Human Sensing: A Benchmark and A Tutorial, arXiv preprint arXiv:2207.07859, 2022.
  48. Mei Y., Jiang T., Ding X., Zhong Y., Zhang S., Liu Y., WiWave: WiFi-based Human Activity Recognition using the Wavelet Integrated CNN, [In:] 2021 IEEE/CIC International Conference on Communications in China (ICCC Workshops), IEEE, 2021, 100–105.
  49. Yan H., Zhang Y., Wang Y., Xu K., WiAct: A Passive WiFi-Based Human Activity Recognition System, „IEEE Sensors Journal”, Vol. 20, No. 1, 2019, 296–305, DOI: 10.1109/JSEN.2019.2938245.
  50. Fei H., Xiao F., Han J., Huang H., Sun L., Multi-Variations Activity Based Gaits Recognition using Commodity WiFi, „IEEE Transactions on Vehicular Technology”, Vol. 69, No. 2, 2020, 2263–2273, DOI: 10.1109/TVT.2019.2962803.
  51. Ding X., Jiang T., Zhong Y., Wu S., Yang J., Zeng J., Wi-Fi-Based Location-Independent Human Activity Recognition with Attention Mechanism Enhanced Method, „Electronics”, Vol. 11, No. 4, 2022, DOI: 10.3390/electronics11040642.
  52. Li J., Jiang T., Yu J., Ding X., Zhong Y., Liu Y., An WiFi-Based Human Activity Recognition System Under Multi-source Interference, [In:] International Conference in Communications, Signal Processing, and Systems, Springer, 2022, 937–944, DOI: 10.1007/978-981-19-0390-8_118.
  53. Sung J., Ponce C., Selman B., Saxena A., Unstructured Human Activity Detection from RGBD Images, [In:] 2012 IEEE International Conference on Robotics and Automation, IEEE, 842–849, DOI: 10.1109/ICRA.2012.6224591.
  54. Koppula H., Saxena A., Learning Spatio-Temporal Structure from RGB-D Videos for Human Activity Detection and Anticipation, [In:] International Conference on Machine Learning, PMLR, 2013, 792–800.
  55. Seidenari L., Varano V., Berretti S., Bimbo A., Pala P., Recognizing Actions from Depth Cameras as Weakly Aligned Multi-Part Bag-of-Poses, [In:] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013, 479–485, DOI: 10.1109/CVPRW.2013.77.
  56. Reddy K.K., Shah M., Recognizing 50 Human Action Categories of Web Videos, „Machine Vision and Applications”, Vol. 24, No. 5, 2013, 971–981, DOI: 10.1007/s00138-012-0450-4.
  57. Schuldt C., Laptev I., Caputo B., Recognizing Human Actions: a Local SVM Approach, [In:] Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 3, IEEE, 2004, 32–36, DOI: 10.1109/ICPR.2004.1334462.
  58. Blank M., Gorelick L., Shechtman E., Irani M., Basri R., Actions as Space-Time Shapes, [In:] Tenth IEEE International Conference on Computer Vision (ICCV’05), Vol. 2, IEEE, 2005, 1395–1402, DOI: 10.1109/ICCV.2005.28.
  59. Anguita D., Ghio A., Oneto L., Parra Perez X., Reyes Ortiz J.L., A Public Domain Dataset for Human Activity Recognition using Smartphones, [In:] Proceedings of the 21st International European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2013, 437–442.
  60. Malekzadeh M., Clegg R.G., Cavallaro A., Haddadi H., Mobile Sensor Data Anonymization, [In:] Proceedings of the International Conference on Internet of Things Design and Implementation, 2019, 49–58, DOI: 10.1145/3302505.3310068.
  61. Guo L., Wang L., Lin C., Liu J., Lu B., Fang J., Liu Z., Shan Z., Yang J., Guo S., Wiar: A Public Dataset for Wifi-Based Activity Recognition, „IEEE Access”, Vol. 7, 2019, 154935–154945, DOI: 10.1109/ACCESS.2019.2947024.
  62. Baha’A A., Almazari M.M., Alazrai R., Daoud M.I., A Dataset for Wi-Fi-Based Human Activity Recognition in Line-Of-Sight and Non-Line-Of-Sight Indoor Environments, „Data in Brief”, Vol. 33, 2020, DOI: 10.1016/j.dib.2020.106534.
  63. Van Kasteren T., Noulas A., Englebienne G., Kröse B., Accurate Activity Recognition in a Home Setting, [In:] Proceedings of the 10th International Conference on Ubiquitous Computing, 2008, DOI: 10.1145/1409635.1409637.
  64. Zheng X., Wang M., Ordieres-Meré J., Comparison of Data Preprocessing Approaches for Applying Deep Learning to Human Activity Recognition in the Context of Industry 4.0, „Sensors”, Vol. 18, No. 7, 2018, DOI: 10.3390/s18072146.
  65. Kotsiantis S.B., Kanellopoulos D., Pintelas P.E., Data Preprocessing for Supervised Learning, „International Journal of Computer Science”, Vol. 1, No. 2, 2006, 111–117.
  66. Preece S.J., Goulermas J.Y., Kenney L.P., Howard D., A Comparison of Feature Extraction Methods for the Classification of Dynamic Activities from Accelerometer Data, „IEEE Transactions on Biomedical Engineering”, Vol. 56, No. 3, 2008, 871–879, DOI: 10.1109/TBME.2008.2006190.
  67. Ravi D., Wong C., Lo B., Yang G.-Z., A Deep Learning Approach to On-Node Sensor Data Analytics for Mobile or Wearable Devices, „IEEE Journal of Biomedical and Health Informatics”, Vol. 21, No. 1, 2016, 56–64, DOI: 10.1109/JBHI.2016.2633287.
  68. Liu Z., Xu L., Jia Y., Guo S., Human Activity Recognition Based on Deep Learning with Multi-Spectrogram, [In:] 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP), IEEE, 2020, 11–15, DOI: 10.1109/ICSIP49896.2020.9339335.
  69. Hur T., Bang J., Huynh-The T., Lee J., Kim J.-I., Lee S., Iss2Image: A Novel Signal-Encoding Technique for CNN-Based Human Activity Recognition, „Sensors”, Vol. 18, No. 11, 2018, DOI: 10.3390/s18113910.
  70. Bloom V., Argyriou V., Makris D., Linear Latent Low Dimensional Space for Online Early Action Recognition and Prediction, „Pattern Recognition”, Vol. 72, 2017, 532–547, DOI: 10.1016/j.patcog.2017.07.003.
  71. Khaire U.M., Dhanalakshmi R., Stability of feature selection algorithm: A review, „Journal of King Saud University – Computer and Information Sciences”, Vol. 34, No. 4, 2022, 1060–1073, DOI: 10.1016/j.jksuci.2019.06.012.
  72. Li K., Fu Y., ARMA-HMM: A New Approach for Early Recognition of Human Activity, [In:] Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), IEEE, 2012, 1779–1782.
  73. Lee D.-G., Lee S.-W., Human Interaction Recognition Framework Based on Interacting Body Part Attention, „Pattern Recognition”, Vol. 128, 2022, DOI: 10.1016/j.patcog.2022.108645.
  74. Kambara M., Sugiura K., Relational Future Captioning Model for Explaining Likely Collisions in Daily Tasks, arXiv preprint arXiv:2207.09083, 2022.
  75. Xu Z., Qing L., Miao J., Activity Auto-Completion: Predicting Human Activities from Partial Videos, [In:] Proceedings of the IEEE International Conference on Computer Vision, 2015, 3191–3199, DOI: 10.1109/ICCV.2015.365.
  76. Meng M., Drira H., Daoudi M., Boonaert J., Human-Object Interaction Recognition by Learning the Distances Between the Object and the Skeleton Joints, [In:] 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Vol. 7, IEEE, 2015, DOI: 10.1109/FG.2015.7284883.
  77. Dutta V., Zielińska T., Predicting Human Actions Taking Into Account Object Affordances, „Journal of Intelligent & Robotic Systems”, Vol. 93, No. 3, 2019, 745–761, DOI: 10.1007/s10846-018-0815-7.
  78. Uzunovic T., Golubovic E., Tucakovic Z., Acikmese Y., Sabanovic A., Task-Based Control and Human Activity Recognition for Human-Robot Collaboration, [In:] IECON 2018 – 44th Annual Conference of the IEEE Industrial Electronics Society, 2018, 5110–5115, DOI: 10.1109/IECON.2018.8591206.
  79. Zheng X., Chen X., Lu X., A Joint Relationship Aware Neural Network for Single-Image 3D Human Pose Estimation, „IEEE Transactions on Image Processing”, Vol. 29, 2020, 4747–4758, DOI: 10.1109/TIP.2020.2972104.
  80. Pavllo D., Feichtenhofer C., Auli M., Grangier D., Modeling Human Motion with Quaternion-Based Neural Networks, „International Journal of Computer Vision”, Vol. 128, No. 4, 2020, 855–872, DOI: 10.1007/s11263-019-01245-6.
  81. Kratzer P., Midlagajni N.B., Toussaint M., Mainprice J., Anticipating Human Intention for Full-Body Motion Prediction in Object Grasping and Placing Tasks, [In:] 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, 1157–1163.
  82. Singh A., Patil D., Omkar S., Eye in the Sky: Real-Time Drone Surveillance System (DSS) for Violent Individuals Identification using Scatternet Hybrid Deep Learning Network, [In:] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, 1629–1637, DOI: 10.1109/CVPRW.2018.00214.
  83. Zhao X., Chen Y., Guo J., Zhao D., A Spatial-Temporal Attention Model for Human Trajectory Prediction, „IEEE/CAA Journal of Automatica Sinica”, Vol. 7, No. 4, 2020, 965–974, DOI: 10.1109/JAS.2020.1003228.
  84. Putra P.U., Shima K., Shimatani K., A Deep Neural Network Model for Multi-View Human Activity Recognition, „PLoS ONE”, Vol. 17, No. 1, 2022, DOI: 10.1371/journal.pone.0262181.
  85. Du Y., Lim Y., Tan Y., Activity Prediction using LSTM in Smart Home, [In:] 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE), IEEE, 2019, 918–919, DOI: 10.1109/GCCE46687.2019.9015492.
  86. Zhang Q., Wang T., Wu H.-N., Li M., Zhu J., Snoussi H., Human Action Prediction Based on Skeleton Data, [In:] 2020 39th Chinese Control Conference (CCC), IEEE, 2020, 6608–6612, DOI: 10.23919/CCC50068.2020.9189122.
  87. Dong M., Xu C., Skeleton-Based Human Motion Prediction With Privileged Supervision, „IEEE Transactions on Neural Networks and Learning Systems”, 2022, DOI: 10.1109/TNNLS.2022.3166861.
  88. Fragkiadaki K., Levine S., Felsen P., Malik J., Recurrent Network Models for Human Dynamics, [In:] Proceedings of the IEEE International Conference on Computer Vision, 2015, 4346–4354.
  89. Feichtenhofer C., Fan H., Malik J., He K., SlowFast Networks for Video Recognition, [In:] Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
  90. Ji S., Xu W., Yang M., Yu K., 3D Convolutional Neural Networks for Human Action Recognition, „IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 35, No. 1, 2013, 221–231, DOI: 10.1109/TPAMI.2012.59.
  91. Alfaifi R., Artoli A.M., Human Action Prediction with 3D-CNN, „SN Computer Science”, Vol. 1, 2020, DOI: 10.1007/s42979-020-00293-x.
  92. Zhou Y., Sun X., Zha Z.-J., Zeng W., MiCT: Mixed 3D/2D Convolutional Tube for Human Action Recognition, [In:] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 449–458, DOI: 10.1109/CVPR.2018.00054.
  93. Gao C., Wang T., Zhang M., Zhu A., Shi P., Snoussi H., 3D Human Motion Prediction Based on Graph Convolution Network and Transformer, [In:] 2021 China Automation Congress (CAC), IEEE, 2021, 2957–2962, DOI: 10.1109/CAC53003.2021.9728062.
  94. Schaefer S., Leung K., Ivanovic B., Pavone M., Leveraging Neural Network Gradients within Trajectory Optimization for Proactive Human-Robot Interactions, [In:] 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, 9673–9679, DOI: 10.1109/ICRA48506.2021.9561443.
  95. Cho H., Yoon S.M., Divide and Conquer-Based 1D CNN Human Activity Recognition using Test Data Sharpening, „Sensors”, Vol. 18, No. 4, 2018, DOI: 10.3390/s18041055.
  96. Bhattacharyya A., Fritz M., Schiele B., Long-Term on-Board Prediction of People in Traffic Scenes Under Uncertainty, [In:] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 4194–4202, DOI: 10.1109/CVPR.2018.00441.
  97. Hoai M., De la Torre F., Max-Margin Early Event Detectors, „International Journal of Computer Vision”, Vol. 107, No. 2, 2014, 191–202, DOI: 10.1007/s11263-013-0683-3.
  98. Lan T., Chen T.-C., Savarese S., A Hierarchical Representation for Future Action Prediction, [In:] European Conference on Computer Vision, Springer, 2014, 689–704, DOI: 10.1007/978-3-319-10578-9_45.
  99. Wang H., Yuan C., Shen J., Yang W., Ling H., Action Unit Detection and Key Frame Selection for Human Activity Prediction, „Neurocomputing”, Vol. 318, 2018, 109–119, DOI: 10.1016/j.neucom.2018.08.037.
  100. Kong Y., Kit D., Fu Y., A Discriminative Model with Multiple Temporal Scales for Action Prediction, [In:] European Conference on Computer Vision, Springer, 2014, 596–611, DOI: 10.1007/978-3-319-10602-1_39.
  101. Ryoo M.S., Human Activity Prediction: Early Recognition of Ongoing Activities from Streaming Videos, [In:] 2011 International Conference on Computer Vision, IEEE, 2011, 1036–1043, DOI: 10.1109/ICCV.2011.6126349.
  102. Alvee B.I., Tisha S.N., Chakrabarty A., Application of Machine Learning Classifiers for Predicting Human Activity, [In:] 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), 2021, 39–44, DOI: 10.1109/IAICT52856.2021.9532572.
  103. Li K., Fu Y., Prediction of Human Activity by Discovering Temporal Sequence Patterns, „IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 36, No. 8, 2014, 1644–1657, DOI: 10.1109/TPAMI.2013.2297321.
  104. Galata A., Johnson N., Hogg D., Learning Variable-Length Markov Models of Behavior, „Computer Vision and Image Understanding”, Vol. 81, No. 3, 2001, 398–413, DOI: 10.1006/cviu.2000.0894.
  105. Cheng Y., Tomizuka M., Long-Term Trajectory Prediction of the Human Hand and Duration Estimation of the Human Action, „IEEE Robotics and Automation Letters”, Vol. 7, No. 1, 2021, 247–254, DOI: 10.1109/LRA.2021.3124524.
  106. Manju D., Radha V., An Enhanced Human Activity Prediction at Rainy and Night Times, „International Journal of Recent Technology and Engineering”, Vol. 8, 2019, 965–970, DOI: 10.35940/ijrte.C4113.098319.
  107. Li C., Wearable Computing: Accelerometer-Based Human Activity Classification using Decision Tree, Utah State University, 2017.
  108. Wang A., Chen H., Zheng C., Zhao L., Liu J., Wang L., Evaluation of Random Forest for Complex Human Activity Recognition using Wearable Sensors, [In:] 2020 International Conference on Networking and Network Applications (NaNA), IEEE, 2020, 310–315, DOI: 10.1109/NaNA51271.2020.00060.
  109. Minor B.D., Doppa J.R., Cook D.J., Learning Activity Predictors from Sensor Data: Algorithms, Evaluation, and Applications, „IEEE Transactions on Knowledge and Data Engineering”, Vol. 29, No. 12, 2017, 2744–2757, DOI: 10.1109/TKDE.2017.2750669.
  110. Yu G., Yuan J., Liu Z., Predicting Human Activities using Spatio-Temporal Structure of Interest Points, [In:] Proceedings of the 20th ACM International Conference on Multimedia, 2012, 1049–1052, DOI: 10.1145/2393347.2396380.
  111. Yu G., Goussies N.A., Yuan J., Liu Z., Fast Action Detection via Discriminative Random Forest Voting and Top-K Subvolume Search, „IEEE Transactions on Multimedia”, Vol. 13, No. 3, 2011, 507–517, DOI: 10.1109/TMM.2011.2128301.
  112. ud din Tahir S.B., Jalal A., Batool M., Wearable Sensors for Activity Analysis using SMO-based Random Forest Over Smart Home and Sports Datasets, [In:] 2020 3rd International Conference on Advancements in Computational Sciences (ICACS), IEEE, 2020, DOI: 10.1109/ICACS47775.2020.9055944.
  113. Halevy A., Norvig P., Pereira F., The Unreasonable Effectiveness of Data, „IEEE Intelligent Systems”, Vol. 24, No. 2, 2009, 8–12, DOI: 10.1109/MIS.2009.36.
  114. Sessions V., Valtorta M., The Effects of Data Quality on Machine Learning Algorithms, Proceedings of the 11th International Conference on Information Quality (ICIQ), Vol. 6, 2006, 485–498.
  115. Chen Y.-L., Wu X., Li T., Cheng J., Ou Y., Xu M., Dimensionality Reduction of Data Sequences for Human Activity Recognition, „Neurocomputing”, Vol. 210, 2016, 294–302, DOI: 10.1016/j.neucom.2015.11.126.
  116. Ray S., Alshouiliy K., Agrawal D.P., Dimensionality Reduction for Human Activity Recognition Using Google Colab, „Information”, Vol. 12, No. 1, 2020, DOI: 10.3390/info12010006.
  117. Darji N.R., Ajila S.A., Increasing Prediction Accuracy for Human Activity Recognition using Optimized Hyperparameters, [In:] 2020 IEEE International Conference on Big Data (Big Data), IEEE, 2020, 2472–2481, DOI: 10.1109/BigData50022.2020.9378000.
  118. Ziaeefard M., Bergevin R., Semantic Human Activity Recognition: A Literature Review, „Pattern Recognition”, Vol. 48, No. 8, 2015, 2329–2345, DOI: 10.1016/j.patcog.2015.03.006.
  119. Cheng H.-T., Sun F.-T., Griss M., Davis P., Li J., You D., NuActiv: Recognizing Unseen New Activities using Semantic Attribute-Based Learning, [In:] Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, 2013, 361–374, DOI: 10.1145/2462456.2464438.
  120. Venkatachalam S., Nair H., Zeng M., Tan C.S., Mengshoel O.J., Shen J.P., Sem-Net: Learning Semantic Attributes for Human Activity Recognition with Deep Belief Networks, „Frontiers in Big Data”, 2022, DOI: 10.3389/fdata.2022.879389.
  121. Mittelman R., Lee H., Kuipers B., Savarese S., Weakly Supervised Learning of Mid-Level Features with Beta-Bernoulli Process Restricted Boltzmann Machines, [In:] 2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013, 476–483, DOI: 10.1109/CVPR.2013.68.
  122. Liu J., Yang Y., Saleemi I., Shah M., Learning Semantic Features for Action Recognition via Diffusion Maps, „Computer Vision and Image Understanding”, Vol. 116, No. 3, 2012, 361–377, DOI: 10.1016/j.cviu.2011.08.010.
  123. Liu J., Wang X., Li T., Yang J., Spatio-Temporal Semantic Features for Human Action Recognition, „KSII Transactions on Internet and Information Systems”, Vol. 6, No. 10, 2012, 2632–2649, DOI: 10.3837/tiis.2012.10.011.
  124. Sunkesula S.P.R., Dabral R., Ramakrishnan G., Lighten: Learning Interactions with Graph and Hierarchical Temporal Networks for HOI in Videos, [In:] Proceedings of the 28th ACM International Conference on Multimedia, 2020, 691–699, DOI: 10.1145/3394171.3413778.
  125. Aliakbarian M.S., Saleh F.S., Salzmann M., Fernando B., Petersson L., Andersson L., Encouraging LSTMs to Anticipate Actions Very Early, [In:] Proceedings of the IEEE International Conference on Computer Vision, 2017, 280–289.
  126. Pirri F., Mauro L., Alati E., Ntouskos V., Izadpanahkakhk M., Omrani E., Anticipation and Next Action Forecasting in Video: an End-to-End Model with Memory, arXiv preprint arXiv:1901.03728, 2019.
  127. Uddin M.Z., A Wearable Sensor-Based Activity Prediction System to Facilitate Edge Computing in Smart Healthcare System, „Journal of Parallel and Distributed Computing”, Vol. 123, 2019, 46–53, DOI: 10.1016/j.jpdc.2018.08.010.
  128. Shi Y., Fernando B., Hartley R., Action Anticipation with RBF kernelized Feature Mapping RNN, [In:] Proceedings of the European Conference on Computer Vision (ECCV), 2018, 301–317.
  129. Jaouedi N., Perales F.J., Buades J.M., Boujnah N., Bouhlel M.S., Prediction of Human Activities Based on a New Structure of Skeleton Features and Deep Learning Model, „Sensors”, Vol. 20, No. 17, 2020, DOI: 10.3390/s20174944.
  130. Fan Y., Wen G., Li D., Qiu S., Levine M.D., Early Event Detection Based on Dynamic Images of Surveillance Videos, „Journal of Visual Communication and Image Representation”, Vol. 51, 2018, 70–75, DOI: 10.1016/j.jvcir.2018.01.002.
  131. Kong Y., Fu Y., Max-Margin Action Prediction Machine, „IEEE Transactions on Pattern Analysis and Machine Intelligence”, Vol. 38, No. 9, 2016, 1844–1858, DOI: 10.1109/TPAMI.2015.2491928.
  132. Jalal A., Quaid M.A.K., Hasan A.S., Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments, [In:] 2018 International Conference on Frontiers of Information Technology (FIT), 2018, 105–110, DOI: 10.1109/FIT.2018.00026.
  133. Bütepage J., Kjellström H., Kragic D., A Probabilistic Semi-Supervised Approach to Multi-Task Human Activity Modeling, arXiv preprint arXiv:1809.08875, 2018.
  134. Qi S., Huang S., Wei P., Zhu S.-C., Predicting Human Activities using Stochastic Grammar, [In:] Proceedings of the IEEE International Conference on Computer Vision, 2017, 1164–1172.
  135. Jin Y., Zhu L., Mu Y., Complex Video Action Reasoning via Learnable Markov Logic Network, [In:] Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 3242–3251.
  136. Manousaki V., Papoutsakis K., Argyros A., Graphing the Future: Activity and Next Active Object Prediction using Graph-based Activity Representations, arXiv preprint arXiv:2209.05194, 2022.
  137. Li S., Li K., Fu Y., Early Recognition of 3D Human Actions, „ACM Transactions on Multimedia Computing, Communications, and Applications”, Vol. 14, No. 1s, 2018, DOI: 10.1145/3131344.
  138. Ellis K., Kerr J., Godbole S., Lanckriet G., Wing D., Marshall S., A Random Forest Classifier for the Prediction of Energy Expenditure and Type of Physical Activity from Wrist and Hip Accelerometers, „Physiological Measurement”, Vol. 35, No. 11, 2014, DOI: 10.1088/0967-3334/35/11/2191.
  139. Sánchez V.G., Skeie N.-O., Decision trees for human activity recognition modelling in smart house environments, „Simulation Notes Europe”, Vol. 28, 2018, 177–184, DOI: 10.11128/sne.28.tn.10447.