
International Journal of Advanced Technology and Engineering Exploration (IJATEE)

ISSN (Print): 2394-5443    ISSN (Online): 2394-7454
Volume-5 Issue-47 October-2018
DOI: 10.19101/IJATEE.2018.547006
Paper Title: A survey on human activity prediction techniques
Author Name: Manju D. and Radha V.
Abstract:

Nowadays, to prevent criminal behavior and traffic accidents, video surveillance systems have become increasingly popular in both outdoor and indoor places such as offices, department stores, public spaces, railway stations, and airports. Consequently, there is a great demand for intelligent systems that can detect abnormal events in videos. In surveillance tasks, people are generally the main objects of interest. Although human action recognition is an emerging topic in computer vision, abnormal event detection has recently been attracting more research attention. Abnormal behaviors are those that deviate from normal behavior patterns. Various techniques and approaches have been proposed to ensure human safety. This paper presents a survey of human activity prediction techniques for video surveillance systems. First, the techniques developed by previous researchers are studied in detail. Then, the limitations of those techniques are addressed to suggest further improvements to human activity prediction in videos using advanced techniques. The efficiency of the different human activity prediction techniques is assessed by comparing their parameters, and the comparison results indicate the best-performing technique among them.
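To make the notion of detecting abnormal behavior as a deviation from normal behavior concrete, the following is a minimal, illustrative sketch rather than a method taken from any of the surveyed papers: it fits a Gaussian model to feature vectors extracted from normal activity and flags a frame as abnormal when its Mahalanobis distance from that model exceeds a threshold. The per-frame feature vectors, the threshold value, and the synthetic data are assumptions made purely for demonstration.

import numpy as np

def fit_normal_model(normal_features):
    # Estimate the mean and (regularized) inverse covariance of feature
    # vectors extracted from normal activity.
    mean = normal_features.mean(axis=0)
    cov = np.cov(normal_features, rowvar=False) + 1e-6 * np.eye(normal_features.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis(x, mean, cov_inv):
    # Distance of one frame's feature vector from the normal-activity model.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def is_abnormal(x, mean, cov_inv, threshold=3.0):
    # Flag the frame as abnormal when it lies far from normal behavior.
    return mahalanobis(x, mean, cov_inv) > threshold

# Demonstration on synthetic per-frame motion descriptors (assumed data).
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
mean, cov_inv = fit_normal_model(normal)
print(is_abnormal(np.array([0.1, -0.2, 0.3, 0.0]), mean, cov_inv))   # typical frame -> False
print(is_abnormal(np.array([6.0, 5.5, -6.0, 7.0]), mean, cov_inv))   # irregular frame -> True

Many of the surveyed approaches replace such hand-crafted features and the simple Gaussian model with learned spatio-temporal representations, but the same deviation-from-normal idea underlies abnormal event detection.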

Keywords: Video surveillance system, Human activity prediction, Human behavior detection, Abnormal behavior detection, Human action recognition.
Cite this article: Manju D. and Radha V., "A survey on human activity prediction techniques", International Journal of Advanced Technology and Engineering Exploration (IJATEE), Volume-5, Issue-47, October-2018, pp. 400-406. DOI: 10.19101/IJATEE.2018.547006
References:
[1]Tsakanikas V, Dagiuklas T. Video surveillance systems-current status and future trends. Computers & Electrical Engineering. 2018; 70:736-53.
[2]Hu W, Tan T, Wang L, Maybank S. A survey on visual surveillance of object motion and behaviors. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 2004; 34(3):334-52.
[3]Taha A, Zayed HH, Khalifa ME, El-sayed M. Exploring behavior analysis in video surveillance applications. International Journal of Computer Applications. 2014; 93(14):22-32.
[4]Wang H, Yuan C, Shen J, Yang W, Ling H. Action unit detection and key frame selection for human activity prediction. Neurocomputing. 2018; 318:109-19.
[5]Wang H, Yang W, Yuan C, Ling H, Hu W. Human activity prediction using temporally-weighted generalized time warping. Neurocomputing. 2017; 225:139-47.
[6]Wang Z, Jin J, Liu T, Liu S, Zhang J, Chen S, et al. Understanding human activities in videos: a joint action and interaction learning approach. Neurocomputing. 2018; 321:216-26.
[7]Liu X, You T, Ma X, Kuang H. An optimization model for human activity recognition inspired by information on human-object interaction. In international conference on measuring technology and mechatronics automation 2018 (pp. 519-23). IEEE.
[8]Li X, Chuah MC. ReHAR: robust and efficient human activity recognition. In winter conference on applications of computer vision 2018 (pp. 362-71). IEEE.
[9]Putra PU, Shima K, Shimatani K. Markerless human activity recognition method based on deep neural network model using multiple cameras. In international conference on control, decision and information technologies 2018 (pp. 13-8). IEEE.
[10]Ziaeefard M, Bergevin R, Lalonde JF. Deep uncertainty interpretation in dyadic human activity prediction. In international conference on machine learning and applications 2017 (pp. 822-5). IEEE.
[11]Wang L, Zhao X, Si Y, Cao L, Liu Y. Context-associative hierarchical memory model for human activity recognition and prediction. IEEE Transactions on Multimedia. 2017; 19(3):646-59.
[12]Kong Y, Gao S, Sun B, Fu Y. Action prediction from videos via memorizing hard-to-predict samples. In AAAI 2018.
[13]Lohit S, Bansal A, Shroff N, Pillai J, Turaga P, Chellappa R. Predicting dynamical evolution of human activities from a single image. In proceedings of the conference on computer vision and pattern recognition workshops 2018 (pp. 496-505). IEEE.
[14]Alvar M, Torsello A, Sanchez-Miralles A, Armingol JM. Abnormal behavior detection using dominant sets. Machine Vision and Applications. 2014; 25(5):1351-68.
[15]Li W, Mahadevan V, Vasconcelos N. Anomaly detection and localization in crowded scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2014; 36(1):18-32.
[16]Benezeth Y, Jodoin PM, Saligrama V. Abnormality detection using low-level co-occurring events. Pattern Recognition Letters. 2011; 32(3):423-31.
[17]Chen DY, Huang PC. Motion-based unusual event detection in human crowds. Journal of Visual Communication and Image Representation. 2011; 22(2):178-86.
[18]Cong Y, Yuan J, Liu J. Abnormal event detection in crowded scenes using sparse representation. Pattern Recognition. 2013; 46(7):1851-64.
[19]Gu X, Cui J, Zhu Q. Abnormal crowd behavior detection by using the particle entropy. Optik-International Journal for Light and Electron Optics. 2014; 125(14):3428-33.
[20]Li K, Fu Y. Prediction of human activity by discovering temporal sequence patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2014; 36(8):1644-57.