Graph-based approach for human action recognition using spatio-temporal features
Institution:1. School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, PR China;2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, PR China;1. School of Computer & Information, Hefei University of Technology, Hefei 230009, China;2. Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China;3. Institute of Health Sciences, Anhui University, Hefei, Anhui 230601, China
Abstract:Given the exponential growth of video data uploaded to websites such as YouTube, effective analysis of video actions has become essential. In this paper, we tackle the challenging problem of human action recognition in realistic video sequences. The proposed system combines the efficiency of the Bag-of-visual-Words strategy with the power of graphs for structural representation of features. It is built upon the widely used Space–Time Interest Point (STIP) local features, followed by a graph-based video representation that models the spatio-temporal relations among these features. Experiments are conducted on two challenging datasets, Hollywood2 and UCF YouTube Action, and the results demonstrate the effectiveness of the proposed method.
Keywords:Human action recognition; Spatio-temporal features; Graph-based video modeling; Bag-of-sub-Graphs; Frequent sub-graphs; Support Vector Machines; Spatio-temporal Interest Points; gSpan algorithm
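The pipeline outlined in the abstract (STIP local features, a spatio-temporal graph over them, and a Bag-of-sub-Graphs histogram fed to an SVM) can be sketched in simplified form. This is a toy illustration, not the paper's implementation: the interest points, their quantized visual-word labels, and the `radius` threshold are all hypothetical, and the 2-node edge patterns stand in for the larger frequent sub-graphs a real system would mine with gSpan.

```python
import math
from collections import Counter

# Hypothetical toy input: each interest point is (x, y, t, word_id),
# where word_id would come from quantizing HOG/HOF descriptors
# into a visual vocabulary (the Bag-of-visual-Words step).

def build_graph(stips, radius=2.0):
    """Connect interest points that lie close together in space-time."""
    edges = []
    for i in range(len(stips)):
        for j in range(i + 1, len(stips)):
            xi, yi, ti, _ = stips[i]
            xj, yj, tj, _ = stips[j]
            d = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (ti - tj) ** 2)
            if d <= radius:
                edges.append((i, j))
    return edges

def bag_of_subgraphs(stips, edges):
    """Histogram over labeled 2-node sub-graphs (edge patterns).
    A full system would instead count frequent sub-graphs mined by
    gSpan; the resulting histogram would then be classified by an SVM."""
    counts = Counter()
    for i, j in edges:
        wi, wj = stips[i][3], stips[j][3]
        counts[tuple(sorted((wi, wj)))] += 1
    return counts

# One toy "video": three nearby points and one distant outlier.
video = [(0, 0, 0, 1), (1, 0, 0, 2), (0, 1, 1, 1), (5, 5, 9, 3)]
edges = build_graph(video)
hist = bag_of_subgraphs(video, edges)
# The outlier point stays isolated; the three nearby points form
# a triangle, yielding hist = {(1, 2): 2, (1, 1): 1}.
```

The histogram plays the role of the video's fixed-length feature vector: videos of the same action class should share frequent sub-graph patterns, which is what makes a kernel-based classifier such as an SVM applicable on top.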
This article is indexed in ScienceDirect and other databases.