Besides traditional RGB and infrared cameras, various new sensors have been invented and widely adopted for imaging in the past decade, such as depth sensors, event cameras, thermal infrared cameras, and Doppler cameras. Merging information from multiple sensors provides richer visual information for applications in surveillance, smart homes, and human-machine interaction, and the accompanying vision and recognition problems have attracted great interest from researchers. Although significant progress has been achieved in recent years using new sensors (e.g., Kinect sensors are widely used for human action recognition), their applications in computer vision and pattern recognition tasks still need further exploration. These sensors provide different types and viewpoints of visual information; however, many challenges remain, such as how to select and fuse multi-sensor information. The goal of this workshop is to disseminate recent research progress on a focused platform, discuss how multi-sensor based methods can benefit the field of action and gesture recognition, and explore potential collaborations.

Call for Papers

Papers addressing action and gesture recognition with multiple sensors and related topics are invited. Both theoretical and application results are sought. The topics include, but are not limited to:

  • Action recognition from multiple sensors
  • Gesture recognition from multiple sensors
  • Multi-sensor feature extraction
  • Multi-sensor feature evaluation
  • Multi-sensor feature selection
  • Multi-sensor feature fusion
  • 2D/3D Human pose estimation
  • 2D/3D Hand pose estimation
  • Facial activity analysis
  • Action/event detection for health care
  • Action/event detection for human-machine interaction
  • Transfer learning among data from different sensors
  • Knowledge distillation between models learnt from different sensor data
  • Relationship modeling among data from different sensors
  • Cascade feature and multilayer classifier for action and gesture recognition
  • Multi-sensor based depth images and dense trajectories
  • Improved combined feature representation for action and gesture recognition
  • Multi-sensor collaboration in smart lighting control systems


Workshop proceedings will be published after the conference in the CCIS series of Springer through the ACPR organizers (publication chair).
NOTE: Workshop registration is included in the conference registration, i.e., all registered ACPR participants may attend any of the workshops in addition to all sessions of the main conference.

Important Dates

Paper Submission Deadline: September 10, 2019 (No Extension)
Notification to Authors: September 25, 2019
Camera-Ready Deadline: October 1, 2019
Workshop Date: November 26, 2019


Workshop Program

09:00-09:05  Opening
09:05-10:30  Invited talks
    Plenary speaker: Shoushun Chen (Nanyang Technological University, Singapore)
        Title: Computer Vision on CeleX Event Camera
    Invited speaker: Shizheng Wang (Institute of Microelectronics, Chinese Academy of Sciences, China)
        Title: Dynamic Motion Analysis using Event Flow
    Invited speaker: Andreas Kempa-Liehr (The University of Auckland, New Zealand)
        Title: Feature engineering workflow for activity recognition from synchronized inertial measurement units
    Invited speaker: Zhigang Tu (Wuhan University, China)
        Title: Human action recognition with different visual cues
    Invited speaker: Yang Xiao (Huazhong University of Science and Technology, China)
        Title: Towards Real-time Eyeblink Detection in The Wild: Dataset, Theory and Practices
10:30-11:00  Coffee break
11:00-12:00  Oral presentations
    1. Learning Spatiotemporal Representation Based on 3D Autoencoder for Anomaly Detection
       Yunpeng Chang, Bin Luo, Zhigang Tu, Qianqing Qin
    2. Multi-view discriminant analysis for dynamic hand gesture recognition
       Huong Giang Doan, Thanh-Hai Tran, Hai Vu, Thi-Lan Le, VT Nguyen, Sang Viet Dinh, Thi-Oanh Nguyen, Thi Thuy Nguyen, Cuong Duy Nguyen
    3. Human Action Recognition Based on Dual Correlation Network
       Fei Han, Dejun Zhang, Yiqi Wu, Zirui Qiu, Longyong Wu, Weilun Huang
    4. Feature engineering workflow for activity recognition from synchronized inertial measurement units
       Andreas W. Kempa-Liehr, Jonty Oram, Andrew Wong, Mark Finch, Thor Besier
12:00-12:20  Panel Discussion

Accepted Papers



Organizers

Zhigang Tu, Ph.D.
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Jianyu Yang, Ph.D.
Soochow University, Suzhou, China
JingJing Meng, Ph.D.
Computer Science and Engineering Department
University at Buffalo, State University of New York (SUNY), USA

Technical Program Committee (TPC)


This workshop is proudly sponsored by RapidV++ (Shenzhen), MEGVII, CIT-China, and DeepMicro.