The SDL server is currently powered off due to COVID-19.
This page is currently static and will not receive updates until the system has been re-enabled. The public-facing NIST web servers (including those hosting this page) might also be shut down in the future if required.
|Schedule Item Type||Time||Session/Title||Presenters|
|Keynote/Invited Talk||09:05||Invited Talk: FBI Uses of Video Analytics||Richard Vorder Bruegge (Federal Bureau of Investigation)|
|Oral Paper||09:45||Activity Detection in Untrimmed Videos Using Chunk-based Classifiers||Joshua Gleason; Steven Schwarcz; Rajeev Ranjan; Carlos Castillo; Jun-Cheng Chen; Rama Chellappa (University of Maryland)|
|Oral Paper||10:10||Context Sensitivity of Spatio-Temporal Activity Detection using Hierarchical Deep Neural Networks in Extended Videos||Felix Hertlein; David Münch; Michael Arens (Fraunhofer IOSB)|
|Session Header||ActEV Challenge Results & Best Performer Presentations (1055-1210)|
|Oral Paper||10:55||ActEV SDL & TRECVID Challenge Results||Yooyoung Lee; Jonathan Fiscus; Afzal Godil; Andrew Delgado; Eliot Godard; Edmond J. Golden; Maxime Hubert; Lukas Diduch (NIST)|
|Oral Paper||11:20||Faster than Real-time Detection of Activities in Untrimmed Videos (SDL Leaderboard)||Joshua Gleason; Carlos Castillo; Rama Chellappa (University of Maryland)|
|Oral Paper||11:45||Argus: Efficient Parallel Activity Detection System for Extended Video Analysis (TRECVID Leaderboard)||Wenhe Liu; Xiaojun Chang; Guoliang Kang; Po-Yao Huang; Lijun Yu; Yijun Qian; Jing Wen; Alexander Hauptmann (Carnegie Mellon University, Monash University)|
|Session Header||Activity Detection/Recognition Session I (1210-1310)|
|Oral Paper||12:10||Adaptive Feature Aggregation for Video Object Detection||Yijun Qian; Lijun Yu; Wenhe Liu; Guoliang Kang; Alexander Hauptmann (Carnegie Mellon University)|
|Keynote/Invited Talk||12:35||Invited Talk: The IARPA DIVA Program||Jeff Alstott (IARPA)|
|Break||13:10||Lunch & Poster Setup|
|Session Header||Activity Detection/Recognition Session II (1410-1515)|
|Keynote/Invited Talk||14:10||Invited Talk: Two Decades of Action for Action Recognition and Detection: From Shallow to Deep Representations||Rama Chellappa (University of Maryland)|
|Oral Paper||14:50||Real-Time Activity Detection of Human Movement in Videos via Smartphone Based on Synthetic Training Data||Rico Thomanek; Tony Rolletschke; Benny Platte; Claudia Hösel; Christian Roschke; Robert Manthey; Manuel Heinzig; Richard Vogel; Frank Zimmer; Matthias Vodel; Maximilian Eibl; Marc Ritter (University of Applied Sciences Mittweida, TU Chemnitz)|
|Other||15:15||ActEV Challenges and Datasets Group Discussion|
|Break||15:30||Afternoon Break + Poster Session|
|Session Header||Poster Session (1530-1645)|
|Poster Presentation||-||Poster Abstract: Efficient Parallel Activities Detection System for Extended Video Analysis||Wenhe Liu; Guoliang Kang; Po-Yao Huang; Lijun Yu; Yijun Qian; Jing Wen; Alexander Hauptmann (Carnegie Mellon University)|
|Poster Presentation||-||Poster Abstract: A Spatio-Temporal Activity Detection Framework||Mandis Beigi; Lisa Brown; Quanfu Fan; John Henning; Chung-Ching Lin; Yue Meng; Rameswar Panda; Sharath Pankanti; Honghui Shi; Rogerio Feris (IBM Research)|
|Poster Presentation||-||Summary of 2019 Activity Detection in Extended Videos Prize Challenge||Yooyoung Lee; Jonathan Fiscus; Afzal Godil; Andrew Delgado; Edmond J. Golden III; Maxime Hubert; Lukas Diduch (NIST)|
|Poster Presentation||-||Boosted Kernelized Correlation Filters for Event-based Face Detection||Bharath Ramesh (National University of Singapore); Hong Yang (NUS)|
|Poster Presentation||-||Exploring Techniques to Improve Activity Recognition using Human Pose Skeletons||Bharath Raj Nagoor Kani; Anand Subramanian; Kashyap Ravichandran; N. Venkateswaran (SSN College of Engineering)|
The 1st International Workshop on Human Activity Detection in multi-camera, Continuous, long-duration Video (HADCV'19) was successfully organized in conjunction with WACV 2019, and the results of the ActEV 2018 challenges were presented there. Since then, we have been running ActEV challenges on larger and more challenging datasets with more activities. The aims of this workshop are to present the research findings of the current ActEV challenges and to solicit papers on topics related to activity detection. The workshop will provide a platform for researchers to share research experiences and foster collaboration. Two concurrent challenges (https://actev.nist.gov) are currently running: the ActEV Sequestered Data Leaderboard, based on the MEVA dataset, and the self-reported ActEV TRECVID leaderboard, based on the VIRAT dataset.
Recently, larger visual datasets and deep learning have revolutionized the computer vision field, contributing to significant advances in the performance of activity detection and classification. However, activity detection research has focused largely on near-field and social media video, while application to wide field-of-view public safety video has not yielded satisfactory results. Particular challenges for public safety video include long time periods with no activities, the presence of multiple simultaneous activities in different spatial regions of the video, and the occurrence of many activities far from the video sensor, resulting in low resolution. The VIRAT and MEVA datasets used for the ActEV evaluations are far more closely aligned with real-world public safety ground video analytics.
This workshop aims to bring all the stakeholders (activity detection, object detection and tracking, pose estimation, machine learning, etc.) together to help advance the state-of-the-art in human activity detection technology in multi-camera video streaming environments. The primary focus will be on activity detection algorithms, performance evaluation and characterization, and large dataset collections for activity detection. Research findings from the ActEV challenges will be presented. In addition, submissions are invited from the research community for unpublished research papers.
As organizers of the workshop we are looking forward to contributions in these and related areas.
A. Godil, J. Fiscus, A. Hoogs, R. Meth
Accepted papers will be allocated eight pages in the proceedings; that is, a paper may be up to eight pages plus references. Manuscripts should be submitted in PDF format and should follow the requirements of the IEEE WACV paper format. All work submitted to WACV workshops is considered confidential until the papers appear. Accepted papers will be included in the Proceedings of IEEE WACV 2020 & Workshops and will be sent for inclusion in the IEEE Xplore digital library.
For paper preparation guidelines and the author kit, see the WACV'20 Submissions page.
You can register for the HADCV'20 workshop at the WACV'20 registration page or by following this link: WACV2020 Registration.