ActEV: Activities in Extended Video


Datasets

Framework

The DIVA Framework is a software framework designed to provide an architecture and a set of software modules that facilitate the development of activity recognition analytics. The Framework is developed as a fully open source project on GitHub. It is based on KWIVER, an open source framework designed for building complex computer vision systems. The following links will help you learn more about KWIVER:
  • KWIVER Github Repository: This is the main KWIVER site; all development of the framework happens here.
  • KWIVER Issue Tracker: Submit bug reports or feature requests for KWIVER here. If there is any question about whether your issue belongs in the KWIVER or the DIVA framework issue tracker, submit it to the DIVA tracker and we'll sort it out.
  • KWIVER Main Documentation Page: The source for the KWIVER documentation is maintained in the GitHub repository using Sphinx; a built version is maintained on ReadTheDocs at this link. After reading the Introduction, good places to get started are the Arrows and Sprokit sections, both of which are used by the DIVA framework.

Baseline Algorithms

KITWARE has adapted two "baseline" activity recognition algorithms to work within the DIVA Framework.

Annotation Tools

Contact Us

For information on data, evaluation code, etc., please email: actev-nist@nist.gov

For ActEV evaluation discussion, please visit our Google Group: https://groups.google.com/a/list.nist.gov/forum/#!forum/trecvid.actev

Activity Examples

An ActEV activity is defined to be “one or more people performing a specified movement or interacting with an object or group of objects”. Activities are annotated by humans using a set of annotation guidelines that specify how to perform the annotation and the criteria to determine if the activity occurred. Each activity is formally defined by five elements:

  • Activity Name - A mnemonic handle for the activity
  • Activity Description - Textual description of the activity
  • Begin time rule definition - The specification of what determines the beginning time of the activity
  • End time rule definition - The specification of what determines the ending time of the activity
  • Required object type list - The list of objects systems are expected to identify for the activity. Note: this aspect of an activity is not addressed by ActEV-PC.
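
For illustration, these five elements map naturally onto a small record type. The sketch below is ours, not an official ActEV schema; the type and field names are hypothetical, and the values are taken from the Closing example that follows.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActivityDefinition:
    """The five elements that formally define an ActEV activity."""
    name: str                    # mnemonic handle for the activity
    description: str             # textual description of the activity
    begin_rule: str              # what determines the activity's begin time
    end_rule: str                # what determines the activity's end time
    required_objects: List[str]  # object types systems must identify

# Hypothetical encoding of the "Closing" activity described below.
closing = ActivityDefinition(
    name="Closing",
    description="A person closing the door to a vehicle or facility.",
    begin_rule="The event begins 1 s before the door starts to move.",
    end_rule="The event ends after the door stops moving.",
    required_objects=["Person", "Door or Vehicle"],
)
```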

For example:


Closing
  • Description: A person closing the door to a vehicle or facility.
  • Start: The event begins 1 s before the door starts to move.
  • End: The event ends after the door stops moving. A person closing a car door from within the car counts as a Closing event if the person is still visible inside the car; if the person is not visible once they are in the car, the closing should not be annotated as an event.
  • Objects associated with the activity: Person; and Door or Vehicle


Vehicle_turning_left
  • Description: Whether a vehicle is turning left or right is determined from the point of view of the vehicle's driver. The vehicle may not stop for more than 10 s during the turn.
  • Start: Annotation begins 1 s before vehicle has noticeably changed direction.
  • End: Annotation ends 1 s after the vehicle is no longer changing direction and linear motion has resumed. Note: This event is determined after a reasonable interpretation of the video.
  • Objects associated with the activity: Vehicle


Loading
  • Description: An object moving from person to vehicle.
  • Start: The event begins 2 s before the cargo to be loaded is extended toward the vehicle (i.e., before a person’s posture changes from one of “carrying” to one of “loading”).
  • End: The event ends after the cargo is placed into the vehicle and the person-cargo contact is lost. In the event of occlusion, it ends when the loss of contact is visible.
  • Objects associated with the activity: Person; and Vehicle


The ActEV evaluation will continue to add, modify, and exclude activities each evaluation cycle. The table below provides a historical list of activities and the objects involved for each ActEV Evaluation. In this table, the expected object types are P={Person} and V={Construction_Vehicle, Vehicle}.

Activity Name (Objects)             Summer 18 ActEV   Fall 18 ActEV    Winter 18 ActEV
                                    Leaderboard       Self-Reported    Prize Challenge
Closing (P, V) or (P)               X                 X                X
Closing_trunk (P, V)                X                 X                X
Entering (P, V) or (P)              X                 X                X
Exiting (P, V) or (P)               X                 X                X
Interacts (P, V)                                      X
Loading (P, V)                      X                 X                X
Open_Trunk (P, V)                   X                 X                X
Opening (P, V) or (P)               X                 X                X
Transport_HeavyCarry (P, V)         X                 X                X
Unloading (P, V)                    X                 X                X
Vehicle_turning_left (V)            X                 X                X
Vehicle_turning_right (V)           X                 X                X
Vehicle_u_turn (V)                  X                 X                X
Pull (P)                            X                 X                X
Riding (P)                          X                 X                X
Talking (P)                         X                 X                X
activity_carrying (P)               X                 X                X
specialized_talking_phone (P)       X                 X                X
specialized_texting_phone (P)       X                 X                X

(No objects were required for the Winter 18 ActEV Prize Challenge.)

Updates
Of the 7 teams invited to take part in the ActEV-PC Independent Evaluation, the winning teams, together with NIST, presented at the ActivityNet Workshop at CVPR 2019 on June 17.

Summary
ActEV is a series of evaluations to accelerate development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. ActEV is an extension of the annual TRECVID Surveillance Event Detection (SED) evaluation in which systems also detect and track the objects involved in the activities. Each evaluation will challenge systems with new data, new system requirements, and/or new activities.

ActEV began with the Summer 2018 Blind and Leaderboard evaluations covering 12 activities. The summer evaluation was followed by the ongoing Fall ActEV Self-Reported Evaluation, which ends Dec 4, 2018, and includes 18 activities. Beginning Dec 12, 2018, the Winter 2018 $50K ActEV Prize Challenge (ActEV-PC) kicks off with the opening of the Leaderboard Evaluation Phase.
What is Activity Detection in Extended Videos?
An ActEV activity is defined to be “one or more people performing a specified movement or interacting with an object or group of objects”. Activity detection technologies process extended video streams, such as those from a CCTV camera, and automatically detect all instances of the activity by: (1) identifying the type of activity, (2) producing a confidence score indicating the presence of the instance, (3) temporally localizing the instance by indicating its begin and end times, and (4) optionally, detecting and tracking the objects (people, vehicles, other objects) involved in the activity.
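
As a rough sketch of outputs (1) through (4), a single system-reported instance might be represented as below. The field names and filename are hypothetical, chosen for readability; the authoritative output format for each evaluation is specified in its evaluation plan.

```python
# Illustrative only: field names are hypothetical, not the official schema.
detection = {
    "activity": "Closing",                 # (1) type of activity
    "presenceConf": 0.87,                  # (2) confidence the instance is present
    "localization": {                      # (3) temporal localization
        "camera_0001.mp4": {"begin_frame": 1403, "end_frame": 1491},
    },
    "objects": [                           # (4) optional object detection/tracking
        {"objectType": "person", "trackID": 7},
        {"objectType": "vehicle", "trackID": 2},
    ],
}
```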

Click on the tabs above to see video examples, activity examples, and evaluation tasks.
Who Can Participate?
NIST invites all organizations, particularly universities and corporations, to participate in the ActEV evaluations. Participation is free. NIST does not provide funds to participants; however, NIST will administer several prize challenges related to ActEV.
How To Participate?
Participation is easy: go to the Current Evaluations tab above for instructions.
Data
Each ActEV evaluation uses a new video data set, changes the evaluation tasks, or adds/changes activities. The data will be provided in MPEG-4 and AVI formatted files. See the individual evaluation pages for details.
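
For reference, a minimal way to iterate over the frames of an MPEG-4 or AVI file is sketched below using OpenCV; the filename is hypothetical.

```python
import cv2  # OpenCV; any decoder that handles MPEG-4 and AVI will do

def iter_frames(path):
    """Yield decoded frames from a video file, one numpy array per frame."""
    cap = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of stream or decode failure
                break
            yield frame
    finally:
        cap.release()

# Hypothetical filename; see the individual evaluation pages for the real data.
for frame in iter_frames("example_clip.mp4"):
    pass  # run your activity detector on `frame` here
```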
Evaluation Metrics and Tools
The main scoring metrics will be based on detection, temporal localization, and spatio-temporal localization using evaluation measures that include the probability of missed detection and rate of false alarm. See details in the evaluation plans of each evaluation.
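
In their simplest form, the two headline measures are ratios, as in the sketch below; the exact definitions, units, and operating points used for scoring are given in each evaluation plan.

```python
def p_miss(num_missed, num_true_instances):
    """Probability of missed detection at a given decision threshold."""
    return num_missed / num_true_instances

def rate_fa(num_false_alarms, duration_minutes):
    """Rate of false alarm: false alarms per unit of video time."""
    return num_false_alarms / duration_minutes

# Example: 12 of 60 reference instances missed, 9 false alarms in 180 minutes.
assert p_miss(12, 60) == 0.2
assert rate_fa(9, 180) == 0.05  # false alarms per minute
```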

NIST maintains the ActEV scoring software in the "Scoring software for the Activities in Extended Video (ActEV) evaluation" GitHub repository.


Current Evaluation

ActEV: Video Examples
Below you will find four example videos from our data sets: two example views each of an indoor and an outdoor location.
ActEV Evaluation Tasks
Activity detection has been researched for many years and remains an unsolved computer vision challenge that requires many capabilities beyond the current state of the art. The ActEV series supports several evaluation tasks, each escalating the difficulty by requiring more specific information from the system. Presently, three evaluation tasks are defined: (1) Activity Detection (AD), (2) Activity and Object Detection (AOD), and (3) Activity and Object Detection and Tracking (AODT). Each task is summarized below. For a full description of the evaluation tasks, read the Evaluation Plan for each specific evaluation.
Activity Detection (AD)

For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the temporal overlap with the reference instance must meet a minimum requirement.
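
One common way to quantify temporal overlap is one-dimensional intersection-over-union, sketched below. The actual congruence measure, alignment procedure, and threshold used for scoring are defined in the evaluation plan; the 0.2 in the comment is only a placeholder.

```python
def temporal_iou(sys_begin, sys_end, ref_begin, ref_end):
    """Intersection-over-union of two time intervals (frames or seconds)."""
    intersection = max(0.0, min(sys_end, ref_end) - max(sys_begin, ref_begin))
    union = (sys_end - sys_begin) + (ref_end - ref_begin) - intersection
    return intersection / union if union > 0 else 0.0

# A system instance might count as correct when, e.g., its IoU with a
# reference instance is at least some minimum such as 0.2.
```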
Activity and Object Detection (AOD)

For the Activity and Object Detection task, given a target activity, a system detects and temporally localizes all instances of the activity and spatially detects/localizes the people and/or objects associated with it. For a system-identified instance to be scored as correct, it must meet the temporal overlap criteria of the AD task and, in addition, meet the spatial overlap criteria for the identified objects during the activity instance.
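
Spatial overlap is likewise commonly measured with intersection-over-union of bounding boxes; a minimal sketch for axis-aligned boxes follows, with the same caveat that the evaluation plan defines the measure and threshold actually used.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    intersection = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0
```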
Activity Object Detection and Tracking (AODT)

For the Activity Object Detection and Tracking task, given a target activity, a system detects and temporally localizes all instances of the activity, spatio-temporally detects/localizes the people and/or objects associated with the target activity, and properly assigns IDs that reflect the roles the objects play in the activity. For a system-identified instance to be scored as correct, it must meet the temporal and spatio-temporal overlap criteria of the AOD task and correctly assign the IDs to the objects as described in the activity definition.
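
As a toy illustration of the final condition, the check below compares a system's role-to-track-ID assignment against the reference. The dictionary layout is hypothetical, not the evaluation's interchange format.

```python
def ids_assigned_correctly(sys_roles, ref_roles):
    """True when every reference role (e.g. 'person', 'vehicle') is filled
    by the same tracked object the system assigned to that role."""
    return all(sys_roles.get(role) == track_id
               for role, track_id in ref_roles.items())

# ids_assigned_correctly({"person": 7, "vehicle": 2},
#                        {"person": 7, "vehicle": 2})  -> True
```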