ActEV'21 Sequestered Data Leaderboard (SDL) Evaluation
Summary
The Activities in Extended Video (ActEV) series of evaluations is designed to accelerate development of robust, multi-camera, automatic activity detection systems in known/unknown facilities for forensic and real-time alerting applications.

The ActEV 2021 Sequestered Data Leaderboard (ActEV’21 SDL) includes two separate evaluations of real-time or faster activity detection systems:

  • A Known Facility (KF) evaluation with Electro-Optical (EO)/Infrared (IR) modalities
  • An Unknown Facility (UF) evaluation with Known/Surprise activities

Systems submitted to the SDL are expected to process video in real time or faster. Scores for systems operating slower than real time will be computed from the output generated before the real-time clock expires, as described in the Metrics section below.

What
Participants are invited to submit their runnable activity detection software using an ActEV Command Line Interface (CLI) submission. There are two leaderboards (KF and UF) with sub-leaderboards as follows:
  • KF: Known Facilities Leaderboards
    • KF Known Activities on EO
    • KF Known Activities on IR
  • UF: Unknown Facilities Leaderboards
    • UF Known Activities on EO
    • UF Surprise Activities on EO

Participants may submit systems that address only a single sub-leaderboard task as described in the evaluation plan below. NIST will evaluate system performance on sequestered data using NIST hardware and results will be posted to a public leaderboard.

Who
Everyone. Anyone who registers can submit to the evaluation server.
How
Register here and then follow the instructions on the Algorithm Submission tab above.

Systems must follow a NIST-defined CLI and automatically run on NIST’s servers, both of which are described in the instructions.

Evaluation Task
In the ActEV'21 SDL evaluation, there is one Activity Detection (AD) task which is defined as: given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal overlap must fall within a minimal requirement.
Data
The ActEV'21 SDL evaluation is based on the KF and UF datasets. The KF and UF data were collected and annotated for the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program.

You can download the public MEVA dataset for free at mevadata.org. We also provide annotations for 27 hours of MEVA data, and instructions on how to make and share activity annotations are at mevadata.org.

Metrics
Submitted systems are measured by the Probability of Missed Detection (Pmiss) and the Time-based False Alarm rate (TFA). Pmiss is the proportion of reference activity instances that the system fails to detect with sufficient temporal overlap (at least 1 second). TFA is the proportion of non-activity time during which the system reports an activity that is not actually occurring. Submitted systems are scored for Pmiss and TFA at multiple thresholds (based on the confidence scores produced by the systems), creating a detection error tradeoff (DET) curve.
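
As a concrete illustration of these two metrics, the following Python sketch computes Pmiss and TFA for one activity at a single confidence threshold. It is not the official scoring implementation (see the ActEV Scoring Software repository); the tuple-based data layout and the sampling-based TFA approximation are simplifying assumptions made only for this example.

def p_miss_and_tfa(refs, dets, threshold, video_duration, step=0.1):
    """refs: list of (start_sec, end_sec) reference instances for one activity.
    dets: list of (start_sec, end_sec, confidence) system detections.
    Returns (Pmiss, TFA) at the given confidence threshold."""
    kept = [(s, e) for s, e, c in dets if c >= threshold]

    def overlap(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    # Pmiss: fraction of reference instances with no kept detection that
    # overlaps them by at least 1 s (or 50% of the reference duration when
    # the instance is shorter than 1 s).
    missed = 0
    for ref in refs:
        duration = ref[1] - ref[0]
        required = 1.0 if duration >= 1.0 else 0.5 * duration
        if not any(overlap(ref, det) >= required for det in kept):
            missed += 1
    pmiss = missed / len(refs)

    # TFA: fraction of non-activity time during which the system fired,
    # approximated here by sampling the timeline on a fixed grid.
    false_alarm_time = non_activity_time = 0.0
    t = 0.0
    while t < video_duration:
        in_ref = any(s <= t < e for s, e in refs)
        in_det = any(s <= t < e for s, e in kept)
        if not in_ref:
            non_activity_time += step
            if in_det:
                false_alarm_time += step
        t += step
    tfa = false_alarm_time / non_activity_time if non_activity_time else 0.0
    return pmiss, tfa

# Toy example: at this threshold one of the two reference instances is missed.
refs = [(10.0, 14.0), (30.0, 30.8)]
dets = [(9.5, 12.0, 0.9), (50.0, 55.0, 0.4)]
print(p_miss_and_tfa(refs, dets, threshold=0.5, video_duration=60.0))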

The ActEV'21 SDL leaderboard will report scores reflecting how systems would perform in real time. If an SDL system takes longer than real time to process the test videos, NIST will score the system as if execution stopped at the point where the runtime exceeded real time. The leaderboard ranking of a system is based on the following scores (a sketch of a partial AUDC computation follows this list):
  • Time-limited partial AUDC (area under the DET curve) for the public leaderboard ranking
  • Time-limited mean Pmiss at TFA = 0.02 for IARPA DIVA program ranking
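
Assuming the partial AUDC is the area under the DET curve restricted to a low-TFA operating region and normalized by that range (the exact TFA limit and normalization are defined in the evaluation plan and the ActEV Scoring Software repository, not here), a Python sketch of the computation from swept DET points could look like the following; the limit value 0.02 and the Pmiss = 1 starting point at TFA = 0 are illustrative assumptions.

def partial_audc(det_points, tfa_limit=0.02):
    """det_points: list of (tfa, pmiss) pairs from a confidence-threshold sweep.
    Returns the normalized trapezoidal area under the DET curve for TFA in [0, tfa_limit]."""
    pts = sorted(det_points)             # order points by increasing TFA
    area, prev_t, prev_p = 0.0, 0.0, 1.0  # assume Pmiss = 1 at TFA = 0
    for t, p in pts:
        if t > tfa_limit:
            # Interpolate Pmiss at the TFA limit, then stop integrating.
            frac = (tfa_limit - prev_t) / (t - prev_t)
            p = prev_p + frac * (p - prev_p)
            t = tfa_limit
        area += 0.5 * (prev_p + p) * (t - prev_t)
        prev_t, prev_p = t, p
        if t >= tfa_limit:
            break
    if prev_t < tfa_limit:
        # The curve ended before the limit: extend it with the last Pmiss value.
        area += prev_p * (tfa_limit - prev_t)
    return area / tfa_limit              # normalize so the score lies in [0, 1]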

Please see details in the ActEV'21 SDL evaluation plan below or check out the ActEV Scoring Software GitHub repo.

Evaluation Plan
Contact

For ActEV Evaluation information, please email: actev-nist@nist.gov

For ActEV Evaluation Discussion, please visit our ActEV Slack.

News
May 28: The SDL Leaderboard will continue accepting submissions until November 2021.

ActEV'21 SDL Evaluation Schedule
CVPR'21 ActivityNet Guest Task ActEV SDL UF

February 10, 2021: CVPR'21 ActivityNet ActEV'21 SDL UF (with Known Activities) leaderboard opens
May 15, 2021: New deadline for ActEV SDL UF CLI submissions to be included in the CVPR'21 ActivityNet workshop.
May 13, 2021: Please continue submissions. We will also present the live leaderboard results during the workshop.
June 05, 2021: The top two teams on the ActEV'21 SDL UF (with Known Activities) leaderboard will be invited to give oral presentations at the CVPR'21 ActivityNet workshop, based on the CLI submission deadline (May 15, 2021).
June 13, 2021: Deadline to submit your video presentations (for top two teams)

Sep 21, 2020: ActEV'21 SDL UF (with Known Activities) leaderboard opens
Sep 25, 2020: ActEV'21 SDL KF with EO (Electro-Optical) and IR (Infrared) leaderboards open
Leaderboard Remains Open: The leaderboard will continue to remain open to allow participants to show continued progress on this challenging problem.
ActEV SDL Dataset

The ActEV SDL evaluation is based on the Known Facility (KF) and Unknown Facility (UF) datasets. The KF data was collected at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The KF dataset has two parts: (1) the public training and development data and (2) sequestered evaluation data used by NIST to test systems. The UF data is sequestered evaluation data that was collected at different sites and includes both known and surprise activities.

The KF and UF data were collected and annotated for the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program. A primary goal of the DIVA program is to support activity detection in multi-camera environments for both DIVA performers and the broader research community.

Training and Development Data

The public KF dataset has been released as the Multiview Extended Video with Activities (MEVA) dataset. As of December 2019, 328 hours of ground-camera data and 4.2 hours of Unmanned Aerial Vehicle video have been released. 28 hours of the ground-camera video have been annotated by the same team that annotated the ActEV test set. Additional annotations have been performed by the public and are also available in the annotation repository.

Sequestered Data

The KF test set is a 140-hour collection of videos that includes both EO and IR camera modalities, public cameras (video from the cameras and associated metadata are in the public training set), and non-public cameras (video is not provided on mevadata.org and camera parameters are only provided to the systems at test time). The KF leaderboard presents results on the full 140-hour collection, reporting separately for EO and IR data. Developers receive additional per-activity scores for EO-Set1 and IR-Set1, which are subsets of the full test sets; for example, EO-Set1 is a random 50% of the EO data from public cameras, and likewise for IR-Set1.

The UF test set is a large collection of videos exclusively in the EO spectrum (as of September 9, 2020). The UF leaderboard presents results separately for known and surprise activity types.

Data Download

You can download the public MEVA video and annotations dataset for free from the mevadata.org website (http://mevadata.org/).

Then complete these steps:

  • Get an up-to-date copy of the ActEV Data Repo via GIT. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/actev/actev-data-repo.git
    • Update: cd "Your_Directory_For_actev-data-repo"; git pull
  • Get an up-to-date copy of the MEVA Data Repo via GIT. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/meva/meva-data-repo
    • Update: cd "Your_Directory_For_meva-data-repo"; git pull
ActEV SDL Evaluation Task

In the ActEV'21 SDL evaluation, there is one Activity Detection (AD) task for detecting and localizing activities; given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal overlap must fall within a minimal requirement. The minimum temporal overlap with a single reference instance in the evaluation is 1 second. If the reference instance duration is less than 1 second, 50% of the reference duration is required as the minimum temporal overlap. The AD task applies to both Known facilities (KF) and Unknown facilities (UF) as well as both known and surprise activities.
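
The following Python sketch encodes the correctness rule stated above: a system-identified instance counts as correct if its activity label matches the reference and the temporal overlap is at least 1 second, or at least 50% of the reference duration when the reference instance is shorter than 1 second. The field names are illustrative, not the official scorer's data model.

def is_correct_detection(ref, det):
    """ref/det: dicts with 'activity', 'start_sec', 'end_sec'."""
    if det["activity"] != ref["activity"]:
        return False
    overlap = min(ref["end_sec"], det["end_sec"]) - max(ref["start_sec"], det["start_sec"])
    ref_duration = ref["end_sec"] - ref["start_sec"]
    required = 1.0 if ref_duration >= 1.0 else 0.5 * ref_duration
    return overlap >= required

# Example: a 0.8 s reference instance needs at least 0.4 s of overlap.
ref = {"activity": "person_opens_trunk", "start_sec": 12.0, "end_sec": 12.8}
det = {"activity": "person_opens_trunk", "start_sec": 12.3, "end_sec": 13.5}
print(is_correct_detection(ref, det))  # True: 0.5 s overlap >= 0.4 s required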

Facilities

Known Facilities

Systems will be tested on detecting Known Activities in both Electro-Optical (EO) and Infrared (IR) camera modalities, where each modality is scored individually on separate sub-leaderboards. Both the facility and the activities are known to the developers. Facility data is available at https://mevadata.org, including a site map with approximate camera locations and sample FOVs, camera models, a 3D site model, and additional metadata and site information. Sample representative video from the known facility is also provided, with over 20 hours of video annotated for leaderboard activities. All available metadata and site information may be used during system development.

Unknown Facilities

Systems will be tested on both Known and Surprise activities in EO video. Unlike the KF testing, no facility information will be provided to developers. A subset of the facility-defining metadata and the Surprise Activity definitions (textual definitions and at least one video-chip exemplar with localization annotations) will be provided at test time. Thus, systems must be able to build detectors on the fly. The Known Activities are the same as those used for KF testing. Systems will be provided a maximum of 10 hours to train SA detectors while executing on the NIST hardware.

Activities

Known Activities

For the Known Activities (KA) tests, developers are provided a list of activities in advance for use during system development (e.g., training); the system must automatically detect and localize all instances of those activities.

Detailed activity definitions are in the ActEV Annotation Definitions for MEVA Data document.

The names of the 37 Known Activities for ActEV’21 SDL are:


person_abandons_package person_loads_vehicle person_stands_up
person_closes_facility_door person_transfers_object person_talks_on_phone
person_closes_trunk person_opens_facility_door person_texts_on_phone
person_closes_vehicle_door person_opens_trunk person_steals_object
person_embraces_person person_opens_vehicle_door person_unloads_vehicle
person_enters_scene_through_structure person_talks_to_person vehicle_drops_off_person
person_enters_vehicle person_picks_up_object vehicle_picks_up_person
person_exits_scene_through_structure person_purchases vehicle_reverses
person_exits_vehicle person_reads_document vehicle_starts
hand_interacts_with_person person_rides_bicycle vehicle_stops
person_carries_heavy_object person_puts_down_object vehicle_turns_left
person_interacts_with_laptop person_sits_down vehicle_turns_right
vehicle_makes_u_turn

Surprise Activities

For the Surprise Activities (SA) tests, the pre-built system is provided a set of activities with training materials (text description and at least one exemplar video chip) during system test time (in execution) to automatically develop detection and localization models. Then the system must automatically detect and localize all instances of the activities. To facilitate activity training at test time, systems will be provided a maximum of 10 hours to train for SAs while executing on the NIST hardware.

Please click Surprise Activities to see how to make an SDL submission for SA-capable algorithms.

Algorithm Delivery for the SDL participants

System delivery to the leaderboard must be in a form compatible with the ActEV Command Line Interface (ActEV CLI) and submitted to NIST for testing. The command line interface implementation that you provide formalizes the entire process of evaluating a system by giving the evaluation team a means to: (1) download and install your software via a single URL, (2) verify that the delivery works and produces output that is “consistent” with the output you produce locally, and (3) process a large collection of video in a fault-tolerant, parallelizable manner.

To complete this task you will need the following items described in detail below:

  1. FAQ - Validation Phase Processing
  2. CLI Description
  3. The Abstract CLI Git Repository
  4. The CLI Implementation Primer
  5. The Validation Data Set
  6. Example CLI-Compliant Implementation
  7. NIST Independent Evaluation Infrastructure Specification
  8. SDL Submission Processing Pipeline

1. FAQ - Validation Phase Processing

The ActEV SDL - Validation Phase Processing

FAQ

2. CLI Description

The ActEV CLI description

3. The Abstract CLI GIT Repository

The Abstract CLI GIT repository. The repo contains documentation.

4. The CLI Implementation Primer

There are 6 steps to adapt your code to the CLI. The ActEV Evaluation CLI Programming Primer describes the steps to clone the Abstract CLI and begin adapting the code to your implementation.

5. The Validation Data Set

As mentioned above, the CLI is used to verify that the downloaded software is correctly installed and produces the same output at NIST as you produce locally. To do so, we provide a small validation data set (ActEV-Eval-CLI-Validation-Set3) as part of the ActEV SDL Dataset that will be processed both at your site and at NIST. Please use this data in Step 3 of the Primer described above.
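
One pre-submission sanity check, sketched below in Python, is to hash the output files from two independent local runs on the validation set (or from your run and the copy you package with the submission) and confirm they match. The directory names are hypothetical, and byte-identical equality may be stricter than the “consistent” output check NIST performs during CLI validation.

import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each relative file path under root to a SHA-256 digest."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

local = hash_tree("output/validation-run-1")      # hypothetical output directories
packaged = hash_tree("output/validation-run-2")
mismatched = {p for p in local.keys() | packaged.keys() if local.get(p) != packaged.get(p)}
print("consistent" if not mismatched else f"differs: {sorted(mismatched)}")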

6. Example CLI-Compliant Implementation

The link below provides CLI implementations for the leaderboard baseline algorithm for Known Activities only:

7. NIST Independent Evaluation Infrastructure Specification

Hardware specifications for ActEV’21 SDL evaluations are as follows.

1080 Hardware for Known Facility Evaluation:

  • 32 vCPUs at 2.2 GHz (16 hyperthreaded)
  • 128 GB memory
  • Root disk: 40 GB
  • Ephemeral disk: 256 GB
  • GPU: 4x 1080 Ti
  • OS: Ubuntu 18.04

2080 Hardware for Unknown Facility Evaluation:

  • 32 vCPUs at 2.2 GHz (16 hyperthreaded)
  • 128 GB memory
  • Root disk: 128 GB
  • Ephemeral disk: 256 GB
  • GPU: 4x 2080 Ti
  • OS: Ubuntu 18.04

8. SDL Submission Processing Pipeline

There are three stages for the submission processing pipeline. They are:

  • Validation: during this stage, NIST runs through each of the ActEV CLI commands to install your system, runs the system on the validation set, compares the produced output with the validation output you provided, and finally takes a snapshot of your system to reuse during execution.
  • Execution: during this stage, we use the snapshot to process the sequestered data. Presently, we can divide the sequestered data into 2-hour sub-parts or process the whole dataset; each sub-part nominally consists of 24 five-minute files. Only ActEV'21 SDL data is processed through your system. There is a time limit per part, and if a part fails to be processed, we retry it once (see the sketch after this list).
  • Scoring: after all the parts have been processed, the outputs are merged and scored.
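
A rough Python sketch of the execution stage described above, under the stated assumptions: the video is split into sub-parts of about 24 five-minute files, each sub-part is processed under a time limit, and a failed sub-part is retried once. The process_sub_part callable is a hypothetical stand-in for the system's CLI calls.

def run_execution_stage(video_files, process_sub_part, files_per_part=24, max_attempts=2):
    """Process video files in sub-parts, retrying each failed sub-part once."""
    outputs, failed = [], []
    parts = [video_files[i:i + files_per_part]
             for i in range(0, len(video_files), files_per_part)]
    for part in parts:
        for attempt in range(max_attempts):
            try:
                outputs.append(process_sub_part(part))
                break
            except Exception:
                if attempt == max_attempts - 1:
                    failed.append(part)
    return outputs, failed   # merged outputs are scored after all parts finish
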
SDL Submission for Surprise Activities

The content below is a jumping-off point for developing an ActEV CLI (Command Line Interface) compliant system that can train on Surprise Activities (SA) and then be tested for surprise-activity detection on sequestered data. This document describes the steps to obtain the ActEV-provided data resources and information sources so that you have all the critical information.


  • Step 1: Obtain the ActEV-provided data resources needed to develop an SA-capable system by updating the actev-data-repo. The commands to update the repo are as follows; these steps assume you have already completed the setup steps in the top-level "README.md":
    • cd actev-data-repo
    • git pull
    • python3 scripts/actev-corpora-maint.py --operation download --corpus MEVA

  • Step 2: Review the ActEV-provided data resources. There are three directories within actev-data-repo relevant to SA:
    • partitions/ActEV-Eval-CLI-Validation-Set7: This is a new validation data set for CLI testing and must be used for Surprise Activity Enabled systems. This is a larger data set (5 hours) than other validation sets so that SA can be reasonably tested both during development and during validation on SDL.
    • partitions/MEVA-Public-SimUF-KA-SimSA5-20201023: This is a revision of the MEVA-Public-20200288 data partition to support internal development of a Surprise Activity-enabled system. The data partition uses the MEVA Public data found on mevadata.org and in the meva-data-repo. Consult the README.md for details.
    • corpora/MEVA/SimSA: This directory contains derived data sources for the Surprise Activities. This directory is the source for the two partitions above and demonstrates how Kitware-produced MEVA annotations are transformed to be simulated Surprise Activities.

  • Step 3: Familiarize yourself with the Surprise Activity (SA) evaluation task by reviewing the evaluation plan, in particular Sections 1 (Overview) and 4.3 (Activity Definitions and Annotations). Also note that SA-capable systems will be evaluated simultaneously on both surprise and known activities.

  • Step 4: Familiarize yourself with the activity definition materials that the system will process to build detection models. For each activity, a system will be provided two components: a textual description and a set of exemplar instances. The textual description includes a natural language description of the activity, common examples, and rules to determine the start and end of an activity instance. An exemplar instance consists of a video chip and a KWIVER Packet Format (KPF) annotation file outlining the people and objects involved in the activity instance. The ActEV-provided data resources include “Simulated Surprise Activities” (SimSA) to help developers understand the data resources. For example, the SimSA “SimSA5_person_closes_trunk” was derived from the MEVA known activity “person_closes_trunk”.

  • Step 5: Familiarize yourself with the ActEV Evaluation JSON files that are used as input to the system by reading the ActEV Evaluation JSON Formats Document found at https://gitlab.kitware.com/meva/meva-data-repo/-/tree/master/documents/nist-json-for-actev. In particular, review the structure and content of the “activity-index” JSON described in Section 1.2 (Activity Index). The activity index enumerates all the activity-defining materials that are provided to a CLI-compliant system during activity training. For example, see actev-data-repo/partitions/ActEV..Set7/activity-index.json.

  • Step 6: Familiarize yourself with the modifications to the ActEV Abstract CLI to support SA, found under Algorithm Submission. In particular, note the new ‘actev-train-system’ command, which takes as input an activity-index.json and a directory of video. The command is responsible for building the models to detect the activities and storing them in the CLI directory for subsequent use. When the actev-train-system command is called to train/configure the system to detect the surprise activities, the step must run within 10 hours on the NIST SDL server (see the sketch after this list).

  • Step 7: Develop a CLI-compliant system for submission
    • Follow the procedure to update the CLI to the latest version as described in the ActEV Evaluation CLI Programming Primer.
    • Implement the ‘actev-train-system’ command. Keep in mind that models need to be accessible for subsequent CLI calls, and models produced during training should be written to the same or a similar location as existing models.
    • Use the required validation set “ActEV-Eval-CLI-Validation-Set7” and include both Known Activity and Surprise Activity detections.
    • Make your submission to the SDL.
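
The Python sketch below illustrates one way an SA-capable system might enforce the 10-hour training budget inside an ‘actev-train-system’-style step. It assumes, for illustration only, that activity-index.json maps activity names to their defining materials (the real schema is in the ActEV Evaluation JSON Formats Document); train_one_activity and the model directory layout are hypothetical stand-ins for your system's own code.

import json, time
from pathlib import Path

TRAIN_BUDGET_SEC = 10 * 3600          # 10-hour limit on the NIST SDL server

def train_system(activity_index_path, video_dir, model_dir, train_one_activity):
    start = time.monotonic()
    with open(activity_index_path) as f:
        activity_index = json.load(f)  # illustrative assumption: name -> materials
    Path(model_dir).mkdir(parents=True, exist_ok=True)
    for name in activity_index:
        remaining = TRAIN_BUDGET_SEC - (time.monotonic() - start)
        if remaining <= 0:
            break                      # stop before exceeding the training budget
        # Hypothetical per-activity trainer; it should write its model under
        # model_dir so later CLI calls (e.g., video processing) can load it.
        train_one_activity(name, activity_index[name], video_dir,
                           Path(model_dir) / name, time_budget_sec=remaining)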

Datasets

Framework

The DIVA Framework is a software framework designed to provide an architecture and a set of software modules that facilitate the development of activity recognition analytics. The Framework is developed as a fully open source project on GitHub. The DIVA Framework is based on KWIVER, an open source framework designed for building complex computer vision systems. The following links will help you get started with the framework and learn more about KWIVER:
  • KWIVER Github Repository: This is the main KWIVER site; all development of the framework happens here.
  • KWIVER Issue Tracker: Submit any bug reports or feature requests for KWIVER here. If there is any question about whether your issue belongs in the KWIVER or DIVA framework issue tracker, submit it to the DIVA tracker and we'll sort it out.
  • KWIVER Main Documentation Page: The source for the KWIVER documentation is maintained in the Github repository using Sphinx. A built version is maintained on ReadTheDocs at this link. Good places to get started in the documentation, after reading the Introduction, are the Arrows and Sprokit sections, both of which are used by the KWIVER framework.
The framework-based R-C3D baseline algorithm implementation with CLI can be found at the link.

Baseline Algorithms

Kitware has adapted two "baseline" activity recognition algorithms to work within the DIVA Framework:

Visualization Tools

Annotation Tools