ActEV Self-Reported Leaderboard (SRL) Challenge


Updates

New for the WACV HADCV'22 workshop: ActEV Self-Reported Leaderboard (SRL) Challenge
  • NIST releases ActEV SRL test dataset: September 17, 2021
  • ActEV SRL Challenge Opens: September 17, 2021
  • Deadline for ActEV SRL Challenge results submission: December 01, 2021
  • The top three teams on the ActEV SRL Challenge will be invited to the WACV HADCV'22 Workshop (https://actev.nist.gov/workshop/hadcv22): December 18, 2021
Summary
The ActEV Self-Reported Leaderboard (SRL) Challenge will provide a new test set to participants to run activity detection algorithms on their own hardware platforms and submit results to the evaluation server for scoring.

What
Participants are invited to submit their activity detection results to the evaluation server.
Who
Everyone. Anyone who registers can submit to the evaluation server.
How
Evaluation Task
In the ActEV Self-Reported Leaderboard (SRL) Challenge, the task is Activity Detection, defined as follows: given a target activity, a system automatically detects and temporally (or spatio-temporally) localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal (or spatio-temporal) overlap with a reference instance must meet a minimal requirement.
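As a rough illustration of the temporal criterion (the spatio-temporal case additionally requires spatial overlap), the sketch below scores one detection against one reference instance with a one-dimensional intersection-over-union. The 0.5 threshold and the (start, end) interval representation are illustrative assumptions, not values taken from the evaluation plan.

    # Illustrative sketch only: the actual overlap criterion is defined in the
    # SRL evaluation plan; the 0.5 threshold and the (start_sec, end_sec)
    # representation are assumptions for demonstration.
    def temporal_iou(det, ref):
        """1-D intersection-over-union between two (start_sec, end_sec) intervals."""
        inter = max(0.0, min(det[1], ref[1]) - max(det[0], ref[0]))
        union = (det[1] - det[0]) + (ref[1] - ref[0]) - inter
        return inter / union if union > 0 else 0.0

    def is_correct(det, ref, det_activity, ref_activity, min_overlap=0.5):
        """A detection counts as correct only if the activity label matches and
        the temporal overlap meets the minimal requirement."""
        return det_activity == ref_activity and temporal_iou(det, ref) >= min_overlap

    # Example: a detection of person_enters_vehicle over 10.0-14.0 s scored
    # against a reference instance over 11.0-15.0 s (IoU = 0.6 -> correct).
    print(is_correct((10.0, 14.0), (11.0, 15.0),
                     "person_enters_vehicle", "person_enters_vehicle"))
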
Data
The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the MEVA Known Facility (KF) dataset. The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.

You can download the public MEVA resources (training video, training annotations and the test set) as described on the SRL Data Tab.

Metrics
Submitted results are measured by the Probability of Missed Detection (Pmiss) and the Time-based False Alarm rate (TFA). TFA is the portion of time during which the system detected an activity when in fact there was none. Submitted results are scored for Pmiss and TFA at multiple thresholds (based on the confidence scores produced by the systems), creating a detection error tradeoff (DET) curve. We will also report the mean Average Precision (mAP) for the submitted results.
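As a minimal sketch (assuming, for illustration, that Pmiss is the fraction of reference instances left unmatched at a given confidence threshold and that TFA is the fraction of non-activity time covered by system detections; the exact alignment rules and denominators are specified in the evaluation plan, not here):

    # Hedged sketch of the two headline metrics; matches_fn is assumed to
    # implement the official correctness test for a detection/reference pair.
    def p_miss(ref_instances, detections, threshold, matches_fn):
        """detections: list of (instance, confidence) pairs."""
        kept = [det for det, conf in detections if conf >= threshold]
        missed = sum(1 for ref in ref_instances
                     if not any(matches_fn(det, ref) for det in kept))
        return missed / len(ref_instances) if ref_instances else 0.0

    def time_based_false_alarm(system_intervals, ref_intervals, video_duration):
        """Fraction of non-activity time covered by system detections,
        approximated on a one-second grid for simplicity."""
        def covered(t, intervals):
            return any(start <= t < end for start, end in intervals)
        grid = range(int(video_duration))
        non_activity = [t for t in grid if not covered(t, ref_intervals)]
        false_alarm = [t for t in non_activity if covered(t, system_intervals)]
        return len(false_alarm) / len(non_activity) if non_activity else 0.0

Sweeping the threshold across the confidence values produced by a system yields one (TFA, Pmiss) point per threshold, and those points trace the DET curve.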

The ActEV Self-Reported Leaderboard (SRL) Challenge will report system performance scores on a public leaderboard on this website.

Please see details in the ActEV Self-Reported Leaderboard (SRL) Challenge evaluation plan below or check out the ActEV Scoring Software GitHub repo.

Evaluation Plan --Coming Soon!
Contact

For ActEV Evaluation information, please email: actev-nist@nist.gov

For ActEV Evaluation Discussion, please visit our ActEV Slack.

News
December 1: The ActEV SRL Leaderboard will continue accepting submissions until December 2021.

ActEV SRL Dataset

The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the MEVA Known Facility (KF) dataset. The MEVA KF data was collected at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The MEVA KF dataset has two parts: (1) the public training and development data and (2) the ActEV SRL test dataset (available September 17th).

The MEVA KF data were collected and annotated for the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program. A primary goal of the DIVA program is to support activity detection in multi-camera environments for both DIVA performers and the broader research community.

Training and Development Data

The public KF dataset has been released as the Multiview Extended Video with Activities (MEVA) dataset. As of December 2019, 328 hours of ground-camera data and 4.2 hours of Unmanned Aerial Vehicle video have been released. 160 hours of the ground-camera video have been annotated by the same team that annotated the ActEV test set. Additional annotations have been performed by the public and are also available in the annotation repository.

ActEV SRL Test Data

The ActEV SRL test data will be released on September 17th.

There are four locations of data pertaining to the MEVA data resources and the evaluation. The sections below document how to obtain and use the data for the HADCV evaluation.

Data Download

You can download the public MEVA video for free from the mevadata.org website (http://mevadata.org/) by completing these steps (a consolidated sketch of the same steps follows the list):

  • Get an up-to-date copy of the ActEV Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/actev/actev-data-repo.git
    • Update: cd "Your_Directory_For_actev-data-repo"; git pull
    • Follow the steps in the top-level README.
    • Download the HADCV22 Test collection into ./partitions/HADCV22-Test-20211010 using the command: % python scripts/actev-corpora-maint.py --regex ".*drop-4-hadcv22.*" --operation download
  • Get an up-to-date copy of the MEVA Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/meva/meva-data-repo
    • Update: cd "Your_Directory_For_meva-data-repo"; git pull
  • Download the training video found in the MEVA AWS Video Data Bucket within the directories: drops-123-r13, examples, mutc-3d-model, uav-drop-01, and updates-r13. (NOTE: directory drop-4-hadcv22 is NOT a training resource).
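Below is a hypothetical convenience script that simply replays the steps above (clone or update the two repositories, then download the HADCV22 test collection). The repository URLs and the maintenance command are taken from the instructions; the local directory layout and the use of Python's subprocess module are assumptions.

    # Hypothetical helper; reproduces the manual clone/update/download steps above.
    import subprocess
    from pathlib import Path

    REPOS = {
        "actev-data-repo": "https://gitlab.kitware.com/actev/actev-data-repo.git",
        "meva-data-repo": "https://gitlab.kitware.com/meva/meva-data-repo",
    }

    def clone_or_update(name, url):
        """Clone the repo on first use, otherwise update it with 'git pull'."""
        path = Path(name)
        if path.exists():
            subprocess.run(["git", "-C", str(path), "pull"], check=True)
        else:
            subprocess.run(["git", "clone", url, str(path)], check=True)
        return path

    if __name__ == "__main__":
        actev_repo = clone_or_update("actev-data-repo", REPOS["actev-data-repo"])
        clone_or_update("meva-data-repo", REPOS["meva-data-repo"])
        # Download the HADCV22 test collection into ./partitions/HADCV22-Test-20211010
        subprocess.run(
            ["python", "scripts/actev-corpora-maint.py",
             "--regex", ".*drop-4-hadcv22.*", "--operation", "download"],
            cwd=actev_repo, check=True,
        )
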
Coming Soon
Rules for Leaderboard Evaluation Schedule

During the ActEV SRL evaluation, you can create a maximum of four systems and submit a maximum of two results per day, up to a maximum of 50 results in total.

Challenge participants may train their systems or tune parameters using any data that complies with applicable laws and regulations. However, they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used.

Challenge participants agree not to probe the test videos by manual/human means (for example, watching the videos to produce activity type and timing information) before, during, or after the evaluation.

Participants are free to publish results for their own system but must not publicly compare their results with other participants (ranking, score differences, etc.) without explicit written consent from the other participants.

While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or about winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST, in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.



ActEV SRL Evaluation Task

In the ActEV Self-Reported Leaderboard (SRL) Challenge, there are two Activity Detection tasks, one requiring temporal localization and one requiring spatio-temporal localization: given a target activity, a system automatically detects and localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal (or spatio-temporal) overlap with a reference instance must meet a minimal requirement.
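For the spatio-temporal case, one simple way to quantify overlap (an assumption for illustration, not the official congruence measure, which is defined in the evaluation plan) is a per-frame bounding-box IoU averaged over the frames where the detection and the reference co-occur:

    # Assumed box format: (x1, y1, x2, y2) in pixels; frame-indexed dictionaries.
    def box_iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def spatiotemporal_overlap(det_boxes, ref_boxes):
        """det_boxes / ref_boxes: dicts mapping frame index -> box."""
        shared = set(det_boxes) & set(ref_boxes)
        if not shared:
            return 0.0
        return sum(box_iou(det_boxes[f], ref_boxes[f]) for f in shared) / len(shared)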

Facilities

Known Facilities

Systems will be tested on the MEVA Known Facility test data. Facility data is available at https://mevadata.org, including a site map with approximate camera locations and sample FOVs, camera models, a 3D site model, and additional metadata and site information. Sample representative video from the known facility is also provided, with over 160 hours of video annotated for leaderboard activities. All available metadata and site information may be used during system development.

Activities

Known Activities

For the MEVA Known Activities (KA) tests, developers are provided a list of activities in advance for use during system development (e.g., training); systems must automatically detect and localize all instances of these activities.

Detailed activity definitions are in the ActEV Annotation Definitions for MEVA Data document.

The names of the 20 Known Activities for ActEV SRL (a subset of the SDL activity names):


  • person_closes_vehicle_door
  • person_enters_scene_through_structure
  • person_enters_vehicle
  • person_exits_scene_through_structure
  • person_exits_vehicle
  • person_interacts_with_laptop
  • person_opens_facility_door
  • person_opens_vehicle_door
  • person_picks_up_object
  • person_puts_down_object
  • person_reads_document
  • person_sits_down
  • person_stands_up
  • person_talks_to_person
  • person_texts_on_phone
  • person_transfers_object
  • vehicle_starts
  • vehicle_stops
  • vehicle_turns_left
  • vehicle_turns_right

Datasets

Framework

The DIVA Framework is a software framework designed to provide an architecture and a set of software modules that facilitate the development of activity recognition analytics. The Framework is developed as a fully open source project on GitHub and is based on KWIVER, an open source framework designed for building complex computer vision systems. The following links will help you get started with the framework and learn more about KWIVER:
  • KWIVER GitHub Repository: This is the main KWIVER site; all development of the framework happens here.
  • KWIVER Issue Tracker: Submit any bug reports or feature requests for KWIVER here. If there's any question about whether your issue belongs in the KWIVER or DIVA Framework issue tracker, submit it to the DIVA tracker and we'll sort it out.
  • KWIVER Main Documentation Page: The source for the KWIVER documentation is maintained in the GitHub repository using Sphinx; a built version is maintained on ReadTheDocs at this link. Good places to get started in the documentation, after reading the Introduction, are the Arrows and Sprokit sections, both of which are used by the KWIVER framework.
A framework-based implementation of the R-C3D baseline algorithm, with a CLI, can be found at the link.

Baseline Algorithms

Kitware has adapted two "baseline" activity recognition algorithms to work within the DIVA Framework:

Visualization Tools

Annotation Tools