ActEV Self-Reported Leaderboard (SRL) Challenge


Updates

Update for WACV HADCV’22: ActEV Self-Reported Leaderboard (SRL) Challenge
  • The primary task changed to Activity and Object Detection (AOD), which requires spatial localization.
  • The primary metric changed to Pmiss@0.1RFA.
  • Data resources were updated to support AOD; see ‘Data’.
Summary
The ActEV Self-Reported Leaderboard (SRL) Challenge provides a new test set to participants, who run activity and object detection algorithms on their own hardware platforms and submit the results to the evaluation server for scoring.

What
Participants are invited to submit their activity and object detection results to the evaluation server.
Who
Everyone. Anyone who registers can submit to the evaluation server.
How
Evaluation Task
In the ActEV SRL evaluation, there is one primary task:
The Activity and Object Detection (AOD) task: detecting and temporally/spatially localizing all instances of activities from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the activity type must be correct, and the temporal and spatial overlap must meet a minimal requirement.

In the ActEV SRL evaluation, there is one secondary task:
The Activity Detection (AD) task: detecting and temporally localizing activities. Given a target activity, a system automatically detects and temporally localizes all instances of the activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the activity type must be correct and must meet a minimal temporal overlap requirement.

Evaluation Type
The ActEV Self-Reported Leaderboard (SRL) Challenge is a self-reported leaderboard (take-home) evaluation: participants download an ActEV SRL test set, run their activity detection algorithms on it using their own hardware platforms, and then submit their system output to the evaluation server for scoring.
Data
The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the Multiview Extended Video with Activities (MEVA) Known Facility (KF) dataset. The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.

You can download the public MEVA resources (training video, training annotations and the test set) as described on the SRL Data Tab.

Metrics
For both the AOD (primary) and AD (secondary) tasks, submitted results are measured by the Probability of Missed Detection (Pmiss) at a Rate of Fixed False Alarm (RateFA) of 0.1 (Pmiss@0.1RFA). RateFA is the average number of false-alarm activity instances per minute. Pmiss is the proportion of activity instances that the system did not detect within the required temporal (AD) or spatio-temporal (AOD) overlap. Submitted results are scored for Pmiss and RateFA at multiple thresholds (based on the confidence scores produced by the systems), creating a detection error tradeoff (DET) curve.
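To make the metric concrete, here is a minimal, illustrative sketch of how Pmiss and RateFA trace out a DET curve and how Pmiss@0.1RFA can be read off it. This is not the official scorer (the ActEV Scoring Software also performs instance alignment between system and reference instances); here each detection is assumed to be pre-labeled as matched to a reference instance or as a false alarm.

```python
# Illustrative Pmiss@RateFA sketch; the official metric is computed by
# the ActEV Scoring Software, which also handles instance matching.

def det_curve(detections, n_ref, minutes):
    """detections: list of (confidence, matched_ref_id or None).
    Returns (rate_fa, p_miss) pairs, one per confidence threshold."""
    points = []
    thresholds = sorted({conf for conf, _ in detections}, reverse=True)
    for thr in thresholds:
        kept = [(c, m) for c, m in detections if c >= thr]
        false_alarms = sum(1 for _, m in kept if m is None)
        detected_refs = {m for _, m in kept if m is not None}
        rate_fa = false_alarms / minutes      # false alarms per minute
        p_miss = 1.0 - len(detected_refs) / n_ref
        points.append((rate_fa, p_miss))
    return points

def pmiss_at_rfa(points, target_rfa=0.1):
    """Lowest Pmiss among operating points with RateFA <= target."""
    eligible = [p for r, p in points if r <= target_rfa]
    return min(eligible) if eligible else 1.0

# Toy example: 4 detections, 3 reference instances, 10 minutes of video.
dets = [(0.9, "ref1"), (0.8, None), (0.7, "ref2"), (0.4, None)]
curve = det_curve(dets, n_ref=3, minutes=10.0)
print(pmiss_at_rfa(curve, 0.1))  # lowest Pmiss achievable at RateFA <= 0.1
```

Sweeping the confidence threshold from high to low trades missed detections against false alarms; the leaderboard reports the Pmiss value at the RateFA = 0.1 operating point of that curve.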


The ActEV Self-Reported Leaderboard (SRL) Challenge will report system performance scores on a public leaderboard on this website.

Please see details in the ActEV Self-Reported Leaderboard (SRL) Challenge evaluation plan below or check out the Updated ActEV Scoring Software GitHub repo.

Updated Evaluation Plan --Coming Soon!
Contact

For ActEV Evaluation information, please email: actev-nist@nist.gov

For ActEV Evaluation Discussion, please visit our ActEV Slack.

News
01Dec
The ActEV SRL Leaderboard will continue accepting submissions until December 01, 2021.

New for WACV HADCV’22: ActEV Self-Reported Leaderboard (SRL) Challenge
  • NIST releases ActEV SRL test dataset: (update) September 17, 2021
  • ActEV SRL Challenge Opens: (update) September 17, 2021
  • Deadline for ActEV SRL Challenge results submission: December 01, 2021, 4:00 PM EST
  • The top three teams on the ActEV SRL Challenge (based on the AOD ranking) will be invited to the WACV'22 HADCV workshop: December 18, 2021
ActEV SRL Dataset

The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the Multiview Extended Video with Activities (MEVA) Known Facility (KF) dataset. The MEVA KF data was collected at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The MEVA KF dataset has two parts: (1) the public training and development data and (2) the ActEV SRL test dataset (available September 10, 2021).

The MEVA KF data were collected and annotated for the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program. A primary goal of the DIVA program is to support activity detection in multi-camera environments for both DIVA performers and the broader research community.

Training and Development Data

In December 2019, the public MEVA dataset was released with 328 hours of ground-camera video and 4.2 hours of Unmanned Aerial Vehicle (UAV) video. 160 hours of the ground-camera video have been annotated by the same team that annotated the ActEV test set. Additional annotations have been performed by the public and are also available in the annotation repository.

ActEV SRL Test Data

The ActEV SRL Test dataset has been released.

The MEVA data resources used in the evaluation are hosted in four locations. The sections below document how to obtain and use the data for the HADCV evaluation.

Data Download

You can download the public MEVA video for free from the mevadata.org website (http://mevadata.org/) by completing these steps:

SRL Test data

  • Get an up-to-date copy of the ActEV Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/actev/actev-data-repo.git
    • Update: cd "Your_Directory_For_actev-data-repo"; git pull
    • Follow the steps in the top-level README.
    • Download the HADCV22 SRL Test dataset into ./partitions/HADCV22-Test-20211010 using the command: % python scripts/actev-corpora-maint.py --regex ".*drop-4-hadcv22.*" --operation download
MEVA Training/Development data
  • Get an up-to-date copy of the MEVA Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/meva/meva-data-repo
    • Update: cd "Your_Directory_For_meva-data-repo"; git pull
  • Download the training data collection found in the MEVA AWS Video Data Bucket within the directories: drops-123-r13, examples, mutc-3d-model, uav-drop-01, and updates-r13. (NOTE: directory drop-4-hadcv22 is NOT a training resource).
Rules and Schedule for the Leaderboard Evaluation

During the ActEV SRL evaluation, you can create a maximum of four systems and submit a maximum of two results per day and a maximum of 50 results in total.

Challenge participants can train their systems or tune parameters using any data that complies with applicable laws and regulations. However, they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used.

Challenge participants agree not to probe the test videos by manual/human means, such as watching the videos to produce activity type and timing information, before, during, or after the evaluation.

Participants are free to publish results for their own system but must not publicly compare their results with other participants (ranking, score differences, etc.) without explicit written consent from the other participants.

While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or about winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: "NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results."

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.



ActEV SRL Evaluation Task

In the ActEV SRL evaluation, there is one primary task:
The Activity and Object Detection (AOD) task: detecting and temporally/spatially localizing all instances of activities from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the activity type must be correct, and the temporal and spatial overlap must meet a minimal requirement.

In the ActEV SRL evaluation, there is one secondary task:
The Activity Detection (AD) task: detecting and temporally localizing activities. Given a target activity, a system automatically detects and temporally localizes all instances of the activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the activity type must be correct and must meet a minimal temporal overlap requirement.
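As an illustration of the temporal overlap requirement, a minimal temporal intersection-over-union (IoU) check might look like the sketch below. The threshold value here is hypothetical; the official matching rules and thresholds are specified in the ActEV evaluation plan and implemented in the ActEV Scoring Software.

```python
def temporal_iou(a, b):
    """IoU of two (start, end) intervals, in frames or seconds."""
    a_start, a_end = a
    b_start, b_end = b
    inter = max(0.0, min(a_end, b_end) - max(a_start, b_start))
    union = (a_end - a_start) + (b_end - b_start) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical minimum-overlap threshold for illustration only; the
# official value is defined in the ActEV evaluation plan.
MIN_TIOU = 0.2

def matches(sys_interval, ref_interval, threshold=MIN_TIOU):
    """Would a system instance count as hitting a reference instance?"""
    return temporal_iou(sys_interval, ref_interval) >= threshold

print(temporal_iou((10, 20), (15, 30)))  # 5 / 20 = 0.25
```

The AOD task applies the same idea spatio-temporally: in addition to temporal overlap, the system's bounding boxes must sufficiently overlap the reference boxes.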

Facilities

Known Facilities

Systems will be tested on MEVA Known Facility test data. Facility data is available at https://mevadata.org, including a site map with approximate camera locations and sample fields of view (FOVs), camera models, a 3D site model, and additional metadata and site information. Sample representative video from the known facility is also provided, with over 160 hours of video annotated for leaderboard activities. All available metadata and site information may be used during system development.

Activities

Known Activities

For the MEVA Known Activities (KA) tests, developers are provided a list of activities in advance for use during system development (e.g., training); systems must automatically detect and localize all instances of these activities.

Detailed activity definitions are in the ActEV Annotation Definitions for MEVA Data document.

The names of the 20 Known Activities for ActEV SRL (a subset of the SDL names):


person_closes_vehicle_door
person_enters_scene_through_structure
person_enters_vehicle
person_exits_scene_through_structure
person_exits_vehicle
person_interacts_with_laptop
person_opens_facility_door
person_opens_vehicle_door
person_picks_up_object
person_puts_down_object
person_reads_document
person_sits_down
person_stands_up
person_talks_to_person
person_texts_on_phone
person_transfers_object
vehicle_starts
vehicle_stops
vehicle_turns_left
vehicle_turns_right

Datasets

Framework

The DIVA Framework is a software framework that provides an architecture and a set of software modules to facilitate the development of activity recognition analytics. The framework is developed as a fully open-source project on GitHub. The DIVA Framework is based on KWIVER, an open-source framework designed for building complex computer vision systems. The following links will help you learn more about KWIVER:
  • KWIVER Github Repository This is the main KWIVER site, all development of the framework happens here.
  • KWIVER Issue Tracker Submit any bug reports or feature requests for KWIVER here. If there is any question about whether your issue belongs in the KWIVER or DIVA Framework issue tracker, submit it to the DIVA tracker and we'll sort it out.
  • KWIVER Main Documentation Page The source for the KWIVER documentation is maintained in the GitHub repository using Sphinx; a built version is maintained on ReadTheDocs at this link. Good places to start, after reading the Introduction, are the Arrows and Sprokit sections, both of which are used by the DIVA Framework.
A framework-based R-C3D baseline algorithm implementation, with a CLI, can be found at the link.

Baseline Algorithms

Kitware has adapted two "baseline" activity recognition algorithms to work within the DIVA Framework:

Visualization Tools

Annotation Tools