
TREC'24 ActEV Self-Reported Leaderboard (SRL) Challenge

ActEV Self-Reported Leaderboard (SRL) Challenge (now running under TREC 2024)
Updates

  • The Leaderboard is reporting results for TREC '24
  • TREC'24 conference: Nov 18-22 (hybrid)
  • TREC'24 ActEV test dataset release: Same as for TRECVID ActEV SRL 2023 and 2022
  • TREC'24 ActEV SRL Challenge starts June 1, 2024.
  • TREC'24 ActEV SRL Challenge results submission deadline: October 7, 2024, 4:00 PM EST
  • The primary task is Activity and Object Detection (AOD); the primary metric is Pmiss@0.1RFA.
Summary
The ActEV Self-Reported Leaderboard (SRL) Challenge is now running under TREC 2024 and will provide a test set to participants to run activity and object detection algorithms on their own hardware platforms and submit results to the evaluation server for scoring.

What
Participants are invited to submit their activity and object detection results to the evaluation server.
Who
Everyone. Anyone who registers can submit to the evaluation server.
How
Evaluation Task
In the ActEV SRL evaluation, there is one primary task:
The Activity and Object Detection (AOD) task requires detecting and temporally and spatially localizing all instances of an activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal and spatial overlap must meet a minimum requirement.

In the ActEV SRL evaluation, there is also one secondary task:
The Activity Detection (AD) task requires detecting and temporally localizing activities. Given a target activity, a system automatically detects and temporally localizes all instances of the activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the detection must meet a minimum temporal overlap requirement.

Evaluation Type
The ActEV Self-Reported Leaderboard (SRL) Challenge is a self-reported (take-home) evaluation: participants download the ActEV SRL test set, run their activity detection algorithms on the test set using their own hardware platforms, and then submit their system output to the evaluation server for scoring.
Data
The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the Multiview Extended Video with Activities (MEVA) Known Facility (KF) dataset. The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community.

You can download the public MEVA resources (training video, training annotations and the test set) as described on the SRL Data Tab.

ActEV SRL Test Datasets
The TREC 2024 ActEV SRL test dataset is the same as for TRECVID'23 ActEV SRL, TRECVID'22 ActEV SRL, and the CVPR'22 ActivityNet ActEV SRL challenge.

Metrics
For both the AOD (primary) and AD (secondary) tasks, the submitted results are measured by the Probability of Missed Detection (Pmiss) at a Rate of Fixed False Alarm (RateFA) of 0.1 (Pmiss@0.1RFA). RateFA is the average number of false-alarm activity instances per minute. Pmiss is the proportion of activity instances for which the system did not detect the activity with the required temporal (AD) or spatio-temporal (AOD) overlap. Submitted results are scored for Pmiss and RateFA at multiple thresholds (based on confidence scores produced by the systems), creating a detection error tradeoff (DET) curve.
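
To illustrate how the Pmiss@0.1RFA operating point relates to the DET curve, the sketch below sweeps a confidence threshold over scored instances and reads off the lowest Pmiss whose RateFA does not exceed 0.1 false alarms per minute. This is a simplified, unofficial sketch (the official metric is computed by the ActEV Scoring Software); the function and variable names are illustrative only.

    # Unofficial, simplified illustration of a Pmiss@RateFA operating point.
    def pmiss_at_rfa(detections, n_ref_instances, total_minutes, target_rfa=0.1):
        # detections: (confidence, is_correct) pairs, one per system-reported instance,
        # where is_correct marks instances that align to a reference instance.
        dets = sorted(detections, key=lambda d: d[0], reverse=True)
        tp = fp = 0
        best_pmiss = 1.0
        for confidence, is_correct in dets:
            if is_correct:
                tp += 1
            else:
                fp += 1
            rate_fa = fp / total_minutes          # false alarms per minute at this threshold
            pmiss = 1.0 - tp / n_ref_instances    # fraction of reference instances missed
            if rate_fa <= target_rfa:             # (rate_fa, pmiss) is one DET-curve point
                best_pmiss = min(best_pmiss, pmiss)
        return best_pmiss

    # Example: 3 reference instances in 10 minutes of video.
    # pmiss_at_rfa([(0.9, True), (0.8, False), (0.4, True)], 3, 10.0) -> 0.333...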


The ActEV Self-Reported Leaderboard (SRL) Challenge will report system performance scores on a public leaderboard on this website.

Please see details in the ActEV Self-Reported Leaderboard (SRL) Challenge evaluation plan below or check out the Updated ActEV Scoring Software GitHub repo.

Updated SRL Evaluation Plan
Contact

For ActEV Evaluation information, please email: actev-nist@nist.gov

For ActEV Evaluation Discussion, please visit our ActEV Slack.

News
1June
TREC'24 ActEV SRL Leaderboard opens June 1, 2024
7Oct
Deadline for TREC'24 ActEV SRL Challenge results submission

ActEV Self-Reported Leaderboard (SRL) Challenge

TREC 2024 conference Task: ActEV Self-Reported Leaderboard (SRL) Challenge
  • TREC'24 ActEV test dataset release: May 1, 2024 (same as for TRECVID'23 ActEV SRL)
  • ActEV SRL Challenge opens: June 1, 2024
  • Deadline for ActEV SRL Challenge results submission: October 7, 2024, 4:00 PM EST
  • All participating teams are invited to the TREC workshop at the TREC 2024 conference: Nov 18-22 (hybrid)
ActEV SRL Dataset

The ActEV Self-Reported Leaderboard (SRL) Challenge is based on the Multiview Extended Video with Activities (MEVA) Known Facility (KF) dataset. The MEVA KF data was collected at the Muscatatuck Urban Training Center (MUTC) with a team of over 100 actors performing in various scenarios. The MEVA KF dataset has two parts: (1) the public training and development data and (2) the ActEV SRL test dataset.

The MEVA KF data were collected and annotated for the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program. A primary goal of the DIVA program is to support activity detection in multi-camera environments for both DIVA performers and the broader research community.

Training and Development Data

In December 2019, the public MEVA dataset was released with 328 hours of ground-camera video and 4.2 hours of unmanned aerial vehicle (UAV) video. 160 hours of the ground-camera video have been annotated by the same team that annotated the ActEV test set. Additional annotations have been performed by the public and are also available in the annotation repository.

ActEV SRL Test Datasets


The TREC 2024 ActEV SRL test dataset has been released and is the same as the one used for TRECVID 2023 ActEV SRL, TRECVID 2022 ActEV SRL, and the CVPR'22 ActivityNet ActEV SRL challenge.

There are four locations of data pertaining to the MEVA data resources and the evaluation. The sections below document how to obtain and use the data for the TREC 2024 ActEV SRL evaluation (the same data as used for the HADCV evaluation).

  • http://mevadata.org - general information about MEVA.
  • MEVA AWS Video Data Bucket - The AWS bucket contains the video data for download.
  • https://gitlab.kitware.com/meva/meva-data-repo - The GIT repo for public annotations.
  • https://gitlab.kitware.com/actev/actev-data-repo - The GIT repo for files pertaining to ActEV and HADCV evaluations. This repo is the distribution mechanism for the TREC'24 ActEV SRL evaluation-related materials. The evaluations make use of multiple data sets. This repo is a nexus point between the evaluations and the utilized data sets. The repo consists of partition definitions (e.g., train, validation, or test) to be used for the evaluations.

Data Download

You can download the public MEVA video for free from the mevadata.org website (http://mevadata.org/) by completing these steps:

SRL Test data

  • Get an up-to-date copy of the ActEV Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'. A consolidated sketch of these steps appears after this list.
    • Clone: git clone https://gitlab.kitware.com/actev/actev-data-repo.git
    • Update: cd "Your_Directory_For_actev-data-repo"; git pull
    • Follow the steps in the top-level README.
    • Download the TREC 2024 ActEV SRL Test dataset (same as for the HADCV'22 workshop) into ./partitions/HADCV22-Test-20211010 using the command: % python scripts/actev-corpora-maint.py --regex ".*drop-4-hadcv22.*" --operation download
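
For convenience, the clone/update and download steps above can be scripted. The following is a minimal, unofficial sketch that assumes git and Python are on your PATH and simply runs the commands listed above.

    # Unofficial convenience sketch for fetching the TREC 2024 ActEV SRL test dataset.
    import subprocess
    from pathlib import Path

    REPO_URL = "https://gitlab.kitware.com/actev/actev-data-repo.git"
    repo_dir = Path("actev-data-repo")

    # Clone the ActEV Data Repo the first time; otherwise update it with 'git pull'.
    if repo_dir.exists():
        subprocess.run(["git", "pull"], cwd=repo_dir, check=True)
    else:
        subprocess.run(["git", "clone", REPO_URL, str(repo_dir)], check=True)

    # Download the test dataset (drop-4-hadcv22) into ./partitions/HADCV22-Test-20211010.
    subprocess.run(
        ["python", "scripts/actev-corpora-maint.py",
         "--regex", ".*drop-4-hadcv22.*",
         "--operation", "download"],
        cwd=repo_dir, check=True,
    )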
MEVA Training/Development data
  • Get an up-to-date copy of the MEVA Data Repo via Git. You'll need to either clone the repo (the first time you access it) or update a previously downloaded repo with 'git pull'.
    • Clone: git clone https://gitlab.kitware.com/meva/meva-data-repo
    • Update: cd "Your_Directory_For_meva-data-repo"; git pull
  • Download the training data collection found in the MEVA AWS Video Data Bucket within the directories: drops-123-r13, examples, mutc-3d-model, uav-drop-01, and updates-r13. (NOTE: directory drop-4-hadcv22 is NOT a training resource).
Rules for Leaderboard Evaluation Schedule

During the TREC'24 ActEV SRL evaluation, you can create a maximum of three systems and submit a maximum of two results per day and a maximum of 50 results in total.

Challenge participants can train their systems or tune parameters using any data that complies with applicable laws and regulations. However, they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used.

Challenge participants agree not to probe the test videos via manual/human means, such as viewing the videos to produce activity type and timing information, before, during, or after the evaluation.

Participants are free to publish results for their own system but must not publicly compare their results with other participants (ranking, score differences, etc.) without explicit written consent from the other participants.

While participants may report their own results, participants may not make advertising claims about their standing in the evaluation, regardless of rank, or winning the evaluation, or claim NIST endorsement of their system(s). The following language in the U.S. Code of Federal Regulations (15 C.F.R. § 200.113) shall be respected: NIST does not approve, recommend, or endorse any proprietary product or proprietary material. No reference shall be made to NIST, or to reports or results furnished by NIST in any advertising or sales promotion which would indicate or imply that NIST approves, recommends, or endorses any proprietary product or proprietary material, or which has as its purpose an intent to cause directly or indirectly the advertised product to be used or purchased because of NIST test reports or results.

At the conclusion of the evaluation, NIST may generate a report summarizing the system results for conditions of interest. Participants may publish or otherwise disseminate these charts, unaltered and with appropriate reference to their source.



ActEV SRL Evaluation Task

In the ActEV SRL evaluation, there is one primary task:
The Activity and Object Detection (AOD) task requires detecting and temporally and spatially localizing all instances of an activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct, and the temporal and spatial overlap must meet a minimum requirement.

In the ActEV SRL evaluation, there is one secondary task:
The Activity Detection (AD) task requires detecting and temporally localizing activities. Given a target activity, a system automatically detects and temporally localizes all instances of the activity from predefined activity classes. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the detection must meet a minimum temporal overlap requirement.
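
The exact alignment procedure and overlap thresholds are specified in the SRL evaluation plan and implemented in the ActEV Scoring Software. Purely as an illustration of the kind of check involved, the sketch below scores a single system instance against a reference instance using one-dimensional (temporal) intersection-over-union; the 0.5 threshold is a placeholder, not the official value.

    # Illustrative only; not the official ActEV alignment or overlap criterion.
    def temporal_iou(ref, hyp):
        # ref, hyp: (start, end) intervals in frames or seconds.
        inter = max(0.0, min(ref[1], hyp[1]) - max(ref[0], hyp[0]))
        union = (ref[1] - ref[0]) + (hyp[1] - hyp[0]) - inter
        return inter / union if union > 0 else 0.0

    def instance_is_correct(ref_label, hyp_label, ref_span, hyp_span, min_overlap=0.5):
        # Correct activity type plus sufficient temporal overlap (placeholder threshold).
        return ref_label == hyp_label and temporal_iou(ref_span, hyp_span) >= min_overlap

    # Example: a "person_enters_vehicle" detection with ~0.8 temporal IoU is accepted.
    # instance_is_correct("person_enters_vehicle", "person_enters_vehicle", (100, 200), (110, 210))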

Facilities

Known Facilities

Systems will be tested on MEVA Known Facility test data. Facility data is available at https://mevadata.org including a site map with approximate camera locations and sample FOVs, camera models, a 3D site model, and additional metadata and site information. Sample representative video from the known facility is also provided, with over 160 hours of video annotated for leaderboard activities. All available metadata and site information may be used during system development.

Activities

Known Activities

For the MEVA Known Activities (KA) tests, developers are provided with a list of activities in advance for use during system development (e.g., training); the system must automatically detect and localize all instances of these activities.

Detailed activity definitions are in the ActEV Annotation Definitions for MEVA Data document.

The names of the 20 Known Activities for ActEV SRL (a subset of the SDL activity names):


person_closes_vehicle_door person_reads_document
person_enters_scene_through_structure person_sits_down
person_enters_vehicle person_stands_up
person_exits_scene_through_structure person_talks_to_person
person_exits_vehicle person_texts_on_phone
person_interacts_with_laptop person_transfers_object
person_opens_facility_door vehicle_starts
person_opens_vehicle_door vehicle_stops
person_picks_up_object vehicle_turns_left
person_puts_down_object vehicle_turns_right

Datasets

Framework

The DIVA Framework is a software framework designed to provide an architecture and a set of software modules that facilitate the development of activity recognition analytics. The Framework is developed as a fully open source project on GitHub. The DIVA Framework is based on KWIVER, an open source framework designed for building complex computer vision systems. The following links will help you learn more about KWIVER:
  • KWIVER GitHub Repository This is the main KWIVER site; all development of the framework happens here.
  • KWIVER Issue Tracker Submit any bug reports or feature requests for KWIVER here. If there is any question about whether your issue belongs in the KWIVER or DIVA Framework issue tracker, submit it to the DIVA tracker and we'll sort it out.
  • KWIVER Main Documentation Page The source for the KWIVER documentation is maintained in the GitHub repository using Sphinx. A built version is maintained on ReadTheDocs at this link. Good places to get started in the documentation, after reading the Introduction, are the Arrows and Sprokit sections, both of which are used by the KWIVER framework.
A framework-based R-C3D baseline algorithm implementation with a CLI can be found at the link.

Baseline Algorithms

KITWARE has adapted two "baseline" activity recognition algorithms to work within the DIVA Framework:

Visualization Tools

Annotation Tools