
TRECVID 2021 ActEV: Activities in Extended Video

ActEV 2021 Leaderboard


Updated: 2021-10-25 11:02:47 -0400
Rank | Sub. ID | Sys. ID | Dataset | Task | Team | System | Submission (hash--timestamp) | Partial AUDC* | (other reported values)
1 | 26562 | 36435 | ActEV-2018 | ActEV19_AD | INF | INF_PRE | "b1965d97b73d64216126e32418256cf83acc74d9--2021-10-19T10:19:29-04:00" | 0.39607 | no 0.30622 0.81080
2 | 26542 | 36412 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S0 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.40853 | 0.32489 0.79798
3 | 26539 | 36409 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S2 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.40947 | 0.32919 0.79953
4 | 26540 | 36410 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S3 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.41305 | 0.32866 0.80114
5 | 26546 | 36416 | ActEV-2018 | ActEV19_AD | UCF | UCF - S1 | "b1965d97b73d64216126e32418256cf83acc74d9--2021-10-19T10:19:29-04:00" | 0.43059 | no 0.34080 0.86431
6 | 26543 | 36413 | ActEV-2018 | ActEV19_AD | UCF | UCF - S2 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.43271 | no 0.34207 0.86376
7 | 26534 | 36398 | ActEV-2018 | ActEV19_AD | UCF | UCF-P | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.43562 | no 0.34466 0.85234
8 | 26532 | 36396 | ActEV-2018 | ActEV19_AD | INF | INF_full | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.44436 | no 0.35079 0.84287
9 | 26388 | 35986 | ActEV-2018 | ActEV19_AD | INF | INF | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.45115 | no 0.35161 0.84820
10 | 26544 | 36414 | ActEV-2018 | ActEV19_AD | UCF | UCF - S3 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.45700 | no 0.36994 0.86826
11 | 26467 | 36284 | ActEV-2018 | ActEV19_AD | M4D_2021 | M4D_2021_S1 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.84658 | 0.79410 0.88521
12 | 26473 | 36301 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S1 | "0e755c08a257a122eef62dd419103171431002bf--2021-09-14T11:26:46-04:00" | 0.84901 | 0.82052 0.96236
13 | 26508 | 36363 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-baseline | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.85159 | 0.81970 0.94897
14 | 26215 | 35537 | ActEV-2018 | ActEV19_AD | M4D_2021 | baseline | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.85484 | 0.79732 0.87719
15 | 26530 | 36394 | ActEV-2018 | ActEV19_AD | Team UEC | UEC_1 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.96405 | 0.95035 0.95670
16 | 23863 | 29738 | ActEV-2018 | ActEV19_AD | NIST-TEST | Test System I | "cca66d153e229d653d4d4e6e82467423ae880ea9--2021-02-26T21:10:52+00:00" | 1.00000 | no 1.00000 1.00000



*Partial AUDC is the area under the DET curve between Time-based False Alarm rates of 0 and 0.2. The value for a perfect system is 0.

Contact Us

For ActEV 2021 evaluation information (data, evaluation code, etc.), please email:

For ActEV 2021 evaluation discussion, please visit our Google Group: !forum/trecvid.actev

Activities and Tasks

List of Activities and New Names

The table below summarizes the activity list and the new names for the TRECVID ActEV 2021 evaluation. The VIRAT Video Dataset annotations created as part of the IARPA DIVA program are now available in a public repository here. The repository also contains the activity definitions used for the annotations.
The evaluation will be based on 35 of the activities listed below; we have dropped some activities with low counts. A CSV file mapping the new names to those listed in the DIVA-Annotation-Guidelines is here.

VIRAT Activity Name (Original) | VIRAT Activity Name 2020/2021
Task for the TRECVID ActEV 2021 Evaluation
In the TRECVID ActEV 2021 evaluation, there is one task, Activity Detection (AD): detecting and temporally localizing activities.
Activity Detection (AD)

For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be scored as correct, the activity type must be correct and the temporal overlap with the reference instance must meet a minimum requirement, as described in the Evaluation Plan.
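The correctness test just described can be sketched as follows. The actual overlap requirement is specified in the Evaluation Plan; the 1-second threshold used here is an assumption for illustration only.

```python
# Illustrative sketch (not the official ActEV scorer) of the correctness
# test for a system-identified activity instance: the activity type must
# match and the temporal overlap must meet a minimum requirement.
# The 1.0-second default below is an assumed value, not the official one.

def is_correct_detection(ref, hyp, min_overlap_sec=1.0):
    """ref, hyp: (activity_type, start_sec, end_sec) triples."""
    ref_type, ref_start, ref_end = ref
    hyp_type, hyp_start, hyp_end = hyp
    if ref_type != hyp_type:           # activity type must be correct
        return False
    # Length of the intersection of the two time intervals.
    overlap = min(ref_end, hyp_end) - max(ref_start, hyp_start)
    return overlap >= min_overlap_sec  # temporal overlap must be sufficient

# A 5.5-second overlap with a matching type counts as correct.
print(is_correct_detection(("Opening", 10.0, 18.0), ("Opening", 12.5, 20.0)))  # True
```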
TRECVID ActEV 2021 Evaluation


Feb 07, 2021: NIST releases the ActEV JSON training and validation sets and the index files (same as for the 2020 evaluation)
March 21, 2021 (updated date): Start of ActEV 2021 Leaderboard Evaluation
October 08, 2021, 1:00 PM EST (updated from October 01): End of ActEV 2021 Leaderboard Evaluation. The leaderboard will soon be set up with the new evaluation partition; we will announce when it is available.
Also, the original KPF files have been added to the ‘annotations’ directory for the train and validate partitions.
ActEV is a series of evaluations intended to accelerate the development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. Each evaluation challenges systems with new data, new system requirements, and/or new activities. For more details about the previous evaluations and example videos, see the main ActEV page and click on the Example Videos tab.
What is Activity Detection in Videos?
An ActEV activity is defined as “one or more people performing a specified movement or interacting with an object or group of objects”. Activity detection technologies process extended video streams, such as those from an IP camera, and automatically detect all instances of an activity by: (1) identifying the type of activity, (2) producing a confidence score indicating the presence of the instance, (3) temporally localizing the instance by indicating its begin and end times, and (4) optionally, detecting and tracking the objects (people, vehicles, other objects) involved in the activity.
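As a concrete illustration of these four outputs, a single detected instance might be represented as a record like the one below. This is an illustrative sketch only, not the official ActEV submission schema; all field names are hypothetical.

```python
# Hypothetical record for one detected activity instance, mirroring the
# four outputs described above. Field names are illustrative, not the
# official ActEV JSON submission schema.
import json

detection = {
    "activity": "person_opens_vehicle_door",   # (1) activity type
    "presenceConf": 0.87,                      # (2) confidence score
    "start_sec": 104.2, "end_sec": 109.6,      # (3) temporal localization
    "objects": [                               # (4) optional object tracks
        {"type": "person", "track_id": 3},
        {"type": "vehicle", "track_id": 7},
    ],
}
print(json.dumps(detection, indent=2))
```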
The ActEV evaluation is being conducted to assess the robustness of automatic activity detection for a multi-camera streaming video environment.
NIST invites all organizations, particularly universities and corporations, to submit their results using their technologies to the ActEV evaluation server. The evaluation is open worldwide. Participation is free. NIST does not provide funds to participants.
To take part in the ActEV evaluation you need to register on the website and acknowledge that you have read and accepted the data license to download the data.
Evaluation Task
The TRECVID ActEV 2021 evaluation is based only on the VIRAT V1 and V2 datasets.

The evaluation will be based on 35 of the activities listed in the Activities tab; the activity names have been updated for the evaluation. The data is provided in MPEG-4 format. The public video dataset can be downloaded for free, and more information about the datasets is on the Data tab.
Submitted activity detection systems must give a confidence score for each activity they detect; detected activities are then thresholded on this score. Varying the threshold trades off being sensitive enough to identify true activity instances (low threshold) against not raising false alarms when no activity is present (high threshold). Submitted systems are scored on both error types, measured by the Probability of Missed Detection (Pmiss) and the Time-based False Alarm rate (TFA). Pmiss is the proportion of activity instances for which the system did not detect the activity for at least 1 second. TFA is the proportion of time during which the system detected an activity when in fact there was none. Systems are scored for Pmiss and TFA at multiple thresholds, producing a detection error tradeoff (DET) curve. The leaderboard ranking of a system is based on a summary of its DET curve: the area under the curve across the TFA range from 0% to 20%, divided by 0.2 to normalize the value to [0, 1]. Lower numbers are better, as they reflect fewer errors.
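The leaderboard metric just described can be sketched as follows. This is an illustration of the computation, not the official ActEV scorer: it takes a list of (TFA, Pmiss) operating points from a threshold sweep and integrates the DET curve over TFA in [0, 0.2].

```python
# Illustrative sketch (not the official ActEV scorer): normalized partial
# AUDC from a list of (TFA, Pmiss) operating points obtained by sweeping
# the confidence threshold.

def partial_audc(det_points, tfa_max=0.2):
    """Area under the DET curve for TFA in [0, tfa_max], normalized to [0, 1].

    det_points: iterable of (tfa, pmiss) pairs. If no point exists at
    TFA = 0, the curve is anchored there at Pmiss = 1.0 (a system that
    detects nothing misses everything).
    """
    pts = sorted(det_points)
    if not pts or pts[0][0] > 0.0:
        pts.insert(0, (0.0, 1.0))            # anchor the curve at TFA = 0
    clipped = [(t, p) for t, p in pts if t <= tfa_max]
    if clipped[-1][0] < tfa_max:             # extend flat to the TFA limit
        clipped.append((tfa_max, clipped[-1][1]))
    # Trapezoidal integration, then normalize by the TFA range.
    area = 0.0
    for (t0, p0), (t1, p1) in zip(clipped, clipped[1:]):
        area += 0.5 * (p0 + p1) * (t1 - t0)
    return area / tfa_max

# A perfect system (Pmiss = 0 across the whole TFA range) scores 0.
print(partial_audc([(0.0, 0.0), (0.2, 0.0)]))  # 0.0
```

A system that detects nothing scores 1.0, matching the [0, 1] range and "lower is better" convention described above.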
Evaluation Plan
Task coordinator
ActEV NIST team
Register for ActEV 2021
The ActEV 2021 evaluation has ended.

TRECVID ActEV 2021 Evaluation Schedule
Feb 07, 2021: NIST releases the ActEV evaluation plan and defines the target activities (same as for the 2020 evaluation)

Feb 07, 2021: NIST releases the ActEV JSON training and validation sets and the index files (same as for the 2020 evaluation)

March 21, 2021 (updated): Start of ActEV 2021 Leaderboard Evaluation

October 08, 2021, 1:00 PM EST (updated from October 01): End of ActEV 2021 Leaderboard Evaluation

Due October 15, 2021: Speaker proposal submissions

Due November 15, 2021: TRECVID notebook draft paper

Due December 1, 2021: TRECVID workshop registration

December 7–10, 2021: TRECVID virtual workshop

TRECVID 2021 ActEV dataset
VIRAT Video Dataset
The VIRAT Video Dataset is designed to be more realistic, natural, and challenging than existing action recognition datasets for the video surveillance domain, in terms of its resolution, background clutter, scene diversity, and human activity/event categories. It has become a benchmark dataset for the computer vision community. Please download the videos from:
This Git repo is the data distribution mechanism for the ActEV evaluation. The repo presently consists of a collection of corpora (the plural of corpus) and partition definition files to be used for evaluations. Future additions will include source annotations and donated data/annotations. The repo contains textual data but not the large-sized corpora (videos, etc.).

  • Create a login account by registering (using the link above) for the TRECVID 2021 ActEV evaluation
  • During account registration, you will:
  • You will then be able to make submissions. If there is any issue, please email us at

Rules for the Leaderboard Evaluation

During the TRECVID ActEV 2021 evaluation, you can create a maximum of four systems and submit a maximum of two results per day, up to 50 results in total, for the AD task.

Challenge participants can train their systems or tune parameters using any data that complies with applicable laws and regulations. If external limitations preclude sharing such data with others, participants are still permitted to use the data, but they must inform NIST that they are doing so and provide appropriate detail about the type of data used and the limitations on its distribution.

Challenge participants agree not to probe the test videos by manual/human means, such as viewing the videos to produce activity type and timing information, from before the evaluation period through the end of the leaderboard evaluation.

All machine learning or statistical analysis algorithms must complete training, model selection, and tuning prior to running on the test data. This rule does not preclude online learning/adaptation during test data processing so long as the adaptation information is not reused for subsequent runs of the evaluation collection.

The only VIRAT data that may be used by the systems are the ActEV provided training and validation sets, associated annotations, and any derivatives of those sets (e.g. additional annotations on those videos). All other VIRAT data and associated annotations may not be used by any of the systems for the ActEV Leaderboard Evaluation.

If you have any questions, please email