TRECVID 2020 ActEV: Activities in Extended Video

ActEV 2020 Leaderboard


Updated: 2020-12-17 20:42:47 -0500
Rank | ID | Dataset | Task | Team | System | Submission (commit--timestamp) | Partial AUDC* | Flag | Metric A | Metric B
1 | 26150 | ActEV-2018 | ActEV19_AD | INF | INF | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.42307 | no | 0.33241 | 0.80965
2 | 25954 | ActEV-2018 | ActEV19_AD | INF | INF_full | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.42408 | no | 0.33289 | 0.80802
3 | 25882 | ActEV-2018 | ActEV19_AD | INF | INF_PRE | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.43134 | no | 0.34306 | 0.83349
4 | 25751 | ActEV-2018 | ActEV19_AD | INF | INF_MMVG | a92dd379255c91e3f4272bdeec5d11275606a00b--2020-09-25T15:28:18-04:00 | 0.43988 | no | 0.34321 | 0.84598
5 | 27303 | ActEV-2018 | ActEV19_AD | UCF | UCF - S3 | d7bbc36a502220c6de75bc47a1148f851f35a0a9--2020-11-18T10:23:08-05:00 | 0.54830 | no | 0.50285 | 0.83621
6 | 25924 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S1 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.55515 | | 0.48779 | 0.84519
7 | 25950 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S0 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.55673 | | 0.48391 | 0.84506
8 | 25948 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S3 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.55702 | | 0.48962 | 0.83997
9 | 25947 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S2 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.55882 | | 0.49325 | 0.84189
10 | 27302 | ActEV-2018 | ActEV19_AD | UCF | UCF - S2 | d7bbc36a502220c6de75bc47a1148f851f35a0a9--2020-11-18T10:23:08-05:00 | 0.57149 | no | 0.52984 | 0.83225
11 | 26324 | ActEV-2018 | ActEV19_AD | UCF | UCF-P | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.58485 | no | 0.54730 | 0.83540
12 | 26321 | ActEV-2018 | ActEV19_AD | UCF | UCF - S1 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.58485 | no | 0.54730 | 0.83540
13 | 26345 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-SF2 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.79753 | | 0.75502 | 0.87889
14 | 25793 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-baseline | a92dd379255c91e3f4272bdeec5d11275606a00b--2020-09-25T15:28:18-04:00 | 0.81868 | | 0.78228 | 0.87679
15 | 26344 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-SF | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.83456 | | 0.80451 | 0.88326
16 | 25916 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-SRM | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.85508 | | 0.83174 | 0.87881
17 | 26323 | ActEV-2018 | ActEV19_AD | CERTH-ITI | P | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.86576 | | 0.84454 | 0.88237
18 | 26225 | ActEV-2018 | ActEV19_AD | CERTH-ITI | YR16 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.88511 | | 0.86165 | 0.89439
19 | 26155 | ActEV-2018 | ActEV19_AD | CERTH-ITI | YRW16 | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.88530 | | 0.86136 | 0.91187
20 | 26100 | ActEV-2018 | ActEV19_AD | CERTH-ITI | I3D_base | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.93125 | | 0.92318 | 0.92850
21 | 25832 | ActEV-2018 | ActEV19_AD | Team UEC | UEC | 5230d4cbdfe6434ab5487fe89453fdd4ae937791--2020-10-16T12:57:21-04:00 | 0.95168 | | 0.95329 | 0.98300
22 | 27530 | ActEV-2018 | ActEV19_AD | kindai_kobe | kindai_ogu_multilabel | efdc16835253b0eec6a4f9dfc2c07cf45caab43c--2020-12-14T13:15:45-05:00 | 0.96267 | | 0.95204 | 0.93905
23 | 26385 | ActEV-2018 | ActEV19_AD | Team UEC | UEC-Test | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.96374 | | 0.95005 | 0.95602
24 | 25949 | ActEV-2018 | ActEV19_AD | kindai_kobe | kindai_ogu_baseline | f36ab9db2b88877c01ca0536a46daa099b435e76--2020-10-19T14:57:25-04:00 | 0.96820 | | 0.96443 | 0.95665



*Partial AUDC is the area under the DET curve between a Time-based False Alarm (TFA) rate of 0 and 0.2. A perfect system has a value of 0.

Contact Us

For ActEV 2020 Evaluation information (data, evaluation code, etc.), please email:

For ActEV 2020 Evaluation discussion, please visit our Google Group (!forum/trecvid.actev).

Activities and Tasks

List of Activities and New Names

The table below summarizes the activity list and the new names for the TRECVID ActEV 2020 evaluation. The VIRAT Video Dataset annotations created as part of the IARPA DIVA program are now available in a public repository here. The repository also contains the activity definitions used for the annotations.
The evaluation will be based on 35 of the activities listed below; we have dropped some of the activities with low instance counts. The CSV file mapping the new names to the ones listed in the DIVA-Annotation-Guidelines is here.

VIRAT Activity Name (Original) | VIRAT Activity Name (2020)
Task for the TRECVID ActEV 2020 Evaluation
In the TRECVID ActEV 2020 evaluation, there is one task: Activity Detection (AD), which requires detecting and temporally localizing activities.
Activity Detection (AD)

For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the temporal overlap must fall within a minimal requirement as described in the Evaluation Plan.
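The temporal-overlap requirement can be sketched as a simple interval computation. This is a minimal illustration only; the exact alignment rules and thresholds are defined in the Evaluation Plan, and the `min_overlap` parameter here is an illustrative placeholder:

```python
def temporal_overlap(sys_interval, ref_interval):
    """Overlap in seconds between two (begin, end) time intervals."""
    begin = max(sys_interval[0], ref_interval[0])
    end = min(sys_interval[1], ref_interval[1])
    return max(0.0, end - begin)

def is_correct_detection(sys_interval, ref_interval, min_overlap=1.0):
    """A system instance counts as correct only if it overlaps the
    reference instance by at least min_overlap seconds (placeholder
    value; see the Evaluation Plan for the real requirement)."""
    return temporal_overlap(sys_interval, ref_interval) >= min_overlap

# System detects [10 s, 25 s]; reference instance spans [12 s, 30 s]:
print(is_correct_detection((10.0, 25.0), (12.0, 30.0)))  # True (13 s overlap)
```

Note that the activity type must also match; the interval check above is only the temporal half of the correctness test.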
TRECVID ActEV 2020 Evaluation

New TRECVID ActEV deadline: Friday, November 06, 2020, 1:00 PM EST, but we will keep the leaderboard running until December 15, 2020.
June 09, 2020: We’ve made new partitions for the TRECVID 2020 ActEV data sets and added them to the actev-data-repo.
To get the updates, do a ‘git pull’. There are three new partitions:
  • partitions/ActEV20-TRECVID-eval-20200604
  • partitions/ActEV20-TRECVID-train-20200604
  • partitions/ActEV20-TRECVID-validate-20200604
The leaderboard will soon be set up with the new eval partition. We will inform you when it is available.
Also, the original KPF files have been added to the ‘annotations’ directory for the train and validate partitions.
ActEV is a series of evaluations to accelerate development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. Each evaluation will challenge systems with new data, system requirements, and/or new activities. For more details about the previous evaluations and example videos, see the main ActEV page and click on the example videos tab.
What is Activity Detection in Videos?
An ActEV activity is defined to be “one or more people performing a specified movement or interacting with an object or group of objects”. Activity detection technologies process extended video streams, such as those from an IP camera, and automatically detect all instances of an activity by: (1) identifying the type of activity, (2) producing a confidence score indicating the presence of the instance, (3) temporally localizing the instance by indicating its begin and end times, and (4) optionally, detecting and tracking the objects (people, vehicles, other objects) involved in the activity.
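As a rough illustration, a single detected instance carrying outputs (1)–(3), plus the optional object tracks of (4), might be represented like this. The field names below are hypothetical and are not the official ActEV JSON schema:

```python
# Hypothetical record for one detected activity instance.
# Field names are illustrative only, not the official ActEV output format.
detection = {
    "activity": "person_opens_vehicle_door",          # (1) activity type
    "presenceConf": 0.87,                             # (2) confidence score
    "localization": {                                 # (3) temporal extent
        "begin_frame": 3012,
        "end_frame": 3156,
    },
    "objects": [                                      # (4) optional object tracks
        {"objectType": "person", "track_id": 1},
        {"objectType": "vehicle", "track_id": 2},
    ],
}

print(detection["activity"], detection["presenceConf"])
```

Varying a threshold over the confidence score is what sweeps out the DET curve used for scoring.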
The ActEV evaluation is being conducted to assess the robustness of automatic activity detection for a multi-camera streaming video environment.
NIST invites all organizations, particularly universities and corporations, to submit their results using their technologies to the ActEV evaluation server. The evaluation is open worldwide. Participation is free. NIST does not provide funds to participants.
To take part in the ActEV evaluation you need to register on the website and acknowledge that you have read and accepted the data license to download the data.
Evaluation Task
In the TRECVID ActEV 2020 evaluation, there is one task: Activity Detection (AD), which requires detecting and temporally localizing activities.
Activity Detection (AD)
For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the temporal overlap must fall within a minimal requirement as described in the Evaluation Plan.
The TRECVID ActEV 2020 evaluation is based only on the VIRAT V1 and V2 datasets.

The evaluation will be based on 35 of the activities listed in the activities tab; the names have been updated for the evaluation. The data is provided in MPEG-4 format. The public VIRAT video dataset is free to download, and more information about the datasets is on the data tab.
Submitted activity detection systems must give a confidence score for each activity they detect. Detected activities are then thresholded based on this confidence score. Varying the threshold trades off being sensitive enough to identify true activity instances (low threshold) against not producing false alarms when no activity is present (high threshold). Submitted systems are scored on both of these, measured by Probability of Missed Detection (Pmiss) and Time-based False Alarm (TFA). Pmiss is the proportion of activity instances for which the system did not detect the activity for at least 1 second. TFA is the proportion of time during which the system detected an activity when in fact there was none. Submitted systems are scored for Pmiss and TFA at multiple thresholds, creating a detection error tradeoff (DET) curve. The leaderboard ranking of a system is based on a summary of its DET curve: the area under the DET curve over the TFA range from 0% to 20%, divided by 0.2 to normalize the value to [0, 1]. Lower numbers are better, as they reflect fewer errors.
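The normalized partial AUDC summary can be sketched as follows. This is a minimal sketch that assumes linear interpolation between a system's (TFA, Pmiss) operating points; the official scorer's interpolation rules may differ:

```python
def partial_audc(points, tfa_max=0.2):
    """points: list of (tfa, pmiss) operating points from a DET curve.
    Returns the area under the curve for TFA in [0, tfa_max], divided by
    tfa_max so the result lies in [0, 1]; lower is better."""
    pts = sorted(points)

    def pmiss_at(t):
        # Linear interpolation, clamped to the endpoint values.
        if t <= pts[0][0]:
            return pts[0][1]
        if t >= pts[-1][0]:
            return pts[-1][1]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= t <= x1:
                return y0 + (y1 - y0) * (t - x0) / (x1 - x0)

    # Trapezoidal rule on a fixed grid over [0, tfa_max].
    n = 200
    area = 0.0
    for i in range(n):
        t0 = tfa_max * i / n
        t1 = tfa_max * (i + 1) / n
        area += 0.5 * (pmiss_at(t0) + pmiss_at(t1)) * (t1 - t0)
    return area / tfa_max

# A system that misses everything at every threshold scores the worst value:
print(round(partial_audc([(0.0, 1.0), (0.2, 1.0)]), 3))  # 1.0
```

A DET curve falling linearly from Pmiss = 1.0 at TFA = 0 to Pmiss = 0.0 at TFA = 0.2 would score 0.5 under this sketch.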
Evaluation Plan
Task coordinator: ActEV NIST team
Register for ActEV 2020

TRECVID ActEV 2020 Evaluation Schedule
  • April 01, 2020: NIST releases the ActEV evaluation plan and defines the target activities
  • June 09, 2020: NIST releases the ActEV JSON training and validation sets and the index files
  • June 18, 2020: Start of the ActEV 2020 leaderboard evaluation
  • Friday, November 06, 2020, 1:00 PM EST: End of the ActEV 2020 leaderboard evaluation (updated deadline)
  • October 22, 2020: Workshop speaker proposals due
  • November 19, 2020: TRECVID notebook draft papers due
  • December 4, 2020: TRECVID virtual workshop registration
  • December 8-11, 2020: TRECVID virtual workshop (ActEV task results and team presentations)
TRECVID 2020 ActEV dataset
VIRAT Video Dataset
The VIRAT Video Dataset is designed to be more realistic, natural, and challenging for video surveillance domains than existing action recognition datasets in terms of its resolution, background clutter, diversity of scenes, and human activity/event categories. It has become a benchmark dataset for the computer vision community. Please download the videos from:
This Git repo is the data distribution mechanism for the ActEV evaluation. The repo presently consists of a collection of corpora (the plural of corpus) and partition definition files to be used for evaluations. Future additions will include source annotations and donated data/annotations. The repo contains textual data but not the large-sized corpora (videos, etc.).

  • Create a login account by registering (using the link above) for the TRECVID 2020 ActEV evaluation
  • During account registration, you will:
  • You will then be able to make submissions. If there is any issue, please email us at

Rules for Leaderboard Evaluation Schedule

During the TRECVID ActEV 2020 evaluation, you can create a maximum of four systems and submit a maximum of two results per day, and a maximum of 50 results in total, for the AD task.

Challenge participants can train their systems or tune parameters using any data complying with applicable laws and regulations. In the event that external limitations preclude sharing such data with others, participants are still permitted to use the data, but they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used and the limitations on its distribution.

Challenge participants agree not to probe the test videos via manual/human means, such as looking at the videos to produce the activity type and timing information, from before the evaluation period until the end of the leaderboard evaluation.

All machine learning or statistical analysis algorithms must complete training, model selection, and tuning prior to running on the test data. This rule does not preclude online learning/adaptation during test data processing so long as the adaptation information is not reused for subsequent runs of the evaluation collection.

The only VIRAT data that may be used by the systems are the ActEV provided training and validation sets, associated annotations, and any derivatives of those sets (e.g. additional annotations on those videos). All other VIRAT data and associated annotations may not be used by any of the systems for the ActEV Leaderboard Evaluation.

If you have any questions, please email: