ActEV 2021 Leaderboard
TRECVID21
Updated: 2021-10-25 11:02:47 -0400
RANK | SUBMISSION ID | SCORING REQUEST NAME | SCORING_REQUEST_ID | EVALUATION_NAME | SCORING_PROTOCOL | TEAM NAME | SYSTEM NAME | ?COLUMN? | PARTIAL AUDC* | PRIZE_ELIGIBLE | MEAN-P_MISS@0.15TFA | MEAN-W_P_MISS@0.15RFA |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 26562 | ActEV-2018_AD_TRECVID21_SYS-00359_INF_20211008-150355-5718.sr-20211008-150355-6139 | 36435 | ActEV-2018 | ActEV19_AD | INF | INF_PRE | "b1965d97b73d64216126e32418256cf83acc74d9--2021-10-19T10:19:29-04:00" | 0.39607 | no | 0.30622 | 0.81080 |
2 | 26542 | ActEV-2018_AD_TRECVID21_SYS-00306_BUPT-MCPRL_20211008-092501-2764.sr-20211008-092501-3175 | 36412 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S0 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.40853 | | 0.32489 | 0.79798 |
3 | 26539 | ActEV-2018_AD_TRECVID21_SYS-00295_BUPT-MCPRL_20211008-064624-6632.sr-20211008-064624-7016 | 36409 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S2 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.40947 | | 0.32919 | 0.79953 |
4 | 26540 | ActEV-2018_AD_TRECVID21_SYS-00301_BUPT-MCPRL_20211008-091912-7759.sr-20211008-091912-8121 | 36410 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S3 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.41305 | | 0.32866 | 0.80114 |
5 | 26546 | ActEV-2018_AD_TRECVID21_SYS-00303_UCF_20211008-094513-4996.sr-20211008-094513-5721 | 36416 | ActEV-2018 | ActEV19_AD | UCF | UCF - S1 | "b1965d97b73d64216126e32418256cf83acc74d9--2021-10-19T10:19:29-04:00" | 0.43059 | no | 0.34080 | 0.86431 |
6 | 26543 | ActEV-2018_AD_TRECVID21_SYS-00304_UCF_20211008-094125-6787.sr-20211008-094125-7281 | 36413 | ActEV-2018 | ActEV19_AD | UCF | UCF - S2 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.43271 | no | 0.34207 | 0.86376 |
7 | 26534 | ActEV-2018_AD_TRECVID21_SYS-00271_UCF_20211007-231425-5330.sr-20211007-231425-5842 | 36398 | ActEV-2018 | ActEV19_AD | UCF | UCF-P | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.43562 | no | 0.34466 | 0.85234 |
8 | 26532 | ActEV-2018_AD_TRECVID21_SYS-00354_INF_20211007-222635-0490.sr-20211007-222635-0824 | 36396 | ActEV-2018 | ActEV19_AD | INF | INF_full | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.44436 | no | 0.35079 | 0.84287 |
9 | 26388 | ActEV-2018_AD_TRECVID21_SYS-00353_INF_20210919-223604-8544.sr-20210919-223604-9267 | 35986 | ActEV-2018 | ActEV19_AD | INF | INF | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.45115 | no | 0.35161 | 0.84820 |
10 | 26544 | ActEV-2018_AD_TRECVID21_SYS-00305_UCF_20211008-094234-2932.sr-20211008-094234-3426 | 36414 | ActEV-2018 | ActEV19_AD | UCF | UCF - S3 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.45700 | no | 0.36994 | 0.86826 |
11 | 26467 | ActEV-2018_AD_TRECVID21_SYS-00401_M4D-2021_20210929-062532-5058.sr-20210929-062532-5417 | 36284 | ActEV-2018 | ActEV19_AD | M4D_2021 | M4D_2021_S1 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.84658 | | 0.79410 | 0.88521 |
12 | 26473 | ActEV-2018_AD_TRECVID21_SYS-00293_BUPT-MCPRL_20210930-094727-8768.sr-20210930-094727-9323 | 36301 | ActEV-2018 | ActEV19_AD | BUPT-MCPRL | MCPRL_S1 | "0e755c08a257a122eef62dd419103171431002bf--2021-09-14T11:26:46-04:00" | 0.84901 | | 0.82052 | 0.96236 |
13 | 26508 | ActEV-2018_AD_TRECVID21_SYS-00361_TokyoTech-AIST_20211006-125106-8001.sr-20211006-125106-8775 | 36363 | ActEV-2018 | ActEV19_AD | TokyoTech_AIST | TTA-baseline | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.85159 | | 0.81970 | 0.94897 |
14 | 26215 | ActEV-2018_AD_TRECVID21_SYS-00394_M4D-2021_20210831-051152-9620.sr-20210831-051153-0097 | 35537 | ActEV-2018 | ActEV19_AD | M4D_2021 | baseline | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.85484 | | 0.79732 | 0.87719 |
15 | 26530 | ActEV-2018_AD_TRECVID21_SYS-00404_Team-UEC_20211007-215130-8412.sr-20211007-215130-8794 | 36394 | ActEV-2018 | ActEV19_AD | Team UEC | UEC_1 | "5295b0a0c33c637da65e9927d344ca8bca0b4fab--2021-04-26T14:46:22-04:00" | 0.96405 | | 0.95035 | 0.95670 |
16 | 23863 | ActEV-2018_AD_TRECVID21_SYS-00257_NIST-TEST_20210312-112809-9379.sr-20210312-112809-9779 | 29738 | ActEV-2018 | ActEV19_AD | NIST-TEST | Test System I | "cca66d153e229d653d4d4e6e82467423ae880ea9--2021-02-26T21:10:52+00:00" | 1.00000 | no | 1.00000 | 1.00000 |
*Partial AUDC is the area under the DET curve between a
Time-based False Alarm rate of 0 and 0.2. The value for a
perfect system is 0.
Activities and Tasks
List of Activities and New Names
The table below summarizes the activity list and the new names
used for the TRECVID ActEV 2021 evaluation. The VIRAT Video dataset annotations that were created as part of the IARPA DIVA program are now available in a public repository here. The repository also contains the activity definitions used for the annotations.
The evaluation will be based on 35 of the activities listed below; we have dropped some activities with low instance counts. The CSV file mapping the new names to the ones listed in the DIVA-Annotation-Guidelines is here.
VIRAT Activity Name (Original) | VIRAT Activity Name 2020/2021 |
---|---|
Closing | person_closes_facility_or_vehicle_door |
Closing_Trunk | person_closes_trunk |
DropOff_Person_Vehicle | vehicle_drops_off_person |
Entering | person_enters_facility_or_vehicle |
Exiting | person_exits_facility_or_vehicle |
Interacts | person_interacts_object |
Loading | person_loads_vehicle |
Open_Trunk | person_opens_trunk |
Opening | person_opens_facility_or_vehicle_door |
Person_Person_Interaction | person_person_interaction |
PickUp | person_pickups_object |
PickUp_Person_Vehicle | vehicle_picks_up_person |
Pull | person_pulls_object |
Push | person_pushs_object |
Riding | person_rides_bicycle |
SetDown | person_sets_down_object |
Talking | person_talks_to_person |
Transport_HeavyCarry | person_carries_heavy_object |
Unloading | person_unloads_vehicle |
activity_carrying | person_carries_object |
activity_crouching | person_crouches |
activity_gesturing | person_gestures |
activity_running | person_runs |
activity_sitting | person_sits |
activity_standing | person_stands |
activity_walking | person_walks |
specialized_talking_phone | person_talks_on_phone |
specialized_texting_phone | person_texts_on_phone |
specialized_using_tool | person_uses_tool |
vehicle_moving | vehicle_moves |
vehicle_starting | vehicle_starts |
vehicle_stopping | vehicle_stops |
vehicle_turning_left | vehicle_turns_left |
vehicle_turning_right | vehicle_turns_right |
vehicle_u_turn | vehicle_makes_u_turn |
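For pipelines built on the original VIRAT annotations, the renaming can be applied with a simple lookup. The sketch below is illustrative only and includes just a few entries from the table above; the complete mapping lives in the CSV file mentioned earlier.

```python
# A few entries from the renaming table above; the full original->2020/2021
# mapping is distributed as a CSV file alongside the annotations.
NAME_MAP = {
    "Closing": "person_closes_facility_or_vehicle_door",
    "Open_Trunk": "person_opens_trunk",
    "activity_walking": "person_walks",
    "vehicle_u_turn": "vehicle_makes_u_turn",
}

def rename_activity(original: str) -> str:
    """Map an original VIRAT activity name to its 2020/2021 name,
    leaving names that are not in the mapping unchanged."""
    return NAME_MAP.get(original, original)
```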
Task for the TRECVID ActEV 2021 Evaluation
In the TRECVID ActEV 2021 evaluation, there is one Activity Detection (AD) task: detecting and temporally localizing
activities.
Activity Detection (AD)
For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes
all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of
activity must be correct and the temporal overlap must meet the minimal requirement
described in the Evaluation Plan.
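As a rough illustration of the temporal-overlap requirement (this is a sketch, not the official scorer; the actual alignment and congruence rules are defined in the Evaluation Plan), a one-second minimum-overlap check could look like this:

```python
def temporal_overlap(sys_start, sys_end, ref_start, ref_end):
    """Overlap in seconds between a system-detected interval and a
    reference interval; 0.0 if the intervals are disjoint."""
    return max(0.0, min(sys_end, ref_end) - max(sys_start, ref_start))

def is_correct(sys_interval, ref_interval, min_overlap=1.0):
    """Illustrative correctness test: the detection counts as correct
    when it overlaps the reference by at least `min_overlap` seconds.
    The real matching procedure in the Evaluation Plan is richer."""
    return temporal_overlap(*sys_interval, *ref_interval) >= min_overlap
```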
TRECVID ActEV 2021 Evaluation
Updates
Feb 07, 2021: NIST releases ActEV JSON training and validation sets and the Index files (same as for the 2020 eval)
March 21, 2021 (updated date): Start of ActEV 2021 Leaderboard Evaluation
October 08, 2021, 1:00 PM EST (updated from October 01, 2021, 1:00 PM EST): End of ActEV 2021 Leaderboard Evaluation
The leaderboard will soon be set up with the new eval partition. We will inform you when it is available.
Also, the original KPF files have been added to the ‘annotations’ directory for the train and validate partitions.
Summary
ActEV is a series of evaluations to accelerate development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. Each evaluation challenges systems with new data, system requirements, and/or new activities. For more details about the previous evaluations and example videos, see the
main ActEV page and click on the example videos tab.
What is Activity Detection in Videos?
An ActEV activity is defined as “one or more people performing a specified movement or interacting with an object or group of objects”. Activity detection technologies process extended video streams, such as those from an IP camera, and automatically detect all instances of an activity by: (1) identifying the type of activity, (2) producing a confidence score indicating the presence of the instance, (3) temporally localizing the instance by indicating its begin and end times, and (4) optionally, detecting and tracking the objects (people, vehicles, other objects) involved in the activity.
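The four outputs described above can be pictured as one record per detected instance. The field names below are purely illustrative, not the official ActEV JSON submission schema:

```python
from dataclasses import dataclass

@dataclass
class DetectedInstance:
    # Illustrative field names only -- submissions to the evaluation
    # server use the official ActEV JSON format, not this class.
    activity: str       # (1) type of activity, e.g. "person_opens_trunk"
    confidence: float   # (2) presence-confidence score
    start_sec: float    # (3) temporal localization: begin time
    end_sec: float      # (3) temporal localization: end time

# One hypothetical detection from a video stream:
det = DetectedInstance("person_opens_trunk", 0.87, 12.4, 19.0)
```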
What
The ActEV evaluation is being conducted to
assess the robustness of automatic activity detection for a
multi-camera streaming video environment.
Who
NIST invites all organizations, particularly
universities and corporations, to submit results from their
technologies to the ActEV evaluation server. The evaluation is open
worldwide. Participation is free. NIST does not provide funds to
participants.
How
To take part in the ActEV evaluation, you
need to register on the actev.nist.gov website and acknowledge that you have read and accepted the data license in order to download the data.
Evaluation Task
In the TRECVID ActEV 2021 evaluation, there is one Activity Detection (AD) task; it is described in the Activities and Tasks section above.
Data
The TRECVID ActEV 2021 evaluation is based only on the VIRAT V1 and V2 datasets. The evaluation will use 35 of the activities listed in the
activities tab; the names have been updated for the evaluation. The data is provided in MPEG-4 format. You can download the public video dataset for free at
viratdata.org; more info about the datasets is on the
data tab
Metrics
Submitted activity detection systems must give a confidence score for each activity they detect. Detected activities are then thresholded on this confidence score. Varying the threshold trades off being sensitive enough to identify true activity instances (low threshold) against not raising false alarms when no activity is present (high threshold). Submitted systems are scored on both, measured by Probability of Missed Detection (Pmiss) and Time-based False Alarm (TFA). Pmiss is the portion of activities for which the system did not detect the activity for at least 1 second. TFA is the portion of time during which the system detected an activity when in fact there was none. Submitted systems are scored for Pmiss and TFA at multiple thresholds, creating a detection error tradeoff (DET) curve. The leaderboard ranking of a system is based on a summary of its DET curve: the area under the DET curve across the TFA range from 0% to 20%, divided by 0.2 to normalize the value to [0, 1]. Lower numbers are better, as they reflect fewer errors.
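A back-of-the-envelope version of that summary statistic (a sketch, not the official NIST scoring software), assuming the DET curve is given as (TFA, Pmiss) points sorted by increasing TFA, could be:

```python
def partial_audc(det_points, tfa_limit=0.2):
    """Normalized partial area under a DET curve.

    det_points: list of (tfa, p_miss) pairs sorted by increasing TFA.
    Integrates P_miss over TFA in [0, tfa_limit] with the trapezoid
    rule, then divides by tfa_limit so a perfect system scores 0.0
    and a system that misses everything scores 1.0.  (Edge handling
    at the TFA limit is simplified relative to a real scorer.)
    """
    pts = [(t, p) for t, p in det_points if t <= tfa_limit]
    area = 0.0
    for (t0, p0), (t1, p1) in zip(pts, pts[1:]):
        area += 0.5 * (p0 + p1) * (t1 - t0)
    return area / tfa_limit

# A system that misses every instance at every threshold:
worst = partial_audc([(0.0, 1.0), (0.2, 1.0)])
```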
Evaluation Plan
TRECVID
Task coordinator
ActEV NIST team (ActEV-nist@nist.gov)
News
March 21
Register for ActEV 2021
March 21
ActEV 2021 Evaluation starts.
Oct 08
ActEV 2021 Evaluation ends.
TRECVID ActEV 2021 Evaluation
Schedule
Feb 07, 2021: NIST releases the ActEV evaluation plan and defines the target activities (same as for the 2020 eval)
Feb 07, 2021: NIST releases ActEV JSON training and validation sets and the Index files (same as for the 2020 eval)
Updated: March 21, 2021: Start of ActEV 2021 Leaderboard Evaluation
October 08, 2021, 1:00 PM EST (updated from October 01, 2021, 1:00 PM EST): End of ActEV 2021 Leaderboard Evaluation
Due October 15, 2021: Speaker proposals submission
Due November 15, 2021: TRECVID notebook draft paper
Due December 1, 2021: TRECVID workshop registration
December 7-10, 2021: TRECVID virtual workshop
TRECVID 2021 ActEV dataset
VIRAT Video Dataset
The VIRAT Video Dataset is designed to be more realistic, natural, and challenging than existing action recognition datasets for the video surveillance domain, in terms of its resolution, background clutter, diversity in scenes, and human activity/event categories. It has become a benchmark dataset for the computer vision community. Please download the videos from
viratdata.org
This Git repo is the data distribution mechanism for the
ActEV evaluation. The repo presently consists of a
collection of corpora and partition
definition files to be used for evaluations. Future
additions will include source annotations and donated
data/annotations. The repo contains textual data but not
the large-sized corpora (videos, etc.).
- Create a login account by registering (using the link above) for the TRECVID 2021 ActEV evaluation
- During account registration, you will:
- You will then be able to make submissions. If there is any issue, please email us at actev.nist@nist.gov
Rules for Leaderboard Evaluation
During the TRECVID ActEV 2021 evaluation,
you can create a maximum of four systems and submit a
maximum of two results per day and a maximum of 50 results
in total for the AD task.
Challenge participants can train their systems
or tune parameters using any data complying with applicable laws
and regulations. In the event that external limitations preclude
sharing such data with others, participants are still permitted to
use the data, but they must inform NIST that they are using such
data, and provide appropriate detail regarding the type of data
used and the limitations on its distribution.
Challenge participants agree not to probe the
test videos via manual/human means, such as looking at the videos to
produce the activity type and timing information, from before the
evaluation period until the end of the leaderboard evaluation.
All machine learning or statistical analysis
algorithms must complete training, model selection, and tuning
prior to running on the test data. This rule does not preclude
online learning/adaptation during test data processing so long as
the adaptation information is not reused for subsequent runs of the
evaluation collection.
The only VIRAT data that may be used by the
systems are the ActEV provided training and validation sets,
associated annotations, and any derivatives of those sets (e.g.
additional annotations on those videos). All other VIRAT data and
associated annotations may not be used by any of the systems for
the ActEV Leaderboard Evaluation.