TRECVID 2020 ActEV: Activities in Extended Video


ActEV 2020 Leaderboard



Updated: 2020-07-15 11:35:24 -0400
RANK: 1
SCORING REQUEST NAME: ActEV-2018_AD_TRECVID20_SYS-00257_NIST-TEST_20200715-110224-9653.sr-20200715-110225-0111
SCORING_REQUEST_ID: 23529
EVALUATION_NAME: ActEV-2018
SCORING_PROTOCOL: ActEV19_AD
TEAM NAME: NIST-TEST
SYSTEM NAME: Test System I
?COLUMN?: "63c92f4cf9969be92ffa042fd4a6540c499ca3a9--2020-06-17T12:10:02-04:00"
PARTIAL AUDC*: 0.75007
PRIZE_ELIGIBLE: no
MEAN-P_MISS@0.15TFA: 0.71464
MEAN-W_P_MISS@0.15RFA: 0.90569

*Partial AUDC is the area under the DET curve between a Time-based False Alarm rate of 0 and 0.2; a perfect system has a value of 0.

Contact Us

For ActEV 2020 Evaluation information (data, evaluation code, etc.) please email: actev-nist@nist.gov

For ActEV 2020 Evaluation discussion, please visit our Google Group: https://groups.google.com/a/list.nist.gov/forum/#!forum/trecvid.actev

Activities and Tasks

List of Activities and New Names


The table below provides a summary of the activity list and the new names for the TRECVID ActEV 2020 evaluation. The VIRAT Video dataset annotations that were created as part of the IARPA DIVA program are now available in a public repository here. The repository also contains the activity definitions used for the annotations.
The evaluation will be based on 35 activities from those listed below; we have dropped some of the activities with low counts. The CSV file mapping the new names to the ones listed in the DIVA-Annotation-Guidelines is here.


VIRAT Activity Name (Original) → VIRAT Activity Name 2020

Closing → person_closes_facility_or_vehicle_door
Closing_Trunk → person_closes_trunk
DropOff_Person_Vehicle → vehicle_drops_off_person
Entering → person_enters_facility_or_vehicle
Exiting → person_exits_facility_or_vehicle
Interacts → person_interacts_object
Loading → person_loads_vehicle
Open_Trunk → person_opens_trunk
Opening → person_opens_facility_or_vehicle_door
Person_Person_Interaction → person_person_interaction
PickUp → person_pickups_object
PickUp_Person_Vehicle → vehicle_picks_up_person
Pull → person_pulls_object
Push → person_pushs_object
Riding → person_rides_bicycle
SetDown → person_sets_down_object
Talking → person_talks_to_person
Transport_HeavyCarry → person_carries_heavy_object
Unloading → person_unloads_vehicle
activity_carrying → person_carries_object
activity_crouching → person_crouches
activity_gesturing → person_gestures
activity_running → person_runs
activity_sitting → person_sits
activity_standing → person_stands
activity_walking → person_walks
specialized_talking_phone → person_talks_on_phone
specialized_texting_phone → person_texts_on_phone
specialized_using_tool → person_uses_tool
vehicle_moving → vehicle_moves
vehicle_starting → vehicle_starts
vehicle_stopping → vehicle_stops
vehicle_turning_left → vehicle_turns_left
vehicle_turning_right → vehicle_turns_right
vehicle_u_turn → vehicle_makes_u_turn
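The renaming above is mechanical enough to apply in code. A minimal sketch, using a small excerpt of the mapping (values taken directly from the table; load the official CSV for the full 35-activity map):

```python
# Excerpt of the 2020 renaming shown in the table above; extend with the
# remaining rows (or load the official mapping CSV) for full coverage.
VIRAT_TO_2020 = {
    "Closing": "person_closes_facility_or_vehicle_door",
    "Closing_Trunk": "person_closes_trunk",
    "Open_Trunk": "person_opens_trunk",
    "activity_walking": "person_walks",
    "vehicle_u_turn": "vehicle_makes_u_turn",
}

def rename_activity(old_name):
    """Map an original VIRAT activity name to its 2020 name.

    Names not in the excerpt are passed through unchanged.
    """
    return VIRAT_TO_2020.get(old_name, old_name)
```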
Task for the TRECVID ActEV 2020 Evaluation
In the TRECVID ActEV 2020 evaluation, there is one task: Activity Detection (AD), which requires detecting and temporally localizing activities.
Activity Detection (AD)

For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the temporal overlap must fall within a minimal requirement as described in the Evaluation Plan.
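The correctness test above can be sketched in code. This is an illustrative sketch only: the tuple layout and the 1-second minimum overlap are assumptions for the example, and the authoritative matching logic is defined in the Evaluation Plan.

```python
# Sketch of the AD correctness test: a system instance can match a
# reference instance only if the activity types agree and the temporal
# overlap meets a minimum duration. The 1-second threshold and the
# (type, start_sec, end_sec) tuple layout are illustrative assumptions,
# not the official NIST matching procedure.

def temporal_overlap(a_start, a_end, b_start, b_end):
    """Overlap in seconds between two [start, end] intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def is_candidate_match(ref, sys, min_overlap=1.0):
    """ref and sys are (activity_type, start_sec, end_sec) tuples."""
    return (ref[0] == sys[0]
            and temporal_overlap(ref[1], ref[2], sys[1], sys[2]) >= min_overlap)
```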
TRECVID ActEV 2020 Evaluation

Updates
New TRECVID ActEV deadline: October 07, 2020, 1:00 PM EST. We will keep the leaderboard running until December 15, 2020.
June 09, 2020: We’ve made new partitions for the TRECVID 2020 ActEV data sets and added them to the actev-data-repo.
To get the updates, do a ‘git pull’. There are three new partitions:
  • partitions/ActEV20-TRECVID-eval-20200604
  • partitions/ActEV20-TRECVID-train-20200604
  • partitions/ActEV20-TRECVID-validate-20200604
The leaderboard will soon be set up with the new eval partition. We will inform you when it is available.
Also, the original KPF files have been added to the ‘annotations’ directory for the train and validate partitions.
Summary
ActEV is a series of evaluations to accelerate development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. Each evaluation challenges systems with new data, system requirements, and/or new activities. For more details about the previous evaluations and example videos, see the main ActEV page and click on the example videos tab.
What is Activity Detection in Videos?
An ActEV activity is defined as “one or more people performing a specified movement or interacting with an object or group of objects”. Activity detection technologies process extended video streams, such as those from an IP camera, and automatically detect all instances of an activity by: (1) identifying the type of activity, (2) producing a confidence score indicating the presence of the instance, (3) temporally localizing the instance by indicating the begin and end times, and (4) optionally, detecting and tracking the objects (people, vehicles, other objects) involved in the activity.
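The four outputs above can be illustrated with a minimal detected-activity record. The field names and values below are an illustrative sketch loosely following the shape of an ActEV JSON submission, not the authoritative schema:

```python
# Illustrative sketch of one detected-activity record covering the four
# outputs listed above; field names, file name, and frame numbers are
# examples, not the official ActEV submission schema.
detection = {
    "activity": "person_opens_trunk",          # (1) activity type
    "presenceConf": 0.87,                      # (2) confidence score
    "localization": {                          # (3) begin/end times,
        "VIRAT_S_000001.mp4": {"2010": 1,      #     here as frame numbers
                               "2115": 0},     #     (1 = start, 0 = end)
    },
    # (4) optional object tracks (people, vehicles) would be listed here
}
```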
What
The ActEV evaluation is being conducted to assess the robustness of automatic activity detection for a multi-camera streaming video environment.
Who
NIST invites all organizations, particularly universities and corporations, to submit their results using their technologies to the ActEV evaluation server. The evaluation is open worldwide. Participation is free. NIST does not provide funds to participants.
How
To take part in the ActEV evaluation you need to register on the actev.nist.gov website and acknowledge that you have read and accepted the data license to download the data.
Data
The TRECVID ActEV 2020 evaluation is based only on the VIRAT V1 and V2 datasets.

The evaluation will be based on 35 activities from those listed in the activities tab; the names have been updated for the evaluation. The data is provided in MPEG-4 format. You can download the public video dataset for free at viratdata.org; more info about the datasets is on the data tab.
Metrics
Submitted activity detection systems must give a confidence score for each activity they detect. Detected activities are then thresholded based on this confidence score. Varying the threshold makes a trade-off between being sensitive enough to identify true activity instances (low threshold) vs. not making false alarms when no activity is present (high threshold). Submitted systems are scored on both of these, measured by Probability of Missed Detection (Pmiss) and Time-based False Alarm (TFA). Pmiss is the portion of activities where the system did not detect the activity for at least 1 second. TFA is the portion of time that the system detected an activity when in fact there was none. Submitted systems are scored for Pmiss and TFA at multiple thresholds, creating a detect error tradeoff (DET) curve. The leaderboard ranking of a system is based on a summary of its DET curve: the area under the DET Curve across the TFA range between 0% to 20% divided by 0.2 to normalize the value to [0:1]. Lower numbers are better, as they reflect fewer errors.
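The normalized partial AUDC described above can be sketched with a simple trapezoid rule. The point format and linear interpolation below are illustrative assumptions; the official NIST scorer, not this sketch, is authoritative:

```python
# Sketch of the leaderboard summary metric: area under the DET curve
# (Pmiss vs. TFA) for TFA in [0, 0.2], divided by 0.2 to normalize to
# [0, 1]. Assumes the curve is given as (tfa, pmiss) points starting at
# TFA = 0; linear interpolation at the limit is an assumption.

def normalized_partial_audc(det_points, tfa_limit=0.2):
    """Trapezoid-rule partial AUDC, normalized by tfa_limit.

    A system that always misses (Pmiss = 1 everywhere) scores 1.0;
    a perfect system scores 0.0. Lower is better.
    """
    pts = sorted(det_points)  # ascending TFA
    area = 0.0
    prev_t, prev_p = pts[0]
    for t, p in pts[1:]:
        if t >= tfa_limit:
            # Interpolate Pmiss at the TFA limit, add the last strip, stop.
            frac = (tfa_limit - prev_t) / (t - prev_t)
            p = prev_p + frac * (p - prev_p)
            area += 0.5 * (prev_p + p) * (tfa_limit - prev_t)
            break
        area += 0.5 * (prev_p + p) * (t - prev_t)
        prev_t, prev_p = t, p
    return area / tfa_limit
```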
Evaluation Plan
TRECVID
Task coordinator
ActEV NIST team (ActEV-nist@nist.gov)
News
  • April 01: Register for ActEV 2020
  • June 15: ActEV 2020 Evaluation starts
  • October 07: ActEV 2020 Evaluation ends

TRECVID ActEV 2020 Evaluation Schedule
April 01, 2020: NIST releases the ActEV evaluation plan and defines the target activities

June 09, 2020: NIST releases the ActEV JSON training and validation sets and the index files

June 18, 2020: Start of ActEV 2020 Leaderboard Evaluation

October 07, 2020: 1:00 PM EST : End of ActEV 2020 Leaderboard Evaluation (Updated Deadline)

--------------------------------------------------------------
October 22, 2020: Workshop speaker proposals due

November 19, 2020: TRECVID notebook draft paper due

December 4, 2020: Registration for the TRECVID virtual workshop

December 8-10, 2020: TRECVID virtual workshop (ActEV task results and team presentations)
TRECVID 2020 ActEV dataset
VIRAT Video Dataset
The VIRAT Video Dataset is designed to be more realistic, natural, and challenging for video surveillance domains than existing action recognition datasets in terms of its resolution, background clutter, diversity in scenes, and human activity/event categories. It has become a benchmark dataset for the computer vision community. Please download the videos from viratdata.org.
This Git repo is the data distribution mechanism for the ActEV evaluation. The repo presently consists of a collection of corpora and partition definition files to be used for evaluations. Future additions will include source annotations and donated data/annotations. The repo contains textual data but not the large-sized corpora (videos, etc.).

  • Create a login account by registering (using the link above) for the TRECVID 2020 ActEV
  • During account registration, you will:
  • You will then be able to make submissions. If there is any issue, please email us at actev.nist@nist.gov


Rules for Leaderboard Evaluation Schedule

During the TRECVID ActEV 2020 evaluation, you can create a maximum of four systems and submit a maximum of two results per day and a maximum of 50 results in total for the AD task.

Challenge participants can train their systems or tune parameters using any data that complies with applicable laws and regulations. In the event that external limitations preclude sharing such data with others, participants are still permitted to use the data, but they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used and the limitations on its distribution.

Challenge participants agree not to probe the test videos by manual/human means, such as watching the videos to produce activity type and timing information, from before the evaluation period through the end of the leaderboard evaluation.

All machine learning or statistical analysis algorithms must complete training, model selection, and tuning prior to running on the test data. This rule does not preclude online learning/adaptation during test data processing so long as the adaptation information is not reused for subsequent runs of the evaluation collection.

The only VIRAT data that may be used by the systems are the ActEV provided training and validation sets, associated annotations, and any derivatives of those sets (e.g. additional annotations on those videos). All other VIRAT data and associated annotations may not be used by any of the systems for the ActEV Leaderboard Evaluation.




If you have any questions, please email actev-nist@nist.gov