TRECVID 2018 ActEV: Activities in Extended Video 1B Eval
August 27, 2018 - December 04, 2018


There was no 1-B Leaderboard.
Contact Us

For ActEV 1B Evaluation information (data, evaluation code, etc.), please email: actev-nist@nist.gov

For ActEV 1B Evaluation discussion, please visit our Google Group: https://groups.google.com/a/list.nist.gov/forum/#!forum/trecvid.actev

Activities and Tasks

List of Activities per Task


The table below summarizes the activity list and the required objects for the ActEV evaluations. Twelve target activities are used in the 1.A evaluation, while nineteen target activities are used in both the ActEV Leaderboard and ActEV 1.B evaluations. The target objects for the ActEV evaluations (1.A, Leaderboard, and 1.B) are P = {Person} and V = {Construction_Vehicle, Vehicle}.


ActEV 1A Eval (12 activities):
Closing (P, V) or (P)
Closing_trunk (P, V)
Entering (P, V) or (P)
Exiting (P, V) or (P)
Loading (P, V)
Open_Trunk (P, V)
Opening (P, V) or (P)
Transport_HeavyCarry (P, V)
Unloading (P, V)
Vehicle_turning_left (V)
Vehicle_turning_right (V)
Vehicle_u_turn (V)

ActEV Leaderboard Eval and ActEV 1B Eval (19 activities):
Closing (P, V) or (P)
Closing_trunk (P, V)
Entering (P, V) or (P)
Exiting (P, V) or (P)
Loading (P, V)
Open_Trunk (P, V)
Opening (P, V) or (P)
Transport_HeavyCarry (P, V)
Unloading (P, V)
Vehicle_turning_left (V)
Vehicle_turning_right (V)
Vehicle_u_turn (V)
Interacts (P)
Pull (P)
Riding (P)
Talking (P)
activity_carrying (P)
specialized_talking_phone (P)
specialized_texting_phone (P)



Different Tasks for the ActEV Evaluation
For the 1.A and Leaderboard evaluations, we evaluated the following two tasks: 1) Activity Detection (AD) and 2) Activity and Object Detection (AOD). Each task can be completed independently. For the TRECVID 2018 ActEV 1.B evaluation, we add a third task: Activity and Object Detection and Tracking (AODT).
Activity Detection (AD)

For the Activity Detection task, given a target activity, a system automatically detects and temporally localizes all instances of the activity. For a system-identified activity instance to be evaluated as correct, the type of activity must be correct and the temporal overlap must meet a minimum requirement as described in the Evaluation Plan.
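To make the temporal criterion concrete, here is a minimal sketch of a temporal intersection-over-union computation between a system instance and a reference instance. It is an illustration only, not the official scorer; the actual alignment procedure and minimum overlap threshold are defined in the Evaluation Plan.

```python
def temporal_iou(sys_span, ref_span):
    """Temporal intersection-over-union of two (start_frame, end_frame) spans."""
    inter = max(0, min(sys_span[1], ref_span[1]) - max(sys_span[0], ref_span[0]))
    union = (sys_span[1] - sys_span[0]) + (ref_span[1] - ref_span[0]) - inter
    return inter / union if union > 0 else 0.0

# A system instance can only count as a correct detection if its overlap
# with a reference instance meets the minimum set in the Evaluation Plan.
print(temporal_iou((100, 250), (120, 300)))  # 0.65
```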
Activity and Object Detection (AOD)

For the Activity and Object Detection task, given a target activity, a system detects and temporally localizes all instances of the activity and spatially detects/localizes the people and/or objects associated with the target activity. For a system-identified instance to be scored as correct, it must meet the temporal overlap criteria for the AD task and, in addition, meet the spatial overlap criteria for the identified objects during the activity instance, as described in the Evaluation Plan.
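As an illustration of the added spatial requirement, the sketch below computes the bounding-box overlap between a system-detected object and a reference object. The (x, y, width, height) box convention is an assumption here; the official object-matching procedure is specified in the Evaluation Plan.

```python
def bbox_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# For AOD, a detected instance must meet the AD temporal criterion and its
# objects must additionally meet a spatial overlap threshold like this one.
print(bbox_iou((0, 0, 100, 100), (50, 50, 100, 100)))  # ~0.143
```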
Activity and Object Detection and Tracking (AODT) (for the 1.B evaluation)

For the Activity and Object Detection and Tracking task, given a target activity, a system detects and temporally localizes all instances of the activity, spatio-temporally detects/localizes the people and/or objects associated with the target activity, and properly assigns IDs to the objects according to the roles they play in the activity. For a system-identified instance to be scored as correct, it must meet the temporal overlap criteria and the spatio-temporal object overlap criteria from the AOD task, and must correctly assign IDs to the objects as prescribed in the activity definition, as described in the Evaluation Plan.
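The ID-assignment requirement can be pictured as a one-to-one matching between system track IDs and reference track IDs. The sketch below uses a simple greedy match by overlap score; the official alignment is defined in the Evaluation Plan, so treat this purely as an illustration.

```python
def greedy_id_match(overlaps):
    """Greedily pair system track IDs with reference track IDs by overlap.

    overlaps: dict mapping (sys_id, ref_id) -> overlap score in [0, 1].
    Returns a dict of sys_id -> ref_id assignments.
    """
    assigned = {}
    used_refs = set()
    for (sys_id, ref_id), score in sorted(
            overlaps.items(), key=lambda kv: kv[1], reverse=True):
        if sys_id not in assigned and ref_id not in used_refs and score > 0:
            assigned[sys_id] = ref_id
            used_refs.add(ref_id)
    return assigned

# E.g., system track 1 best matches reference person P1, track 2 matches V1.
print(greedy_id_match({(1, "P1"): 0.8, (2, "V1"): 0.7, (1, "V1"): 0.3}))
```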

TRECVID ActEV 2018 1B Evaluation

Summary
The ActEV evaluation sought to assess robust automatic activity detection algorithms for a multi-camera streaming video environment. ActEV is an extension of the annual TRECVID Surveillance Event Detection (SED) evaluation, adding a large collection of multi-camera video data. Applications of interest for activity detection include public safety and traffic monitoring and management. ActEV addressed activity detection for both forensic applications and real-time alerting. The ActEV evaluation was initially run as a blind evaluation and is currently open as a leaderboard evaluation. Participants may join the leaderboard evaluation independent of participation in the blind evaluation.
What is Activity Detection in Videos?
By activity detection, we mean detecting visual events (people engaged in particular activities) in a large collection of streaming video data.
What
The ActEV evaluation is being conducted to assess the robustness of automatic activity detection for a multi-camera streaming video environment.
Who
NIST invites all organizations, particularly universities and corporations, to submit their results using their technologies to the ActEV evaluation server. The evaluation is open worldwide. Participation is free. NIST does not provide funds to participants.
How
To take part in the ActEV evaluation, you need to email us and register on the actev.nist.gov website. We then send you information on how to download the data and how to create an account on the ActEV scoring server (see the website for details). Using a valid registered account, you can upload your JSON-format results (see the Evaluation Plan on the website) to the ActEV scoring server and participate in the evaluation. The ActEV evaluation is currently running as a leaderboard evaluation.
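As context for what an upload looks like, here is a hypothetical sketch of building a JSON result file in Python. The field names ("filesProcessed", "activities", "presenceConf", "localization") and the frame-keyed start/stop encoding are illustrative assumptions; the authoritative schema is the JSON Data Format Specification and the Evaluation Plan on the website.

```python
import json

# Illustrative only: field names and structure below are assumptions, not the
# official schema. Consult the Evaluation Plan on actev.nist.gov before use.
submission = {
    "filesProcessed": ["VIRAT_example.mp4"],
    "activities": [
        {
            "activity": "Closing",
            "presenceConf": 0.87,  # system confidence the activity is present
            "localization": {
                "VIRAT_example.mp4": {"1230": 1, "1360": 0}  # start/end frames
            },
        }
    ],
}

with open("system_output.json", "w") as f:
    json.dump(submission, f, indent=2)
```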
Why
The primary driver of the evaluation is to support public safety and traffic monitoring and management by automatic activity detection in streaming video.
Data
The ActEV data used for this evaluation is an unreleased portion of the VIRAT dataset, provided in MPEG-4 format. (Please send an email to receive information on data download, the Licensing and Participation Agreements, and the Evaluation Plan.)
Metrics
The main scoring metrics were based on detection, temporal localization, and spatio-temporal localization. We used standard evaluation measures (e.g., probability of missed detection and rate of false alarm); see the Evaluation Plan.
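As a rough sketch of those measures at a fixed decision threshold (the official definitions and the detection error tradeoff analysis are in the Evaluation Plan):

```python
def p_miss(num_missed, num_reference_instances):
    """Probability of missed detection: fraction of reference instances missed."""
    return num_missed / num_reference_instances

def rate_fa(num_false_alarms, video_duration_minutes):
    """Rate of false alarms per minute of video."""
    return num_false_alarms / video_duration_minutes

# E.g., 20 of 100 reference instances missed, 15 false alarms over 60 minutes:
print(p_miss(20, 100))    # 0.2
print(rate_fa(15, 60.0))  # 0.25 false alarms per minute
```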
Evaluation Plan
Form
Please send an email to receive information on how to download the Video Dataset.
Task coordinator
ActEV NIST team (actev-nist@nist.gov)

Tentative DIVA 1.B Self-Reported Evaluation Schedule

The blind ActEV and Leaderboard evaluations are over.

August 27, 2018: NIST releases 1.B Evaluation Plan Over

August 27, 2018: NIST releases sample Training and Validation sets (same as the Leaderboard evaluation) Over

September 20, 2018: NIST releases Scoring Software 1.B (AD, AOD, AODT) Over

September 20, 2018: NIST posts JSON Data Format Specification for 1.B Over

October 15, 2018: NIST Dry Run for 1.B Evaluation begins on Scoring Server (1.B-Dryrun) Over

November 5, 2018: NIST Dry Run ends Over

November 12, 2018: NIST Evaluation Data Unlocked Over

November 12, 2018: NIST starts 1.B Evaluation on Scoring Server Over

December 04, 2018, 11 AM EST: Performers submit AD, AOD, and AODT self-reported results (1.B-Eval) Over

Data is no longer released for this evaluation. Please go to the "Current Evaluations" Tab on the Main ActEV Page.
Rules for the Leaderboard Evaluation

For TRECVID ActEV 2018, you can submit a maximum of 2 results per day and a maximum of 50 results in total for AD and AOD.

Challenge participants can train their systems or tune parameters using any data complying with applicable laws and regulations. In the event that external limitations preclude sharing such data with others, participants are still permitted to use the data, but they must inform NIST that they are using such data and provide appropriate detail regarding the type of data used and the limitations on its distribution.

Challenge participants agree not to probe the test videos via manual/human means, such as viewing the videos to produce activity type and timing information, from before the evaluation period until the end of the leaderboard evaluation.

All machine learning or statistical analysis algorithms must complete training, model selection, and tuning prior to running on the test data. This rule does not preclude online learning/adaptation during test data processing so long as the adaptation information is not reused for subsequent runs of the evaluation collection.

The only VIRAT data that may be used by the systems are the ActEV-provided training and validation sets, associated annotations, and any derivatives of those sets (e.g., additional annotations on those videos). No other VIRAT data or associated annotations may be used by any system in the ActEV Leaderboard Evaluation.




If you have any questions, please email actev-nist@nist.gov.