
DIU’s xVIEW2 - Assessing Building Damage

Computer Vision for Building Damage Assessment - Automate damage assessment to accelerate recovery from natural disasters

Department of Defense

Total Cash Prizes Offered: $150,000
Type of Challenge: Technology demonstration and hardware; Analytics, visualizations and algorithms; Scientific
Partner Agencies | Federal: FEMA, USGS, NASA, NGA, Joint AI Center (DoD), National Security Innovation Network (DOD), Carnegie Mellon University’s Software Engineering Institute (FFRDC)
Partner Agencies | Non-federal: California Governor’s Office of Emergency Services, CAL Fire, Cal Guard, MAXAR/Digital Globe
Submission Start: 09/18/2019 08:00 AM ET
Submission End: 11/22/2019 07:59 AM ET

Description

xVIEW2

When a disaster strikes, quick and accurate situational information is critical to an effective response. Before responders can act in the affected area, they need to know the location, cause, and severity of damage. But disasters can strike anywhere, and by disrupting local communication and transportation infrastructure they make the process of assessing specific local damage difficult, dangerous, and slow.

Raw Imagery is Not Enough


Satellite imagery can provide unbiased overhead views, but raw imagery is not enough to inform recovery efforts. High-resolution imagery is required to see specific damage conditions, but because disasters cover a large ground area, analysts must search through huge swaths of pixel space to localize and score damage in the area of interest. Then annotated imagery must be summarized and communicated to the recovery team. It is a slow and laborious process.

Solving a Common Problem

Recognizing an opportunity to solve a key analytical bottleneck, the Defense Innovation Unit, together with other Humanitarian Assistance and Disaster Recovery (HADR) organizations, is releasing a new labeled, high-resolution satellite dataset and a challenge to the computer vision community.

Prizes

Total Prize Pool

$150,000

Submission Tracks

The pace of innovation in both computer vision and satellite imagery has been increasing rapidly. Much of that innovation is happening at universities, at startups, and in the commercial sector. To maximize the community of solvers who are able to participate in the Challenge, we have implemented three separate submission tracks. Solvers may choose to participate in whichever track best suits their needs:

Track 1: Open Source

If you choose to participate in Track 1, you agree to release your code under one of the approved permissive open source licenses: Apache 2.0, BSD, LGPL, or MIT. Participants in this track are eligible for both the main award pool and additional incentives reserved for Open Source submissions only.

Additional award: $25,000

Track 2: Government Purpose Rights

If you choose to participate in Track 2, you agree to grant non-exclusive Government Purpose Rights for your solution, and will be eligible for the main award pool.

1st - $37,950; 2nd - $28,750; 3rd - $23,000; 4th - $17,000; 5th - $8,300

Track 3: Evaluation Only

If you choose to participate in Track 3, you grant us only the minimal rights required to evaluate your submission and display results on the Challenge leaderboard. Track 3 participants are eligible for a special award pool, which is smaller than the main award pool.

1st - $3,000; 2nd - $2,500; 3rd - $2,000; 4th - $1,500; 5th - $1,000

All tracks are eligible for follow-on acquisition opportunities. For more detail on submission tracks and requirements, see the official rules and terms linked below.

Rules

View the official rules for this challenge.

View the terms for this challenge.

Judging Criteria

Buildings around the world are as diverse as the conditions they face. The xBD dataset includes pre- and post-disaster imagery covering six disaster types across fifteen countries.

Goal:

The Challenge requires solvers to provide a computational solution that accurately localizes buildings in overhead imagery and scores the severity of building damage.

Ranking Solvers:

(A) Highest Score First. Solvers will be ranked in descending order of score, from highest to lowest. Further detail on the scoring methodology is available on the Challenge and Rules pages on this site.

(B) Ties Ranked by Submission Time. In the event of a tie between participants, the one with the earlier submission time will be ranked higher. DIU reserves the right to break ties when necessary based on finer time precision than is displayed on the public leaderboard.

Inputs

Given a pair of pre/post images, your model must localize buildings and classify the severity of damage to each. Input images are square RGB PNG files with a height and width of 1024 pixels. Pre/post pairs are identified by matching numerical IDs in the pre and post filenames. For the training data, filenames also include information about the disaster; disaster information is obfuscated in the test dataset.
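
As a rough illustration of the pairing step, the Python sketch below groups test images into pre/post pairs by their shared numerical ID. The filename pattern it parses (a "pre" or "post" token followed by the ID) is an assumption for illustration, not the official naming specification.

    import os
    import re
    from collections import defaultdict

    def pair_images(image_dir):
        """Group PNG inputs into {image_id: {"pre": path, "post": path}}.

        Assumes (hypothetically) that each filename ends with a "pre" or "post"
        token followed by a numerical ID, e.g. "test_pre_00042.png".
        """
        pairs = defaultdict(dict)
        pattern = re.compile(r"(pre|post)_(\d+)\.png$")
        for name in os.listdir(image_dir):
            match = pattern.search(name)
            if match:
                role, image_id = match.groups()
                pairs[image_id][role] = os.path.join(image_dir, name)
        # Keep only complete pre/post pairs.
        return {i: p for i, p in pairs.items() if "pre" in p and "post" in p}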

Outputs

Your model must predict an output PNG image with height and width of 1024 pixels, where each pixel value corresponds to the predicted class at that place in the input image:

  • 0: no building
  • 1: undamaged building
  • 2: building with minor damage
  • 3: building with major damage
  • 4: destroyed building

Localization is scored against the building polygons annotated in the "pre" images, at each pixel location. Damage classification is scored against the damage levels annotated in the "post" images, at each pixel location.
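
The sketch below shows one way to write such a prediction mask as an 8-bit single-channel PNG using NumPy and Pillow. It assumes your model's per-pixel class predictions are already available as a 1024 x 1024 integer array; the function and variable names are illustrative only.

    import numpy as np
    from PIL import Image

    def save_prediction(class_mask, out_path):
        """Write a 1024 x 1024 prediction mask as an 8-bit single-channel PNG.

        class_mask: NumPy array of shape (1024, 1024) with values in {0, 1, 2, 3, 4}
        (0 = no building, 1 = undamaged, 2 = minor, 3 = major, 4 = destroyed).
        """
        assert class_mask.shape == (1024, 1024), "mask must be 1024 x 1024"
        assert class_mask.min() >= 0 and class_mask.max() <= 4, "values must be 0-4"
        Image.fromarray(class_mask.astype(np.uint8), mode="L").save(out_path)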

Baseline model

A public baseline model is currently being evaluated to establish public benchmark performance levels. The code and results for the baseline model will be available soon.

Upload Your Predictions

The first step in a submission is uploading your predictions for the test dataset, which you should compute offline on your own. Download the test dataset, compute a prediction for each of the instances in the test set, and then upload your predictions for evaluation and display on the public leaderboard. You may elect to upload submissions anonymously, which will display "Anonymous" for that submission on the leaderboard. Solvers may upload one submission at a time; a maximum of three submissions per day will be evaluated. Baseline models are currently being evaluated to establish public benchmark performance levels; leaderboard functionality will be available soon.
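
An offline prediction run might loop over the paired test images and write one mask per pair, roughly as sketched below. The predict_damage function is a placeholder for your own model, pair_images and save_prediction are the helpers sketched above, and the output filename pattern is illustrative; follow the naming required by the submission instructions.

    import os

    def run_inference(test_dir, out_dir, predict_damage):
        """Compute and save one prediction PNG per pre/post test pair.

        predict_damage(pre_path, post_path) is a placeholder for your model and
        should return a (1024, 1024) integer array of class values 0-4.
        Relies on pair_images() and save_prediction() from the sketches above.
        """
        os.makedirs(out_dir, exist_ok=True)
        for image_id, paths in pair_images(test_dir).items():
            mask = predict_damage(paths["pre"], paths["post"])
            # Illustrative output name; use the naming required for upload.
            save_prediction(mask, os.path.join(out_dir, image_id + "_prediction.png"))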

Submit your Container for Verification

The best results on the leaderboard will be eligible for online container verification; a successful container verification is required to complete a submission and be eligible for awards. To submit your container for verification, containerize your code to compute one prediction, push your container to Docker Hub in a public or private repository, authorize the Challenge Sponsor to pull your container, and then use the form to submit a container verification job. If the results of your container evaluation match your previously uploaded predictions, your container is successfully verified. A tutorial and further details will be available when leaderboard functionality is active.
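
A verification container ultimately needs an entrypoint that computes one prediction for a given pre/post pair. The Python sketch below shows a hypothetical command-line entrypoint for that purpose; the argument names and invocation are assumptions, since the official container interface will be described in the tutorial, and predict_damage and save_prediction are the placeholder and helper from the sketches above.

    import argparse

    def main():
        """Hypothetical container entrypoint: predict damage for one pre/post pair."""
        parser = argparse.ArgumentParser(description="xView2 single-pair inference")
        parser.add_argument("pre_image", help="path to the pre-disaster PNG")
        parser.add_argument("post_image", help="path to the post-disaster PNG")
        parser.add_argument("output", help="path for the predicted 1024 x 1024 mask PNG")
        args = parser.parse_args()

        mask = predict_damage(args.pre_image, args.post_image)  # your model here
        save_prediction(mask, args.output)                      # helper sketched above

    if __name__ == "__main__":
        main()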

Evaluation Metric

The overall ranking metric for the xView2 Challenge is a combined F1 score. Submissions are evaluated over the test dataset to compute a localization F1 score and a damage classification F1 score. Localization F1 scores the agreement between your predictions (0 = no building, 1-4 = building) and the ground truth labels for the "pre" image at each pixel location. Damage classification F1 scores the agreement between your predictions (1 = no damage, 2 = minor damage, 3 = major damage, 4 = destroyed) and the ground truth over the pixels of each building polygon in the "post" images. The overall F1 score is 30% localization F1 + 70% damage classification F1.
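
As a simplified, per-pixel illustration of that weighting, the sketch below computes a combined score with scikit-learn. The exact aggregation used by the official scorer (for example, how per-class damage F1 scores are combined) is defined on the Challenge and Rules pages; the macro average here is an assumption for illustration.

    import numpy as np
    from sklearn.metrics import f1_score

    def combined_f1(pred, truth):
        """Rough per-pixel sketch of the combined ranking metric.

        pred, truth: integer arrays of the same shape with values 0-4.
        """
        pred, truth = np.asarray(pred).ravel(), np.asarray(truth).ravel()

        # Localization: building (classes 1-4) vs. no building (class 0).
        loc_f1 = f1_score((truth > 0).astype(int), (pred > 0).astype(int))

        # Damage: F1 over classes 1-4, restricted to ground-truth building pixels.
        # Macro averaging is an illustrative choice, not the official aggregation.
        building = truth > 0
        dmg_f1 = f1_score(truth[building], pred[building],
                          labels=[1, 2, 3, 4], average="macro")

        return 0.3 * loc_f1 + 0.7 * dmg_f1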

How To Enter

View instructions on how to sign up for this challenge.

Point of Contact

Have feedback or questions about this challenge? Send the challenge manager an email