Informational Only

This challenge is no longer accepting new submissions.

UG2 Prize Challenge

Bridging the Gap Between Computational Photography and Visual Recognition

Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity

Total Cash Prizes Offered: $75,000
Type of Challenge: Software and apps
Submission Start: 01/31/2018 12:00 AM ET
Submission End: 04/15/2018 11:59 PM ET

This challenge is externally hosted.

You can view the challenge details here:


What is the current state-of-the-art for image restoration and enhancement applied to images acquired under less-than-ideal circumstances? Can the application of enhancement algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), is sponsoring the UG2 Prize Challenge. This challenge seeks to answer these important questions for general applications related to computational photography and scene understanding. As a well-defined case study, the challenge aims to advance the analysis of images collected by small unmanned aerial vehicles (UAVs) by improving image restoration and enhancement algorithm performance using the UG2 Dataset.

Who We Are: IARPA focuses on high-risk, high-payoff research. The UG2 Prize Challenge will engage the wider research community to advance image restoration and enhancement of low-quality unconstrained imagery for computer vision applications, such as imagery captured from UAV-mounted sensors.

What We're Doing: The challenge consists of two parts: (1) image restoration and enhancement to improve image quality for manual inspection; and (2) image restoration and enhancement to improve the automatic classification of objects found within individual images. The winners of each category will be invited to present at a workshop to be held at the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Why We're Doing This: The advantages of conducting visual surveillance from a platform like a small UAV are clear. Man-portable systems can be launched from safe positions to penetrate difficult or dangerous terrain, acquiring hours of video without putting human lives at risk.
What is unclear is how to automate the interpretation of these images, a necessary measure in the face of millions of frames from individual flights. Human analysts cannot manually sift through data at this scale for actionable intelligence. Ideally, a computer vision system would identify objects, events, and human identities of interest to analysts, surfacing valuable data out of a massive pool of largely uninteresting or irrelevant images. To build such a system, one could turn to recent machine learning breakthroughs in visual recognition, which have been enabled by access to millions of training images from the Internet. However, such approaches cannot be used as off-the-shelf components to assemble the system we desire, because they do not take into account artifacts unique to the operation of the sensor and optics platform on a small UAV.

Where and When We're Doing This: Registration to join the challenge will take place through this site. From there, participants will be directed to register with the University of Notre Dame, the organizer and evaluator of the challenge. Registration closes on April 1, 2018, and algorithm submissions close on April 15, 2018.
  • When does UG2 registration begin?  January 31, 2018
  • Where can participants learn more about the challenge, including rules, criteria, and eligibility requirements?
  • Where do participants register?
  • When is the registration deadline?  April 1, 2018
  • When is the algorithm submission deadline?  April 15, 2018
  • When will winners be announced?  May 15, 2018
  • When and where is the CVPR workshop?  June 18, 2018 in Salt Lake City, UT
Who Should Participate?

The UG2 Prize Challenge is open to participants who are eligible to compete for the challenge prizes. We encourage developers of computational photography and image processing algorithms, both domestic and international, from academia and industry, to participate. Other U.S. Government agencies, Federally Funded Research and Development Centers (FFRDCs), University Affiliated Research Centers (UARCs), and any other similar organizations that have a special relationship with the Government that gives them access to privileged or proprietary information, or access to Government equipment or real property, are not eligible to participate in the prize challenge. Read the full rules and challenge eligibility details here:

Why Participate?

The most successful and innovative teams will be invited to present at the CVPR 2018 workshop. Within each challenge category, the first- and second-place scoring algorithms will be awarded prize money. A total of $75,000 in prizes will be awarded.


Honeywell - ACST

Northwestern University

Honeywell - ACST


Image Enhancement to Facilitate Manual Inspection - 1st Place
Cash Prize Amount: $25,000
An evaluation of the qualitative enhancement of images

Image Enhancement to Facilitate Manual Inspection - 2nd Place
Cash Prize Amount: $12,500
An evaluation of the qualitative enhancement of images

Image Enhancement to Improve Automatic Object Recognition - 1st Place
Cash Prize Amount: $25,000
An evaluation of classification improvement

Image Enhancement to Improve Automatic Object Recognition - 2nd Place
Cash Prize Amount: $12,500
An evaluation of classification improvement


A full description of rules is available at:

Judging Criteria

Image Enhancement to Facilitate Manual Inspection
Percentage: 100
A full description of challenge evaluation is available at:

Image Enhancement to Improve Automatic Object Recognition
A full description of challenge evaluation is available at:

How To Enter

A full description of rules, submissions, and evaluations is available at:

SUBMISSIONS

Development Kit

The Development Kit consists of a Dockerfile containing the basic structure for the algorithm submission, as well as instructions on how to pull and run a supplemental quantitative classification module for the second challenge. The images, annotations, and lists specifying the training/validation sets for the challenge are provided separately.

Each team must submit one algorithm for each challenge it wishes to enter. Participants who have investigated several algorithms may submit up to three algorithms per challenge. All submissions for a given challenge will be held within one single Docker container to be uploaded to Docker Hub. The Docker container must contain all dependencies and code required to perform the model's operation and will execute the model(s) contained upon run. The input images will be provided to the container at run time through Docker's mounting option, as will the output folders where the model(s) save their results. Each model must be run on all images contained within the input folder and must save the new images to its respective output folder, without any name changes or missing images.

Requirements

Software:
  • Docker-CE
  • NVIDIA Docker
  • CUDA 8.0
  • cuDNN v5.0
Hardware: The proposed algorithms should be able to run in systems with:
  • Up to and including Titan Xp 12 GB
  • Up to and including 12 cores
  • Up to and including 32 GB of memory
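The submission I/O contract described above (process every image in the mounted input folder and write the results to the output folder under identical filenames) can be sketched as follows. This is an illustrative sketch, not part of the official Development Kit: the function name is hypothetical, and the "enhancement" step is a placeholder identity copy where a real entry would apply its restoration model.

```python
import os
import shutil

def run_model(input_dir: str, output_dir: str) -> None:
    """Process every image in input_dir and write the result to
    output_dir under the same filename, as the submission rules require."""
    os.makedirs(output_dir, exist_ok=True)
    for name in sorted(os.listdir(input_dir)):
        src = os.path.join(input_dir, name)
        if not os.path.isfile(src):
            continue
        # Keep the same filename in the output folder: renamed or
        # missing images would violate the submission rules.
        dst = os.path.join(output_dir, name)
        # Placeholder "enhancement": an identity copy. A real submission
        # would load the image, apply its restoration/enhancement model,
        # and re-save it here.
        shutil.copyfile(src, dst)

# Inside the container, input_dir and output_dir would be the folders
# mounted at run time via Docker's mounting option; the exact mount
# paths are defined by the Development Kit, not shown here.
```

The design point the rules emphasize is that the container, not the evaluator, drives execution: on `docker run` the model(s) must process the entire mounted input folder end to end with no manual intervention.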
If you have any questions, please feel free to email.