
Informational Only

This challenge is no longer accepting new submissions.

2019 NIST ARIAC Challenge

Build your own robot control software and win up to $10,000!

Department of Commerce - National Institute of Standards and Technology

Type of Challenge: Software and apps
Submission Start: 02/12/2019 09:00 AM ET
Submission End: 04/19/2019 05:00 PM ET

This challenge is externally hosted.

You can view the challenge details here: http://www.nist.gov/ariac

Description

The Agile Robotics for Industrial Automation Competition, ARIAC, addresses a critical limitation of robots used in industrial environments like factory floors: They are not as agile as they need to be.

Many robots are not able to quickly detect failures or recover from those failures. They aren't able to sense changes in their environment and modify their actions accordingly. Programming these robots for even the simplest tasks takes time and effort. Help solve this problem by coding your own robot control system (software) that can outperform everyone else's in a simulated environment! Compete to win $17,500 in cash prizes and the opportunity to showcase your solution at a future robotics conference.

Background

In June 2017, the National Institute of Standards and Technology (NIST) held the first ARIAC competition. The competition tests the agility of industrial robot systems, with the goal of enabling industrial robots on shop floors to be more productive, more autonomous, and to require less time from shop floor workers. For the second year of the competition, a cash prize was introduced to motivate and expand participation. This is the third year of the competition.

In this context, agility is defined broadly to address:

  • Failure identification and recovery, where robots can detect failures in a manufacturing process and automatically recover from those failures
  • Automated planning, to minimize (or eliminate) the up-front robot programming time when a new task is introduced
  • Fixtureless environment, where robots can sense the environment and perform tasks on parts that are not in predefined locations

Participants are required to develop a robot control system (software) to control a robot in a simulated environment. Gazebo (http://www.gazebosim.com), an open-source robotics simulation environment, will be used as the testing platform, and the Robot Operating System (ROS), an open-source set of software libraries and tools, will be used to define the interfaces to the simulation system.

Software interfaces will be made publicly available that will allow the Participants to control the robot. There will be one qualifier in the April 2019 timeframe with the final competition occurring in mid-May 2019. The competition will be performed virtually in the cloud, so physical attendance is not required to compete. NIST is proposing a workshop to a major robotics conference to allow the winning Participants to present their approaches; details, when available, will be posted to the official challenge website. See full details in How to Enter.

Dates of the Competition

  • Qualifier Begin: April 15, 2019
  • Qualifier End: April 19, 2019
  • Competition Testing (Finals) Period Begins: May 13, 2019
  • Competition Testing (Finals) Period Ends: 5:00 PM, May 17, 2019
  • Announcement of Cash Prize Winners: May 21, 2019

Competition Scenarios

The following terminology is frequently used in this document:

  • Order: A list of parts and their goal location on a tray.
  • Part: One element of an order.
  • Tray: A surface that holds parts.
  • Kit: A tray and set of parts that make up an order.

ARIAC requires participants to complete a series of kit-building tests set in an industrial scenario, each centered on assembling kits made up of particular parts. The robot system will work within the environment specified in the Environment section.

There are three different test scenarios that all involve moving parts from a supply location to a tray. The possible supply locations are a conveyor belt and stationary bins. Challenges will be introduced in each scenario. Details about the scenarios follow.

  1. Scenario 1: Baseline Kit Building: The first scenario is intended as a baseline set of tasks for the other test methods to be compared against. The task for this scenario is to pick specific parts and place them on a tray. The robot arms will receive an “Order” that details the list of parts and their target locations. Orders are covered in more detail in the Orders section.
  2. Scenario 2: Dropped Part: The task for Scenario 2 is identical to Scenario 1, however one or more parts will drop from the robots’ gripper. The robots will need to recover after dropping a part and complete the given Order. Recovery could entail picking up the dropped part or fetching a new part.
  3. Scenario 3: In-Process Kit Change: While the robots are in the middle of assembling a kit, a new high priority order will be received that needs to be completed as fast as possible. The robots will need to decide how best to complete this new order, and then complete the previous order.

The competition will consist of 15 trials: 5 trials of each of the 3 scenarios. Each trial will receive a score based on completion and efficiency metrics outlined in the Scoring section.

Winners

Up to three winners will be selected. The Prize Purse for the ARIAC Prize Competition is a total of $17,500. In addition, each winner may be able to select any one representative from their team to present their results at a robotics conference in 2019 in a workshop dedicated to the ARIAC competition. NIST may be able to provide travel funds subject to availability.  The number of winners will be fewer than three if there are fewer than three Participants that qualify for the competition. The Prize Purse may increase, but will not decrease. Any increases in the Prize Purse will be posted on the Event Website. NIST reserves the right to announce additional winners of non-cash prizes.

Prizes

1st Place
Cash Prize Amount: $10000

2nd Place
Cash Prize Amount: $5000

3rd Place
Cash Prize Amount: $2500

Rules

The event website and official rules are posted at http://www.nist.gov/ariac.

Eligibility to Participate 

Participation in the ARIAC Prize Competition is open to ALL; however, not all participants are eligible to win cash prizes as explained in the next section.

Each Competition Participant (individual, team, or legal entity) is required to register on the ARIAC website. There shall be one Official Representative for each Competition Participant. The Official Representative must provide a name, username (which may serve as a team or affiliation name), email address, and affirm that he/she has read and consents to be governed by the Competition Rules. Multiple registrations per Participant are NOT allowed. At NIST’s discretion, any violation of this rule will be grounds for disqualification from the Competition.

Multiple individuals and/or legal entities may collaborate as a team to submit a single entry (a Participant’s code submission to a qualifier or final), in which case the designated Official Representative will be responsible for meeting all entry and evaluation requirements.

Participation is subject to all U.S. federal, state, and local laws and regulations. Participants must not be suspended, debarred, or otherwise excluded from doing business with the Federal Government. Individuals entering on behalf of or representing a company, institution, or other legal entity are responsible for confirming that their entry does not violate any policies of that company, institution, or legal entity. Any other individuals or legal entities involved with the design, production, execution, distribution, or evaluation of ARIAC are not eligible to participate.

Once registered, Participants will have access to the interfaces needed to participate in the competition through the ARIAC website. NIST will create an account to allow the Participants to upload their submissions to the qualifier. Modifications and improvements to a Participant’s control system during the qualifying period are to be expected, but the Participant should have one final control system ready for the final competition.

Eligibility to Win a Cash Prize

To be eligible for a cash prize:

  • A Participant (whether an individual, team, or legal entity) must have registered to participate and complied with all of the requirements under section 3719 of title 15, United States Code as contained herein.
  • At the time of Entry, the Official Representative (individual or team lead, in the case of a group project) must be age 18 or older and a U.S. citizen or permanent resident of the United States or its territories.
  • In the case of a private entity, the business shall be incorporated in and maintain a primary place of business in the United States or its territories.
  • Participants may not be a Federal entity or Federal employee acting within the scope of their employment. NIST employees are not eligible to participate. Non-NIST Federal employees acting in their personal capacities should consult with their respective agency ethics officials to determine whether their participation in this Competition is permissible.
  • A Participant shall not be deemed ineligible because the Participant consulted with Federal employees or used Federal facilities in preparing its submission to the ARIAC Prize Competition if the Federal employees and facilities are made available to all Participants on an equitable basis.

In addition, interested Participants who do not meet the eligibility requirements to win a prize (i.e., individuals who are neither U.S. citizens nor permanent residents of the United States, and non-US-based entities) are encouraged to participate in the Competition. They are invited to register on the ARIAC website and download the training material. The performance of these Participants will be displayed on the ARIAC website in the same manner as that of Participants who are eligible to win cash prizes.

Judging Panel

The NIST Director will appoint a panel of three qualified judges. The Judges will be robot agility and/or simulation experts from inside and/or outside of NIST. The Judges will determine winners according to the Judging Criteria described herein. The Judges may not have personal or financial interests in, or be an employee, officer, director, or agent of, any entity that is a registered Participant in the Competition and may not have a familial or financial relationship with an individual who is a registered Participant. In the event of such a conflict, a Judge must recuse himself or herself and a new Judge may be appointed.

Basis on which Finalists and Winners Will Be Selected

A portion of the scores will be automatically calculated, shared with the Participants, and publicly posted for each Trial at the conclusion of the Finals (after all Trials are run) as a combination of cost and performance metrics, described below in the Trial Score Calculation section. A Participant’s final automated score is the sum of scores for each of the 15 competition Trials.

Environment

The simulation environment is a representation of an industrial kitting work cell with two robot arms, a conveyor belt, part bins, and trays.

The conveyor belt is a 1 m wide plane that transports objects across the work environment at a fixed speed of roughly 0.2 m/s. Parts continuously appear on the belt for the duration of the trial. When parts reach the end of the conveyor belt they are automatically removed. Teams can control the conveyor belt during development, but not during the final competition.

There are eight part bins that may be used for building kits. Parts in these bins will not be replaced once used.

There are two robot arms mounted on a linear actuator that operates parallel to the conveyor belt. The linear actuator measures 4 m.

Two automated guided vehicles (AGV) are located at either end of the linear actuator. Kits are built on top of these AGVs. A team will programmatically signal the AGVs when the kits are ready to be taken away. The signaled AGV will depart for a short period and then return with an empty tray.

The trays used for assembling kits are flat trays measuring 0.5 x 0.7 m.

Robot Arm

The two robot arms used in each trial will be Universal Robots UR10.

The robot arms’ positions are controlled through the linear actuator on which they are mounted.

The end of each arm is equipped with a vacuum gripper. The vacuum gripper is controlled in a binary (on/off) manner and reports whether or not it is successfully gripping an object.

Sensors

A team can place sensors around the environment. Each sensor has a cost that factors into the final score. Available sensors are:

  1. Break beam: reports when a beam is broken by an object. It does not provide distance information.
  2. Laser scanner: provides an array of distances to a sensed object.
  3. Cognex logical camera: provides information about the pose and type of all models within its field of view.
  4. Proximity: detects the range to an object.

Order

An order is an instruction for the robot system specifying the kit to be assembled, i.e., the list of parts to be put in the kit. Each specified part has the following structure:

  1. The type of part.
  2. The position and orientation of the part on the tray.

Faulty parts

Above each AGV is a quality control sensor that detects faulty parts. If faulty parts are detected while teams are filling trays, those parts should be removed from the tray and replaced with another part of the same type. Faulty parts are considered unwanted parts: they will not count for any points when the kit is submitted, and they will cost teams the all-parts bonus if left in trays.

Scoring

Scores will be automatically calculated for each trial as a combination of performance metrics and costs. 

Competition Process

Each trial will consist of the following steps:

  1. The robots programmatically signal that they are able to begin accepting orders.
  2. The first Order (Order 1) is sent to the robots.
  3. A fixed amount of time is allowed to complete the order.
  4. In the case of the Dropped Part testing method, up to three parts will be forcibly dropped from the gripper.
  5. In the case of the In-Process Kit Change testing method, a new Order (Order 2) will be issued that is of higher priority than the previously issued Order 1. When Order 2 is complete, building of Order 1 is to resume.
  6. The robots signal programmatically when a kit is complete and ready for quality control.
  7. The robot system will be notified that the trial is over. The trial is over when time runs out or all Orders have been fulfilled.

Cost Metrics

The following values are calculated for a Participant’s system setup. As Participants are to use the same system setup for all Trials, the values will remain unchanged between Trials.

The sensor choices made by competitors will have specific costs associated with them. The final cost for individual sensors will be made available to the Participants at least two weeks before the Finals. For planning purposes, Participants should use:

  1. $500 for each logical camera used.
  2. $200 for each depth camera used.
  3. $100 for each other sensor used (e.g., break beam, proximity, laser scanner).

The sum of the sensor costs chosen by each team, designated TC, is divided by a baseline cost, BC, of $1,700 to produce a cost factor for the scoring. The cost factor, CF, is then calculated as CF = TC / BC.
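The cost-factor calculation can be sketched directly from the planning prices above. The sensor names used as dictionary keys here are illustrative, not official identifiers:

```python
# Planning prices from the rules; final prices are published before the Finals.
SENSOR_COST = {
    "logical_camera": 500,
    "depth_camera": 200,
    "break_beam": 100,
    "proximity": 100,
    "laser_scanner": 100,
}
BASELINE_COST = 1700  # BC, per the rules

def cost_factor(sensors):
    """Cost factor CF = TC / BC, where TC is the team's total sensor cost."""
    tc = sum(SENSOR_COST[s] for s in sensors)
    return tc / BASELINE_COST

# Example: two logical cameras, one break beam, one laser scanner
# TC = 500 + 500 + 100 + 100 = 1200, so CF = 1200 / 1700
cf = cost_factor(["logical_camera", "logical_camera", "break_beam", "laser_scanner"])
```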

Completion Score

Performance metrics covering both completion and efficiency are calculated for each Trial separately. Completion captures the quality of the orders fulfilled (the trays contain the correct products in the correct positions and orientations); efficiency captures the responsiveness in fulfilling orders (the orders were filled quickly).

Because a trial may contain more than one order, each order receives its own set of scores. Each order consists of a list of products and a prescribed position and orientation for each product on the tray. The completion score, CS, is a combined score for each order: each product placed on the tray earns 1 point, and each product within 3 cm of its correct position and within 0.1 radians of its correct orientation earns an additional point. Finally, since the main focus of the competition is fulfilled orders, an order with all of its products on the tray in the correct position and orientation receives one additional point per product. An order sj with i products therefore has a maximum score of 3i points.
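The completion-score rules can be sketched as a small function. The `(pos_err, rot_err, on_tray)` tuple encoding of the judged placements is an assumption for illustration, not the official scoring implementation:

```python
def completion_score(products, pos_tol=0.03, rot_tol=0.1):
    """Completion score CS for one order.

    products: one (pos_err_m, rot_err_rad, on_tray) tuple per product the
    order requests.
    """
    score = 0
    all_correct = bool(products)
    for pos_err, rot_err, on_tray in products:
        if not on_tray:
            all_correct = False
            continue
        score += 1  # 1 point: product placed on the tray
        if pos_err <= pos_tol and rot_err <= rot_tol:
            score += 1  # 1 point: within 3 cm and 0.1 rad of the goal pose
        else:
            all_correct = False
    if all_correct:
        score += len(products)  # bonus: 1 extra point per product
    return score

# Two products, both placed and correctly posed: 2 + 2 + 2 = 6 points,
# i.e. the 3i maximum for i = 2
cs = completion_score([(0.01, 0.05, True), (0.02, 0.0, True)])
```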

Efficiency Factor

The efficiency of the approach chosen by each team is accounted for in the scoring by using the times that the team takes to complete each trial. Time is counted in seconds from the start of the trial until the first kit is completed and is designated T01. In trials with multiple orders, the time Tj is counted in seconds from the moment the jth order is sent to the team. For each trial, the average of the teams' times (ATj) is compared to the individual team time to calculate the Efficiency Factor: EFj = ATj / Tj.

If a team’s system times out for a trial, the efficiency factor for the trial is set to 0 and the trial time is not used to calculate the average times for that trial. 

For multi-order trials where one of the orders is a high-priority order, an additional multiplier, h, is applied with the efficiency factor to give a higher proportion of the score to that order; h is initially set to 3. The final h value will be made available to the Participants at least two weeks before the Finals.
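The efficiency factor and the timeout rule above can be sketched as follows, assuming EFj = ATj / Tj so that a team faster than average scores above 1:

```python
def efficiency_factor(team_time, finished_times, timed_out=False):
    """Efficiency factor EF_j for one team on one order.

    finished_times: completion times (seconds) of the teams that did not
    time out; timed-out teams score 0 and are excluded from the average.
    """
    if timed_out:
        return 0.0
    at = sum(finished_times) / len(finished_times)  # AT_j
    return at / team_time                           # EF_j = AT_j / T_j

# Average time AT = (100 + 150 + 200) / 3 = 150 s; a 100 s team gets EF = 1.5
ef = efficiency_factor(100.0, [100.0, 150.0, 200.0])
```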

Trial Score Calculation

All of the scoring factors described above (the cost factor CF, the per-order completion scores CS and efficiency factors EF, and the high-priority multiplier h) are combined to calculate the score for each trial.
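The exact combining formula is published in the official rules at the event website; one plausible sketch, assuming each order contributes CS × EF (with h applied to the high-priority order) and that the cost factor scales the total, is:

```python
def trial_score(cf, orders, h=3.0):
    """Illustrative trial score: sum of CS * EF per order, with multiplier h
    on high-priority orders, scaled by the cost factor CF.

    orders: list of (cs, ef, is_high_priority) tuples. How CF enters the
    score follows the official rules; multiplication is assumed here.
    """
    total = sum((h if hp else 1.0) * cs * ef for cs, ef, hp in orders)
    return cf * total

# One normal order (CS=6, EF=1.5) and one high-priority order (CS=4, EF=1.0)
# with CF = 1.0: 6 * 1.5 + 3 * 4 * 1.0 = 21.0
score = trial_score(1.0, [(6, 1.5, False), (4, 1.0, True)])
```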

Final Score Calculation

The final overall score is the equally weighted sum of all trial scores. As described in the Competition Scenarios section, the trials cover the Baseline Kit Building, Dropped Part, and In-Process Kit Change scenarios.

The Winners of cash prizes will be determined by three Judges appointed by the NIST Director, using the Judging Criteria outlined herein.

NIST will announce the Finalists via the Event Website as well as those Finalists that have been awarded a cash award (each, an “Award”). The anticipated number and amount of the Awards that will be awarded for this Competition is set forth in these rules; however, NIST is not obligated to make all or any Awards, and reserves the right to award fewer than the anticipated number of Awards in the event an insufficient number of eligible Participants meet any one or more of the Judging Criteria for this Competition, based on NIST’s sole discretion. Using the Judging Criteria described herein, three qualified Judges (appointed by the NIST Director) will determine the winners of the ARIAC Prize Competition.

The winner verification process for being eligible to receive an Award includes providing the full legal name, tax identification number or social security number, routing number and banking account to which the prize money can be deposited directly. Return of any notification as “undeliverable” will result in disqualification. The verification form must be returned within seven (7) calendar days of receipt. After verification of eligibility, Awards will be distributed in the form of electronic funds transfer addressed to the Official Representatives specified in the winning Entries. That Official Representative will have sole responsibility for further distribution of any Award among Participants in a group Entry or within a company or institution that has submitted an Entry through that representative. The list of Entries receiving Awards for the Competition, including the names of all members of a team, will be made public according to the timeline outlined on the Event Website.

Winners are responsible for all taxes and reporting related to any Award received as part of the Competition.

All costs incurred in the preparation of Competition Entries are to be borne by Participants.

Presentation of Winning Approaches at a To-Be-Determined Workshop in a Robotics Conference

Up to three Winners of the ARIAC Competition may be invited by NIST to attend a workshop dedicated to the ARIAC Competition at a future, to-be-determined robotics conference. The Winners will be required to make a 25-minute presentation on their approach to the ARIAC Competition, followed by an audience question-and-answer session.

NIST may arrange and pay for the travel for one team-selected representative from each of the winning Participants to cover eligible travel costs such as airfare, conference registration fee, and lodging.

Point of Contact for the Competition

Questions about the ARIAC Prize Competition can be directed to William Harrison, william.harrison@nist.gov.

Judging Criteria

Judges

  • William Flannigan: Senior Manager of Advanced Robotics, Amazon Robotics
  • Brian Gerkey: CEO, Open Source Robotics Foundation
  • Craig Schlenoff: Supervisory Mechanical Engineer, National Institute of Standards and Technology

Overall performance based on the scoring metrics in competition rules
Percentage: 80

Using the scoring metrics described herein, the first place Entry will be awarded 80 points, the second place Entry will be awarded 70 points, the third place Entry will be awarded 60 points, and so on.


Novelty of approach and alignment with spirit of competition
Percentage: 20

At the judges’ sole discretion, up to 20 points will be awarded for Entries that show novel approaches to solving the agility challenges and whose approaches are consistent with the spirit of the competition of coming up with industrially-implementable approaches that will help industry make better use of their robotic platforms. Each Entry is eligible for up to 20 points, and more than one Entry can receive all 20 points (or any other value).


How To Enter

Get Started!

  1. Register at the ARIAC website
  2. Start coding! Review the official rules and documentation available at the ARIAC website to design your own robot control software.
  3. Compete in the Qualifier in April 2019
  4. High-scoring teams will be invited to compete in the Finals May 13 - May 17, 2019.

How to Participate

An interested Participant (individual, team, or legal entity) must initiate the process of participating in the Competition by registering at the ARIAC website. The party is then given access to view the documentation and download the tutorials (which provide instructions on how to run the simulations) from the ARIAC website. In the event NIST determines modifications to the documentation and tutorials are needed, all website registrants will be notified by email.

Once registered, the Participants are eligible to participate in the qualifier. There will be one qualifier occurring in the April 2019 timeframe (specific deadlines will be published at the official challenge website and all Participants notified by email). A minimum score will be determined, based on the metrics described below, which Participants have to meet or exceed in the qualifier to be eligible for the final competition.

Participants will run the qualifier from their own location. They will have two weeks from the date that each qualifier is released on the website (available to be downloaded) to submit their software control system that addresses the challenges in the qualifier. Registered participants will be emailed when the qualifier is released. They should run the qualifier at their own location and submit the resulting output files generated by their control system (listed below).  Their output files should be uploaded to an account (nfiles.nist.gov) that NIST will set up specifically for each Participant.

Specific files that need to be submitted include:

  • Their environment configuration file specifying their choice of sensors.
  • A performance log file that contains information about the task completion.
  • A simulation state log file that contains information about the state of Gazebo during the trial.