
Statistical Software for Healthcare Oversight

About the Challenge
Building a simple statistical software tool to aid in effective healthcare oversight.

Posted By: Department of Health and Human Services
Category: Software/Apps
Skill: Software/Apps
Interest: Health
Submission Dates: 12 a.m. ET, Sep 28, 2016 - 5 p.m. ET, May 15, 2017

Results:

We are pleased to announce the winners of the Statistical Software for Healthcare Oversight Challenge. The grand prize winner and the two finalists each created a new version of RAT-STATS that was 508 compliant, replicated the four required RAT-STATS modules, and met all other competition requirements. The Grand Prize winner was selected from the three finalists by a panel of 12 users. We have also included additional acknowledgements for the teams that provided the best and most unique graphical user interfaces (these results have also been posted on the OIG website at https://oig.hhs.gov/compliance/rat-stats/prize/).

Grand Prize Winner and Finalist: Doug Brown (Team Catalyst)

Overall, Team Catalyst’s entry delivered the most intuitive user experience. The entry maintained the familiarity of the original RAT-STATS design while still finding room for several new and useful features. For example, the entry included custom confidence intervals, which allowed users to calculate confidence intervals at levels other than the standard 80, 90, or 95 percent. The entry went beyond the four modules required by the contest and included functions for stratified attribute samples and for generating sets of random numbers. As with the other finalists, the Team Catalyst entry was able to replicate RAT-STATS on 60 test cases and met the key requirements for 508 compliance.

Finalist: Murray Miron

Among a number of notable new features, Murray’s entry included an integrated help system that connected relevant help information with each input element within the software. The accessibility of the information greatly reduced the need for users to refer back to a separate help screen or manual. Murray’s entry also included the fastest algorithm for calculating exact confidence intervals for large attribute samples. In addition, the entry went beyond the four required modules to also include a function for determining the sample size of a design given an estimated error rate. As with the other finalists, Murray’s entry was able to replicate RAT-STATS on 60 test cases and met the key requirements for 508 compliance.

Finalist: Dave Kaniss and Tim Kaniss (Team ENGRdynamics)

Team ENGRdynamics simplified the original RAT-STATS design by combining the stratified and unrestricted variable appraisal functions into a single module. The combined module had an elegant design and was still very effective in accounting for the differences between the stratified and unrestricted analyses. The team also included a number of useful new features, such as the ability to import files by dragging them into the target input field. As with the other finalists, the Team ENGRdynamics entry was able to replicate RAT-STATS on 60 test cases and met the key requirements for 508 compliance.

Best Graphical User Interface – Corey Berry and Aja Berry (Team CBTek)

The graphical user interface for CBTek’s entry looked great, met 508 compliance standards, and allowed for easy access to all key program functions and operations. The interface had the added advantage that it was customizable to fit the needs of the individual user. Of final note, the team’s splash screen and program icon were the best of all the entries submitted.

Graphic User Interface Honorable Mention – Sudeep Shouche and Premraj Narkhede (Team Dolcera)

Team Dolcera provided a unique entry which split several of the RAT-STATS modules into a series of steps. The component nature of the design simplified the input and made the function requirements easier to follow for new users.


Contest Rules and Description

Important Note: Once five finalists have been identified, no further entries from teams not previously identified as finalists will be reviewed even if they are submitted prior to the May 15, 2017, 5:00 p.m. EST deadline.  Moreover, the contest will end on May 15, 2017, at 5:00 p.m. EST even if fewer than five finalists have been selected.

Finalists may update their entries without losing their finalist designation. Updates from the finalists will be accepted until 5:00 p.m. EST on the fourteenth day after the fifth finalist has been identified or May 15, 2017, 5:00 p.m. EST, whichever comes first. The newest entry from each team will be used for all judging purposes unless otherwise requested by the team.

Current Status (as of 3:00 p.m. EST, May 31, 2017): Three finalists.

Registration: To register for the competition, create an account on the challenge.gov website and send an email with the name(s) of your team member(s) to jared.smith@oig.hhs.gov.

Background: Each year HHS handles hundreds of millions of Medicare and Medicaid claims valued at over a trillion dollars. Due to the high volume of claims, statistical sampling provides a critical tool to ensure effective oversight of these expenditures. Sampling is used by the providers in their own efforts to monitor their performance and by the various organizations within HHS. There are a wide range of different software tools for performing statistical analysis. RAT-STATS has a unique niche in that it provides a straightforward tool for individuals who need a simple but robust method for selecting and analyzing statistical samples. The RAT-STATS software was originally created in 1978 and has gone through several upgrades since that time. Unlike a full statistical package that attempts to answer all types of questions for a wide range of users, RAT-STATS serves as a streamlined solution to handle the specific task of developing valid statistical samples and estimates within the healthcare oversight setting.

For example, an OIG investigator may pull a simple random sample in order to estimate damages for a provider suspected of fraud. RAT-STATS generates valid pseudo-random numbers and outputs all of the information needed to replicate the sample. Once the investigator finishes reviewing the sample, he or she can then enter the results into RAT-STATS to get the final statistical estimate. While the investigator may need some basic training in statistics, he or she would not need the same level of expertise as would be required to navigate the many options available in a full service statistical or data analysis package.
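To make this workflow concrete, here is a minimal, hypothetical sketch of the two steps: a seeded (and therefore replicable) simple random sample, followed by a point estimate with a normal-approximation confidence interval. This is not RAT-STATS' actual algorithm, and the interval calculation is a simplification of what the software does; the function and parameter names are invented for illustration.

```python
import math
import random
import statistics

def draw_sample(population_size, sample_size, seed):
    """Draw a replicable simple random sample of claim indices.

    Recording the seed lets anyone re-generate the exact same sample,
    which is what makes the resulting estimate auditable.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, population_size + 1), sample_size))

def estimate_total(errors, population_size, z=1.645):
    """Point estimate and two-sided interval for the total error amount,
    using the normal approximation (z = 1.645 corresponds to ~90%)."""
    n = len(errors)
    mean = statistics.mean(errors)
    se = statistics.stdev(errors) / math.sqrt(n)
    point = population_size * mean
    half_width = z * population_size * se
    return point, point - half_width, point + half_width

# Pull a replicable sample of 30 claims from a population of 10,000.
sample = draw_sample(population_size=10_000, sample_size=30, seed=12345)
```

After reviewing the sampled claims, the investigator would feed the observed error amounts into the estimation step, e.g. `estimate_total([10.0, 20.0, 0.0, ...], 10_000)`.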

The objective of the current challenge is to develop the foundation for an upgraded version of RAT-STATS. The current version of RAT-STATS is well validated; however, its user interface can be difficult to navigate and the underlying code makes the software costly to update. OIG needs a new, modern version of the software that is easy to use and can be extended in a cost effective manner. In addition, the new version of the software must be 508 compliant.

The current version of the RAT-STATS software can be found at http://www.oig.hhs.gov/compliance/rat-stats/index.asp.

Judging Criteria

Finalist Criteria

The first five eligible submissions that comply with the rules of this Federal Register Notice (see also the Rules page of this website), follow the detailed submission instructions, are complete, are able to fully replicate RAT-STATS on 60 target cases, and include a description of a reasonable method for adding additional modules will each be declared a finalist. Once five finalists have been identified, no further entries from teams not previously identified as finalists will be reviewed even if they are submitted prior to the May 15, 2017, 5:00 p.m. EST deadline. Moreover, the contest will end on May 15, 2017, at 5:00 p.m. EST even if fewer than five finalists have been selected.

Finalists may update their entries without losing their finalist designation. Updates from the finalists will be accepted until 5:00 p.m. EST on the fourteenth day after the fifth finalist has been identified or May 15, 2017, 5:00 p.m. EST, whichever comes first. The newest entry from each team will be used for all judging purposes unless otherwise requested by the team.


Grand Prize Criteria

Twelve OIG employees who use the current version of the RAT-STATS software will review the submissions from the five finalists and vote on which package they would prefer to serve as the RAT-STATS replacement. The package with the most votes will be the winner of the Grand Prize. Ties will be broken by the Office of Inspector General, Office of Audit Services, Director of Quantitative Methods. The instructions to the judges will be as follows, "Please review the operation of the following software packages and select one that you would prefer to use in the future for selecting and analyzing your statistical samples." Critically, the instructions do not ask which software is most like RAT-STATS, but rather which software they would prefer to use. As a result, during the review for the Grand Prize, judges will consider features even if they were not part of the original RAT-STATS package.

How to Enter

Submissions must be entered electronically using the challenge.gov website and must contain a ZIP file with each of the following elements.

(1) Executable software that replicates the four target RAT-STATS functions: Single Stage of Random Numbers, Unrestricted Attribute Appraisal, Unrestricted Variable Appraisal, and Stratified Variable Appraisal.

(2) Source code for the executable that is both human- and machine-readable. The source code can be written in any programming language(s). The source code must be commented sufficiently so that another user can understand the underlying operation of the code.

(3) A text file written in English documenting and explaining any non-cosmetic differences between the submission and the original RAT-STATS software.

(4) A text file written in English summarizing the changes required to add additional RAT-STATS functions to the submission.

(5) A text file written in English documenting all software licenses associated with the source code used as part of the project. The text file must describe the nature of each individual license and their overall compatibility.

(6) A one-page text file written in English that contains the following:

  1. Names and email addresses of the team captain and all team members
  2. An identifier of five or more characters for the entry, used as a prefix in the names of all of the team’s submitted files
  3. A brief description of the submission

Prizes

Grand Prize: $25,000.00. In addition to the cash payment, winners will be listed on the new version of the RAT-STATS software (once released), on the RAT-STATS download website, and in the RAT-STATS User Manual.
Finalist: $3,000.00
Finalist: $3,000.00
Finalist: $3,000.00

17 Discussions for "Statistical Software for Healthcare Oversight"

  • jsmith3
    The competition deadline is in two months (May 15). Note that the review process takes roughly two weeks to complete. That means that you will likely not have a chance to correct any issues with your software if you first submit your entry in May. More generally, the earlier you submit your entry, the more likely you will have time to correct any issues identified in the review process.

    • Reply
      jsmith3
      An examination of the file format of the "text file" is not one of the steps performed when reviewing submissions. Consequently, you can use any format, provided we are able to open the file using our standard Windows build. This build includes Microsoft Office and Adobe Acrobat.

  • jsmith3
    An update has been posted to the Federal Register which clarifies when finalist entries can be changed. The update can be found at the link below. I will update the rules on the challenge.gov page on Tuesday to reflect the changes outlined in the Federal Register notice: https://www.federalregister.gov/documents/2016/12/27/2016-31182/announcement-of-updated-requirements-and-registration-for-the-simple-extensible-sampling-tool

  • jsmith1
    A participant correctly pointed out that the description of the simple random sample algorithm has a typo in it. The file has been updated on the https://oig.hhs.gov/compliance/rat-stats/prize/ website. The issue is described below for your convenience. The error was in bullet 12 of the technical details document for the simple random sample. In particular, the bullet stated that the result of the preceding calculation had to be checked to see if it was "not greater than equal to 30307". Bullet 12 should actually state "If the result is not greater than equal to 30323". The version posted now is correct. I apologize for any confusion. The step in question is correct in the original example R code.

    • Reply
      Jeff
      Are you sure the instructions are still not in error? The source code example for a, b, and c lists the max seed check as 30269, 30307, and 30323 respectively. In the documentation, a and b both list the max seed check as 30269.

      • Reply
        Jared Smith
        Thank you for bringing the additional error to my attention. You are correct that b should refer to 30307 rather than 30269. I have submitted the updated file to the web folks, who should have it posted in the next day or two. When there is a discrepancy, the code is more likely to be correct than the description; however, it is best to ask just to make sure.
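The three moduli discussed in this exchange (30269, 30307, and 30323) match the classic Wichmann-Hill generator. Assuming that is the algorithm the technical details document describes (the official document remains authoritative), a sketch of the generator with the per-seed maximum checks that the correction above is about might look like:

```python
def wichmann_hill(s1, s2, s3, n):
    """Yield n uniform(0, 1) variates from integer seeds s1, s2, s3
    using the classic Wichmann-Hill combined generator."""
    # Each seed has its own maximum: the point of the corrections above
    # is that a is bounded by 30269, b by 30307, and c by 30323.
    for seed, modulus in ((s1, 30269), (s2, 30307), (s3, 30323)):
        if not 1 <= seed < modulus:
            raise ValueError(f"seed {seed} out of range for modulus {modulus}")
    out = []
    for _ in range(n):
        s1 = (171 * s1) % 30269
        s2 = (172 * s2) % 30307
        s3 = (170 * s3) % 30323
        # Combine the three streams; the fractional part is uniform(0, 1).
        out.append((s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0)
    return out
```

Because the state is just the three seeds, recording them is enough to replicate a sample exactly, which is how RAT-STATS output can be audited.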

  • William Spangler
    Is it possible to submit more than once? My concern is that if I postpone submitting in order to build more than the 4 functions, the contest may close with all finalists meeting only the 4 specified target functions. I can make some small aspects of the program more convenient, like not having to return to the main menu each time, but it's difficult to guess whether any major redesign would be helpful without review.

    • Reply
      jsmith1
      I have been thinking about this very question. I agree that once an entry has been made a finalist, it should be possible to update the entry without it losing the finalist designation. However, the competition rules, as they are currently written, strongly imply that the submission of an updated entry will lead to the removal of the previous entry from the competition even if the previous entry was a finalist. I will look into changing the notice to make such updates possible.

        • Reply
          Jared Smith
          As of the moment, finalists are locked in. I am working on changing this and should know in a week or two whether it will be possible. Given the draft formulation, updates from individuals already determined to be finalists will be accepted until 5:00 p.m. EST on the fourteenth day after the fifth finalist has been identified or May 15, 2017, 5:00 p.m. EST, whichever comes first.

  • quocanh
    What are the screen sizes and resolution of the people that are going to be using this software? Which OS is commonly used? Do users frequently use the same function multiple times in a row?

    • Reply
      jsmith1
      Those are some great questions! 1. I do not have statistics for the user base in general; however, within OIG the software is used primarily on laptops with a default resolution of 1366 x 768. A few users have larger monitors that likely have much higher resolutions. 2. By a large margin, the most common operating system is Windows 7 (see the more recent reply noting that Windows 10 may be in place by the time testing occurs). 3. Yes, it is common for the same module to be used multiple times. One of the primary examples is the random number generator for a simple random sample. In the current version of RAT-STATS this generator is also used for stratified statistical samples. To pull a stratified sample, the user pulls a separate simple random sample from the groups that make up each individual stratum. For example, the random number generator would be used 5 times when pulling a statistical sample that involved 5 strata. With estimation, a function may be used multiple times if there are multiple quantities being estimated as part of a review. For example, users may be interested in the overall dollar error amount and in the amount that was paid by the federal government.
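The per-stratum workflow described in point 3 above can be sketched as follows. This is an illustration only: the stratum names and sizes are made up, and RAT-STATS' own seed handling differs; it simply shows the simple-random-sample generator being invoked once per stratum (five times for a five-stratum design).

```python
import random

def stratified_sample(strata_sizes, sample_sizes, seed):
    """Return {stratum: sorted sample of unit indices}, drawing an
    independent simple random sample from each stratum in turn."""
    rng = random.Random(seed)
    return {
        name: sorted(rng.sample(range(1, size + 1), sample_sizes[name]))
        for name, size in strata_sizes.items()
    }

# A five-stratum design invokes the generator five times, once per stratum.
design = {f"stratum_{i}": 200 for i in range(1, 6)}
take = {name: 10 for name in design}
samples = stratified_sample(design, take, seed=2017)
```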

      • Reply
        jsmith1
        I recently learned that the agency will change over to Windows 10 sometime in 2017. The exact date is still uncertain. As a result, the testing machines may be either Windows 7 or Windows 10.

  • jsmith1
    That is a great question. During testing, the source code will only be used to check that the submission meets the rules of the challenge. For example, we will review the code to ensure that the steps for adding additional modules are accurately stated. The OIG will not share the source code with other participants or release it to the public. Moreover, submissions that are not selected as winners will not be used in part or in whole by the OIG. Unless you win the challenge, your submission (source code included) will only be used to judge your entry.

    • Reply
      L Ozeran
      That process may limit interest in participation. If that occurs, you might consider an approach that does not require source code until the final stage, and only as confirmation of the winning solution chosen.

  • L Ozeran
    Perhaps I am misunderstanding the requirements. Am I correct in reading that the submission must include the developed source code prior to being a finalist, prior to being a winner, and prior to any other potential payment?


Rules

In order to take part in this Challenge participants must create a software package that replicates the operation of four of the functions of the original RAT-STATS software: (1) Single Stage of Random Numbers; (2) Unrestricted Attribute Appraisal; (3) Unrestricted Variable Appraisal; and (4) Stratified Variable Appraisal. The steps required to add additional statistical modules to the software must be reasonable and fully documented, and the submission must contain all of the required elements (refer to the Submissions page for a complete list).

In addition, participants must agree to the following terms and conditions.

Teams of one or more members can participate in this Challenge. There is no maximum team size. Each team must have a captain, and each individual may only be part of a single team. Individual team members and team captains must register in accordance with the Registration Process section below. The role of the team captain is to serve as the corresponding participant with OIG about the Challenge and to submit the team’s Challenge entry. While OIG will notify all registered Challenge participants by email of any amendments to the Challenge, the team captain is expected to keep the team members informed about matters germane to the Challenge.

(1) To be eligible to win the Challenge prize, each participant (individual or entity) must —
a. register to participate in the Challenge under the rules promulgated by OIG as published in this Notice;
b. have complied with all the requirements under this section;
c. in the case of a private entity, be incorporated in and maintain a primary place of business in the United States or, in the case of an individual, be a citizen or permanent resident of the United States;
d. not be a Federal entity or Federal employee acting within the scope of their employment;
e. not be an employee of OIG, a judge of the Challenge, or any other party involved with the design, production, execution, or distribution of the Challenge or the immediate family of such a party (i.e., spouse, parent, step-parent, child, or stepchild); and
f. be at least 18 years old at the time of submission.

(2) Federal contractors may not use Federal funds from a contract to develop their Challenge submissions or to fund efforts in support of their Challenge submission.

(3) Federal grantees may not use Federal funds to develop COMPETES Act Challenge applications unless consistent with the purpose of their grant award.

(4) An individual or entity shall not be deemed ineligible because the individual or entity used Federal facilities or consulted with Federal employees during a competition if the facilities and employees are made available to all individuals and entities participating in the competition on an equitable basis.

(5) By participating in this Challenge, each individual (whether competing singly or in a group) and entity agrees to assume any and all risks and waive claims against the Federal Government and its related entities (as defined in the COMPETES Act), except in the case of willful misconduct, for any injury, death, damage, or loss of property, revenue, or profits, whether direct, indirect, or consequential, arising from participation in this Challenge, whether the injury, death, damage, or loss arises through negligence or otherwise.

(6) No individual (whether competing singly or in a group) or entity participating in the Challenge is required to obtain liability insurance or demonstrate financial responsibility in order to participate in this Challenge.

(8) By participating in this Challenge, each individual (whether participating singly or in a group) and entity grants to OIG, in any existing or inchoate copyright or patent rights owned by the individual or entity, an irrevocable, paid-up, royalty-free, nonexclusive worldwide license to use, reproduce, post, link to, share, and display publicly on the Web the submission, except for source code. This license includes without limitation posting or linking to the submission, except for source code, on OIG’s public facing website (http://www.oig.hhs.gov/compliance/rat-stats/index.asp). In developing its future software systems, OIG may include algorithms and software from Challenge entries and may consult with individuals or teams that submitted entries.  Thus, the license also permits OIG to develop a future software system, independently or with others, using any algorithms or software from Challenge entries, including those obtained from other Challenges or solicitations, and OIG may freely use, reproduce, modify, and distribute the resulting future software system without restriction.  Each participant will retain all other intellectual property rights in their submissions, as applicable.

(9) OIG reserves the right, at its sole discretion, to (a) cancel, suspend, or modify the Challenge through amendment to this Federal Register notice, and/or (b) not award any prizes if no entries meet the stated requirements. In addition, OIG reserves the right to disqualify any Challenge participants or entries in instances where cheating or other misconduct is identified.

(10) Each individual (whether participating singly or in a group) or entity agrees to follow all applicable Federal, State, and local laws, regulations, and policies.

(11) Each individual (whether participating singly or in a group) and entity participating in this Challenge must comply with all terms and conditions of these rules, and participation in this Challenge constitutes each such participant’s full and unconditional agreement to abide by these rules. Winning is contingent upon fulfilling all requirements herein.

(12) Each individual (whether participating singly or in a group) and entity grants to OIG and its contractors assisting OIG with this Challenge the right to review the submission, study the algorithms and the code, and run the software on other sets of images.

(13) Submissions must not infringe upon any copyright, patent, trade secrets, or any other rights of any third party. Each individual (whether participating singly or in a group) or entity warrants that he or she, or the team, is the sole author and owner of any copyrightable work that the submission comprises, and that the submission is wholly original with the participant or is an improved version of an existing work that the participant has sufficient rights to use and improve. In addition, the submission must not trigger any reporting or royalty obligation to any third party. A submission must not include proprietary, classified, confidential, or sensitive information.

(14) The licenses for any code used as part of a submission must be compatible with each other and must allow OIG to distribute and modify the software both within and outside the agency without incurring any reporting or royalty obligations to any third party.

(15) The submission must not contain malicious code such as viruses, timebombs, cancelbots, worms, trojan horses, or other potentially harmful programs or other material or information.

(16) The submission must be unique and must not represent a modification of a previous submission from another team.

(17) Submitted software must be fully functional and operate correctly on Microsoft Windows systems configured in accordance with the applicable United States Government Configuration Baseline (USGCB) and applicable configurations (https://usgcb.nist.gov/).  The group policy settings associated with this configuration are also available on the NIST website (https://usgcb.nist.gov/usgcb/microsoft_content.html).

(18) Submitted software must be a stand-alone product that is designed for end users to run in the standard user context without requiring elevated administrative privileges.

(19) Submitted software must not require or make use of any network capabilities.

(20) Submitted software must be section 508 compliant. For example, it must be possible to run all functions using a keyboard, the software must include a well-defined indicator of current program focus, any results provided in image format must also be available in text, and color coding shall not be used as the only means of conveying information. For more details, refer to 36 CFR § 1194.21.
