11 Aurora Health Care: Using Electronic Medical Records for a Randomized Evaluation of Clinical Decision Support
Laura Feeney (J-PAL North America, Massachusetts Institute of Technology) and Amy Finkelstein (Massachusetts Institute of Technology)
11.1 Summary
This chapter is a case study that describes the process for sharing and using individual-level data from electronic medical records (EMR) for a randomized evaluation with Aurora Health Care. Aurora is a large, private, not-for-profit, integrated health care provider in Wisconsin and Illinois, comprising fifteen hospitals and more than 150 clinics in thirty communities.
Researchers from the Massachusetts Institute of Technology (MIT) and the Abdul Latif Jameel Poverty Action Lab’s (J-PAL) North America office partnered with Aurora Health Care to conduct a randomized evaluation of the effect of clinical decision support software on the ordering of high-cost imaging (e.g., MRI or CT scans) by health care practitioners (Doyle et al. 2019).
Having established data sharing agreements and worked with data from a variety of other health care partners, the authors believe this case study is representative of the process, and some of the challenges, of data sharing and data use in similar contexts. Since completing this research study, one of the co-authors, along with other researchers associated with J-PAL, has continued to work closely with Aurora-based researchers to develop and identify opportunities for research collaboration.
In this case, the delivery of the intervention and the measurement of outcomes were conducted through the EMR system, making access to administrative data a critical feature of the research project. The data used for this research included characteristics of patients and health care providers, an indication of the patient’s health problem (e.g., headache), scan orders (e.g., x-ray, CT scan, MRI), and a score indicating the relative appropriateness of the scan order. The data set included this information for scans ordered between November 1, 2015, and December 15, 2017, and covered the study population of 3,511 Aurora providers.
Aurora shared in the motivation to conduct research and may have benefited operationally from the insights gained throughout the process of obtaining approval to share data and preparing data for transmission. Nonetheless, the research team had to overcome concerns over protecting patient and provider confidentiality and the cost of providing data access to researchers. The research team addressed these challenges by providing funding for data extraction and by agreeing to take possession of only de-identified data.
The case describes the process by which the research team sought approval to conduct the study and access data, worked to understand data not originally designed for research, and addressed the challenges of working with de-identified data.
Approval to conduct the research and share data took several months to obtain. The data necessary for this research were complex and drawn from multiple tables and systems not designed for research. Gaining access and then making the data usable for research in de-identified form required a significant amount of work from analysts at both Aurora and MIT, as well as strong communication and trust between teams.
11.2 Introduction
11.2.1 Motivation and Background
This case study describes the process for sharing and using individual-level data from EMRs for a randomized evaluation of the effect of clinical decision support software on the ordering of high-cost imaging. The evaluation was conducted at Aurora Health Care—a large, private, not-for-profit, integrated health care provider in Wisconsin and Illinois—by researchers affiliated with MIT, J-PAL North America, and Aurora.
Research was the motivation for making data available in this case study. In 2014, the US Centers for Medicare & Medicaid Services (CMS) announced an impending mandate that, in order for high-cost scans for Medicare beneficiaries to be reimbursed, such scans must be ordered through an approved clinical decision support (CDS) system. CDS is software that consults clinical guidelines to deliver assessments of procedure appropriateness to providers at the time of order entry. CDS generates appropriateness scores for scan orders and other practices, such as prescribing medication, given clinical indications and patient demographics. If a scan order meets a set of criteria, CDS generates a best practice alert (BPA) that is shown to the provider. While several observational studies had been conducted to assess the impact of CDS on provider behavior, there had been no large-scale randomized evaluations.
The impending mandate from CMS to use a decision support mechanism for imaging orders generated the motivation to engage in this particular research project.106 Researchers affiliated with MIT and J-PAL North America—Sarah Abraham, Joseph Doyle, Laura Feeney, and Amy Finkelstein—as well as a radiologist at Aurora—Dr. Sarah Reimer—conducted a randomized evaluation of CDS on scan ordering at Aurora Health Care. Aurora is the largest health care system in Wisconsin, comprising fifteen hospitals and more than 150 clinics in thirty communities. In December 2016, the research team enrolled 3,511 Aurora health care providers in the study and randomly assigned half of them to receive CDS (treatment group) and half to order as usual (control group). For the evaluation, the CDS software was configured to trigger a BPA when a scan order met a set of criteria and was ordered by a provider assigned to the treatment group.
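To make the assignment step concrete, the following is a minimal sketch in Python of a simple 50/50 random assignment. The function name, seed, and language are illustrative assumptions; the study’s actual assignment procedure is not described at this level of detail.

```python
import random

def assign_arms(provider_ids, seed=2016):
    """Randomly assign half of the providers to treatment and half to control."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    ids = sorted(provider_ids)  # sort first so input order cannot affect results
    rng.shuffle(ids)
    cutoff = len(ids) // 2
    return {pid: ("treatment" if i < cutoff else "control")
            for i, pid in enumerate(ids)}

# With 3,511 providers, this yields 1,755 in one arm and 1,756 in the other.
arms = assign_arms(range(1, 3512))
```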
Aurora Health Care was planning to implement a CDS system in order to prepare for the CMS mandate; participating in the study enabled the institution to gain practical knowledge for internal decision-making and planning. The timing of the study allowed the research team to provide information and evidence relevant to the upcoming policy change.
Aurora was an active collaborator on this study. The research team included a co-investigator from Aurora, and Aurora Health Care houses the Aurora Research Institute (ARI), which is committed to supporting research that leads to new and improved ways to care for people and manage community health. As a result, the relationship was a mutually beneficial research partnership, not simply a data exchange between a provider and a research team.
Nonetheless, leadership at Aurora had significant concerns about making data available to external parties. The costs of creating and maintaining health care data for administrative purposes (i.e., even before preparing such data for use by external researchers) are significant. These costs are driven by factors such as time spent by clinicians entering data, time spent by IT administrators supporting the system, infrastructure costs of computing, and time spent by analytics teams extracting and interpreting data. Although the data exist as a natural byproduct of health care administration, some at Aurora raised concerns about giving away data without compensation, given their value to researchers and other external parties.
The team addressed leadership concerns in two ways:
First, the research team made the case that sharing data with the external research team would enable rigorous research that would itself be valuable to Aurora Health Care. Aurora would learn about the effects of CDS in advance of the mandate. Moreover, because the team partnered with a researcher from ARI, Aurora benefited from the research generation process in terms of producing publications and publicity.
Second, the research funding from Arnold Ventures (then known as the Laura and John Arnold Foundation) included funds for a sub-award to Aurora to support the randomized evaluation of the CDS system. Part of this award compensated Aurora’s business intelligence and data analytics teams for their time spent extracting, transforming, and preparing the data for this project and working with the research team to ensure a full understanding of the data. The award also provided funding for server space needed to store the snapshots and extracts of the data created for this project. Although it did not cover the costs of the initial creation and maintenance of the data, external funding with an allowance for overhead helped to alleviate leadership concerns and secure buy-in for the project.
11.2.2 Data Use Examples
The research team analyzed scan order outcomes across the treatment and control groups to determine the impacts of CDS on the ordering behavior of providers. Aurora’s administrative data, which are primarily housed within its EMR and linked by design, provided information on scan orders and completions, patient encounters and patient demographics, provider employment and demographics, and health care encounters when the BPA was shown. The team also received data on appropriateness scores, alternative scans, and the version of the ruleset used within Aurora’s instance of the CDS.
The outcome measures are based on scan-request-level data that contain, for each scan request ordered by one of the study’s providers, the indication and imaging order requested as well as the appropriateness score and the set of alternative scans available. These data allow researchers to identify the appropriateness of the order as well as whether the BPA would be shown if the provider were in the treatment group. A data set of ordered imaging scans allowed the team to link each request for a scan to the provider associated with the order. Researchers received an alert-level data set that allowed the team to observe which orders showed a BPA and to confirm that a BPA was shown only when the order was entered by a treatment provider and according to the criteria set for displaying the BPA.
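As a sketch of the kind of consistency check described above, the following Python snippet (using pandas, with hypothetical data frame and column names) verifies that every recorded alert links to a known order and that alerts appear only for treatment-arm providers; it is illustrative, not the team’s actual validation code.

```python
import pandas as pd

def check_bpa_integrity(alerts: pd.DataFrame, orders: pd.DataFrame) -> None:
    """alerts: one row per BPA shown, with an order_id column.
    orders: one row per scan order, with order_id, provider_id, and arm columns."""
    shown = alerts.merge(orders, on="order_id", how="left", validate="m:1")
    # Every alert must link back to a known scan order.
    assert shown["provider_id"].notna().all(), "BPA with no matching order"
    # Only orders entered by treatment-arm providers may trigger a BPA.
    assert (shown["arm"] == "treatment").all(), "BPA shown for a control provider"
```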
More details on this study can be found in Doyle et al. (2019) and are summarized in a J-PAL Evaluation Summary.
11.3 Legal and Institutional Framework
11.3.1 Institutional Setup
The parties to the data access mechanism in this case study are Aurora Health Care (the data provider and custodian) and MIT (the research team’s academic institution). These institutions executed the legal agreement necessary to gain access to the data and approved the underlying research activities. Some of the data needed for the study were collected by a third party, the National Decision Support Company (NDSC), which built the clinical decision support software that integrates with Aurora’s Epic EMR. These data, which did not contain direct personal identifiers, were sent to Aurora, linked to the Aurora data by Aurora analysts, and then transferred to the research team. Existing business and data sharing agreements between NDSC and Aurora covered this arrangement; no additional agreements or amendments were needed for this evaluation.
11.3.2 Legal Context for Data Use
In the United States, the Privacy Rule of the Health Insurance Portability and Accountability Act107 (HIPAA) regulates the sharing of health-related data generated by health care entities such as Aurora Health Care. It allows, but does not require, sharing data for research. Implications of the HIPAA Privacy Rule for research, as well as a list of guides maintained by the US Department of Health and Human Services, are discussed in Using Administrative Data for Randomized Evaluations (Feeney et al. 2015).
The Privacy Rule is complex, with different requirements and obligations depending on the purpose of a data sharing arrangement. Even experienced legal professionals have difficulty interpreting its requirements with respect to research. In this environment, and with strict penalties and liability for non-compliance, many entities subject to HIPAA (referred to as covered entities) are very cautious about how and with whom they share data. Entities not already subject to HIPAA may hesitate to obligate themselves to comply with the Privacy Rule’s stringent requirements.
The Privacy Rule defines three levels of data: research identifiable data, limited data sets, and de-identified data. Identifiable data may only be shared for research purposes with individual authorization from each patient involved or a waiver of such authorization approved by an institutional review board (IRB) or privacy board (a process similar to informed consent or waivers of consent as required by the Common Rule108). Limited data sets may be shared either with individual authorization or with a waiver of authorization from a privacy board or IRB and with a data use agreement (DUA) outlining the purpose of the data share, the conditions under which data will be stored, and other stipulations. HIPAA permits health care providers to share de-identified data for research purposes without further obligations (U.S. Department of Health & Human Services 2018).
Neither limited data sets nor de-identified data sets may contain direct identifiers such as name, medical record number, or other account numbers. For research purposes, one of the most consequential differences between these data types is the treatment of dates. In a limited data set, many elements of dates are allowed. In a de-identified data set, all of the following must be excluded: all elements of dates (except year) for dates directly related to an individual, including date of birth, admission date, discharge date, and date of death; and all ages over 89 and all elements of dates (including year) indicative of such age, except that such ages and elements may be aggregated into a single category of age 90 or older.
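As a concrete illustration of these date and age rules, the sketch below (in Python, with hypothetical field names; not drawn from the study’s code) reduces individual-level dates to years and collapses ages over 89, in the spirit of the Safe Harbor requirements just described.

```python
from datetime import date

def safe_harbor_fields(birth_date: date, admit_date: date, as_of: date) -> dict:
    """Reduce date fields per the de-identification rules described above."""
    age = (as_of - birth_date).days // 365  # approximate age in whole years
    if age > 89:
        # Ages over 89 are collapsed to "90+", and no date element indicative
        # of such age (including birth year) may remain.
        return {"age": "90+", "birth_year": None, "admit_year": admit_date.year}
    # Otherwise, only the year of individual-level dates may be retained.
    return {"age": age, "birth_year": birth_date.year, "admit_year": admit_date.year}
```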
11.3.3 Legal Framework for Granting Data Access
To accommodate the preferences of both Aurora’s and MIT’s legal teams, the research team did not attempt to access identifiable data for health care providers or patients. While the team initially pursued access to a limited data set, which would have allowed the inclusion of dates of health care encounters and scan orders, researchers ultimately used a de-identified data set at the request of Aurora’s legal team. This enabled the use of a relatively simple non-disclosure agreement (NDA), rather than the more stringent data use agreement requirements associated with limited or identified data sets. This was also important for gaining the agreement of several satellite sites of Aurora, which were wary about sharing data for this study. Their agreement was crucial, as their data were thoroughly integrated with the rest of the system and could not feasibly be excluded from the study.
The NDA was supplemented by a sub-award contract between MIT and Aurora that included additional provisions typically found in a data use agreement. For example, the sub-award contained provisions pertaining to intellectual property derived from confidential information, obligations for the return of information upon request, the survival of confidentiality obligations, and the publication of a public data set and the scholarly work to be produced using the data.
The research team satisfied the HIPAA definition of de-identified data using the Safe Harbor method (45 CFR 164.514(b)). Under this method, 18 identifiers of the individual, or of relatives, employers, or household members of the individual, are removed from the data set, and the data provider must “not have actual knowledge that the information could be used alone or in combination with other information to identify an individual who is a subject of the information.”
Because the study involved data from an intervention with living individuals, Aurora required review of the intervention and the data sharing agreement by their IRB. The Aurora IRB required the removal of minors from the data set and requested that the team use their best effort to exclude data on prisoners and pregnant women from the data sent outside of Aurora, but recognized that imprisonment and pregnancy are variable statuses that cannot always be readily discerned.
Through the sub-award agreement, Aurora retained intellectual property rights to their confidential information but permitted the publication of a public data set (see Doyle et al. 2018). The prime award between the Laura and John Arnold Foundation (now Arnold Ventures) and MIT required that the research team publish the data set and code needed to replicate the analysis in a repository to the “maximum extent” allowed by privacy laws, IRBs, and applicable binding agreements. During negotiation of the sub-award to Aurora, the MIT and Aurora teams negotiated acceptable terms to meet this requirement. Those terms, which outlined the level of aggregation and anticipated variables to be included in the data set, are copied in the appendix to this chapter.
Both the NDA and the sub-award recognized that the purpose of the agreement and data sharing was to generate an academic research paper, and both required the provision of a thirty-day period for Aurora to review outputs before publication. Aurora’s review was limited to ensuring that confidential information was not disclosed; this satisfied MIT’s legal team and the research team’s desire to maintain academic freedom to publish without any perception of possible censorship.109 Clearly specifying the intended purpose of the research and the intended output of an academic paper (rather than, for example, a patent, process, or product) may have expedited agreement to terms involving intellectual property and publication.
11.4 Making Data Usable for Research
Making the data usable for research required overcoming several hurdles and a significant amount of time from both Aurora- and MIT-based data analysts. First, the team worked to identify relevant data sets, tables, and variables for the study; to extract data from these tables; and to understand how to create a linked panel data set. Second, researchers had to interrogate the data generation process in order to understand each variable beyond existing documentation. Third, the team created a process to link individual-level data from multiple sources and generate indicators for relative dates in order to create a panel data set that would comply with HIPAA’s de-identification standards.
11.4.1 Identifying Relevant Data
Aurora uses the Epic EMR system, the most common in the US, which includes an industry-standard radiology information and order entry system. ACR Select, third-party software designed by NDSC, integrates with Epic to generate best practice alerts using a ruleset developed by the American College of Radiology (ACR).
A massive amount of data is stored or generated by Epic and integrated services such as ACR Select. These data include patient and provider characteristics, information entered during health care visits (termed “encounters” in Epic), medications prescribed, images ordered, and much more. Much of the data from the EMR is created automatically through user interactions and is used for operational purposes. For example, data produced when a health care provider orders a radiology image are used to generate a best practice alert if the order meets certain criteria, to send information to the radiology department about the order, to generate information for billing purposes, and to tie the scan order to the correct patient information. These data are not typically used after these automated processes occur. Data are stored in a relational database structure (SQL in the Aurora instance) with a primary key (a field that uniquely identifies each entry) and foreign keys (a column or group of columns that provides a link between data in two tables).
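To make the primary/foreign key structure concrete, here is a minimal sketch in Python (using pandas, with toy table and column names that are illustrative assumptions, not Aurora’s schema) of how two relational tables link through a shared key.

```python
import pandas as pd

# Toy stand-ins for two relational tables: patient_id is the primary key of
# the patients table and a foreign key in the orders table.
patients = pd.DataFrame({"patient_id": [1, 2], "birth_year": [1960, 1985]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "patient_id": [1, 1, 2],
                       "modality": ["MRI", "CT", "MRI"]})

# The foreign key makes linking straightforward: a join attaches patient
# attributes to each scan order.
linked = orders.merge(patients, on="patient_id", how="left", validate="m:1")
```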
In this structure, linking between tables is straightforward, but identifying the correct table(s) of interest can be a challenge. Depending on the use case for the data, data may be stored or accessed in a production database, a reporting database, or an interactive records viewer. These systems are updated at different frequencies, and some historical data may only be accessed through the data warehouse. Access to these systems is distributed across multiple teams within Aurora and within the software companies.
Data for this project came from the Epic reporting database, Clarity. Given the wide range of use cases for the data, the breadth of patient and health information stored, and the history of provider actions recorded within the database, Clarity contains an overwhelming number of tables.110 Many of these had not been explored previously by Aurora analysts. For example, the research team wanted to confirm through data whether a BPA had been displayed in order to confirm that random assignment had been implemented properly. Because there had never been a business use for this data set, analysts at Aurora did not know whether these data would exist. Confirming the existence of these data and the name of the table was a team effort between MIT, Aurora, and NDSC.
11.4.2 Understanding Beyond Documentation
The research team gained a deeper understanding of the data through observing and speaking with providers about their interactions with the EMR and through detailed discussions with the data analytics team at Aurora. During several visits to Aurora, the research team directly observed various health care providers interacting with the EMR and spoke with them about how they interact with the system and interpret the data entry fields. Researchers learned, for example, that there are several points at which providers are asked to enter a health indication (e.g., headache, broken bone). These include a visit diagnosis used for insurance billing, a separate indication to attach to a radiology scan order, and an optional field for additional instructions to a radiologist. Speaking with health care providers helped researchers understand which of these indications was least likely to be impacted by an intervention, and on which indications providers spent relatively more time to ensure precision and accuracy. Observing and discussing the order in which providers entered data and moved through screens and pop-ups also enabled the team to better interpret patterns in the data, such as how a cancelled or revised scan order may appear in the database.
While some documentation of variable definitions exists, this documentation is targeted to users with the original data use-case (i.e., health care operations and delivery), not to researchers. In addition, the research team found that definitions and terminology may vary based on context and familiarity with data. For example, scans receive an appropriateness score ranging from one to nine. The underlying data set records a numerical value outside of that range under certain circumstances. The MIT team initially referred to those scores with the numerical value, while analysts familiar with the CDS system would refer to them as “unscored.” Similarly, within a health care setting, a health care “encounter” can describe any interaction between a provider and patient, regardless of clinical setting. However, on an initial reading of data documentation by someone without a clinical background, an “encounter” may connote an in-person interaction between provider and patient. Through frequent communication with Aurora’s data and clinical experts, the MIT team was able to develop a much deeper understanding of the data and definitions than relying solely on written documentation.
Both processes of locating and interpreting data required substantial input from analysts and providers at Aurora as well as strong communication between multiple teams with varying perspectives and backgrounds in research and data analysis. In-person site visits were key to developing strong, trusting work relationships that enabled the team to make the data usable for this research. Making in-person connections with the Aurora teams and explaining the scope of the project enabled those teams to engage deeply, think critically about how to prepare the data for research, and make suggestions that may not have occurred to the MIT-based team. Without these connections, data tasks may have been assigned without context, precluding adjustments that could improve the quality or relevance of the task. These relationships enabled open dialog for discussing data questions. In-person meetings fostered trust and a shared vision, affirming the dedication of all teams to the project’s success. Reflecting this commitment, the Aurora team was readily available to tackle even the most difficult data questions and actively participated in weekly project calls.
By combining MIT’s insight on the planned analysis with the clinical and systems expertise brought by Aurora, the team could determine whether additional information or data checks were necessary, ensuring that the results of potential analyses were fully supported by the underlying data and accurately represented clinical practice. If additional data were necessary to improve the quality of analyses, Aurora was willing to check IRB compliance and quickly send the new data to the MIT team.
11.4.3 Linking De-Identified Data
Under the data sharing agreement, researchers were not able to receive patient, provider, or encounter IDs. To support a stable identifier for these entities (allowing for a replicable process as well as the appending of additional data), a surrogate mapping process was created and verified by data analysts at Aurora. The mapping process populated a two-column table for each entity: the first column represents the source system identifier, and the second column represents the surrogate mapping ID produced by a random number generating procedure within SAS. Every ID extracted from the source data is merged into the mapping table for its respective entity. If an ID does not match an existing entry in the table, the ID is inserted, and a new unique surrogate mapping ID is produced for the source ID. The MIT research team wrote pseudo code and template SAS code for the Aurora team to use to verify that the de-identification and linking processes produced consistent results and accurate linkages. For example, pseudo code and documentation described a process to run the same merge twice and assert that the resulting data sets are identical. Other parts of the code print the number of unique records in each data set to be merged, the number of records from each data set that were successfully linked, and the number of unique IDs at each stage. This process was repeated at regular intervals throughout the study period.
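The sketch below illustrates the surrogate mapping and stability check in Python (the actual process used SAS, and the function and field names here are illustrative assumptions): new source IDs receive fresh random surrogates, existing IDs keep theirs, and re-running the same merge must reproduce identical output.

```python
import random

def update_mapping(mapping: dict, source_ids) -> dict:
    """Insert any new source IDs into the crosswalk with a fresh, unique,
    randomly generated surrogate ID; existing IDs are left unchanged."""
    rng = random.SystemRandom()
    used = set(mapping.values())
    for sid in source_ids:
        if sid not in mapping:
            surrogate = rng.randrange(10**9)
            while surrogate in used:  # surrogates must stay unique
                surrogate = rng.randrange(10**9)
            mapping[sid] = surrogate
            used.add(surrogate)
    return mapping

def apply_mapping(mapping: dict, records: list) -> list:
    """Swap source IDs for surrogate IDs before data leave the provider."""
    return [{**r, "patient_id": mapping[r["patient_id"]]} for r in records]

# Verification in the spirit of the checks described above: the same merge,
# run twice against a stable crosswalk, must produce identical output.
crosswalk = update_mapping({}, [101, 102, 103])
records = [{"patient_id": 101, "order": "MRI"}, {"patient_id": 102, "order": "CT"}]
assert apply_mapping(crosswalk, records) == apply_mapping(crosswalk, records)
print(f"{len(crosswalk)} unique source IDs mapped")
```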
Aurora maintained this crosswalk and confirmed the stability of IDs within data pulls by running and re-running the code and ensuring identical outputs. Data from Aurora were sent in cumulative batches; batch two contained everything in batch one plus any new data generated since batch one was created. By comparing these cumulative data sets, researchers confirmed the process was stable across data pulls.
No action was taken to remove or de-duplicate records in this case, as a small number of duplicates was unlikely to cause issues given the planned provider-level analysis. Ensuring each patient is uniquely identified by a single Aurora internal ID is as much a priority for health care administration as it is for research. As an additional validation exercise, however, the Aurora data team created an algorithm to identify potential duplicates. First, a set of criteria was defined for assessing potential duplicates: same first and last name; same medical record number (MRN); same social security number (SSN); same Medicare number; same Medicaid number; or same date of birth or four-decimal address geocode plus first and last initials. All potential duplicate matches were evaluated in terms of their similarity on previously standardized identifiers. Where possible, edit distance scores were calculated to capture the number of changes that would have to be made to a comparison identifier to turn it into the target (e.g., the last names Smith and Smyth have an edit distance of 1, reflecting that one letter would have to be changed to turn each into the other). Records with sufficient difference in identifiers were ruled out as potential duplicates. From the remainder of records, researchers estimated that a maximum of 0.998 percent of records could be duplicates, with the actual proportion likely to be between 0.376 percent and 0.607 percent. Given the type of analysis planned, these ranges did not cause concern.
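For readers unfamiliar with edit distance, here is a minimal, self-contained implementation of the standard Levenshtein distance in Python; it is a generic illustration of the concept, not Aurora’s matching code.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character insertions,
    deletions, or substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute ca -> cb
        prev = curr
    return prev[-1]

assert edit_distance("Smith", "Smyth") == 1  # the example used in the text
```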
11.4.4 Working with Relative Dates
Under HIPAA, de-identified data may not contain exact dates. However, for the planned analysis, researchers needed at least some information on the relative sequence and time period of scan orders. For example, the team needed to identify scan orders from the pre-intervention period and for three-month periods within the year-long intervention period. Aurora developed a patient-specific reference date, converted each date variable into the number of days from that reference date, and sent only the relative days from that date to the MIT research team. The patient-specific reference date was necessary, as normalizing all dates to a common reference date would reveal more individual-level information and add specificity that would increase the risk of re-identification. Aurora maintained a crosswalk of patients’ IDs and reference dates throughout the study to ensure consistency and replicability within the study period. Finally, Aurora created a series of binary variables to indicate the timeframe of an encounter or scan from the beginning of the intervention (binned in 2-month periods).
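A minimal sketch of this transformation in Python (with hypothetical field names and an illustrative bin width; the actual processing was done by Aurora’s analysts in SAS) is shown below: exact dates are reduced to days from a patient-specific reference date plus a coarse period indicator.

```python
from datetime import date

INTERVENTION_START = date(2016, 12, 1)  # illustrative study milestone

def relativize(event_date: date, patient_reference: date) -> dict:
    """Replace an exact date with the kinds of fields that left Aurora:
    days elapsed since a patient-specific reference date, plus a coarse
    bin (roughly 2 months wide) relative to the intervention start."""
    days_from_ref = (event_date - patient_reference).days
    bin_index = (event_date - INTERVENTION_START).days // 61
    return {"days_from_ref": days_from_ref, "period_bin": bin_index}

# Example: events keep their relative ordering and spacing for a given
# patient, but exact calendar dates are never transmitted.
ref = date(2016, 6, 15)
print(relativize(date(2017, 1, 10), ref))  # {'days_from_ref': 209, 'period_bin': 0}
```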
Though the MIT research team provided draft code, the process for making the data usable for research without identifiers or dates required significant processing of the study data by Aurora staff. Aurora’s analysts created queries to extract data, merge data across tables, exclude data from certain patient groups as required by the IRB (described further in the Legal Framework for Granting Data Access section), and de-identify the data. This required a substantial time investment by the Aurora analyst team, which was enabled, in part, by adequately budgeting for and reimbursing the time spent by these analysts in the research sub-award. Further, from the MIT research team’s perspective, this amount of preprocessing without the ability to directly review the queries and data transformations required strong communication and trust between the two teams.
Two primary teams at Aurora had access to data and skills to process data. One team’s primary mission was business intelligence: mainly safeguarding data and developing routine reports on demand. Another was housed within the Aurora Research Institute with a broader research mission. Identifying analysts who thought about data like researchers and who found personal or professional interest in learning new research or analytic techniques helped to facilitate the data preparation and research process.
11.5 Protection of Sensitive and Personal Data: The Five Safes Framework
In this case study, the research team describes the sharing of data for a single research project, rather than the development of an overarching framework for providing secure access to data. Aurora had an established process for some steps; others were developed to meet the specific needs of the clinical decision support evaluation. Overall, assessments and approvals for the project as a whole took several months, during which researchers were cautious about devoting additional resources to a project that might not be approved. As such, the time for assessments and approvals pushed back the timeline for the research study. This timing is consistent with experience from other efforts to access administrative data and typically reflects long pauses between rounds of iteration or document review. Converting data from a raw, identified form into a linked, de-identified, and usable form ready to share with the MIT-based team may have taken a greater number of person-hours than working through the approvals process. However, this process of preparing data for transfer and use did not have a significant impact on the timeline of the study, since the team was able to simultaneously make progress on other aspects of the research.
11.5.1 Safe Projects
The application and review process for the research and data sharing involved several formal steps established by Aurora. A research coordinator and a team of regulatory support staff supported coordination between the Aurora principal investigator (PI) and various teams and review systems at Aurora. Informally, gaining support and generating enthusiasm among leaders at Aurora helped to facilitate the process and maintain the momentum needed to work through concerns. Much of the credit for the approval of the study is attributable to the support of key leaders within Aurora and to the research team’s ability to anticipate and address the unarticulated concerns and motivations of Aurora reviewers.
Initially, the academic research team connected with Dr. Sarah Reimer (the Aurora PI) through a memo describing the proposed research design. The Aurora Research Institute requires any potential research projects to receive Research Administrative Preauthorization (RAP) before proceeding to IRB review; Dr. Reimer, in collaboration with the MIT investigators, prepared and submitted an application for review. Limited information about this process is available on Aurora’s public website (Aurora Health Care n.d.); additional information is available to potential Aurora investigators via e-mail or contact with Aurora’s compliance and regulatory staff. Proposed research should be clinically and scientifically significant, be feasible to execute, and have sufficient resources available. Proposals are reviewed by the service line director under which the project falls; proposals are assessed to ensure they are of the highest quality and align with Aurora’s philosophies and values. This proposal was reviewed by the director of Investigator-Initiated Research.
Once a RAP application is approved, projects that are determined to involve human subjects must be reviewed by Aurora’s IRB.111 For this research study, the IRB requested that the team submit two separate protocols: one for a retrospective review of historical data and another for the intervention and prospective data. The review board required additional documentation of the scientific justification for the research design as well as a detailed description of data security experience and plans.
The research (including the use of data and the steps involved in implementing the randomized evaluation of the CDS system) had to be approved by the entire medical group, the primary care council, three informatics committees, the executive committee of a subset of Aurora hospitals, the finance group, the chief medical officer, the chief transformation officer, the chief compliance officer, the chief information officer, the research legal team, the data security team, and the IRB. This process took several months, multiple iterations, and justification at each stage.
The RAP application was submitted and approved in December 2015; IRB applications were submitted in January 2016; and conditional IRB approvals were received in February and March 2016, which enabled the research team to proceed with project planning and to begin negotiating a data use agreement. The full approval process required iteration between the IRB and data sharing review committees (a requirement for full approval by the IRB was a finalized data use agreement, and a requirement for approval of a data use agreement was approval by an IRB). The team executed an agreement to receive a limited amount of data in the fall of 2015; they finalized an agreement to share the more detailed data necessary for the full research study in September 2016, with additional variables added through a modification in January 2017.
11.5.2 Safe People
Aurora does not have a publicly available routine process for assessing researchers. As a part of the request for data and IRB approval, a memo was sent to Aurora describing the previous experience of the MIT team’s two lead investigators—Amy Finkelstein and Joseph Doyle—with using confidential data and protected health information for research while maintaining high levels of security; this was accompanied by their CVs to demonstrate significant and relevant research experience. All investigators and research assistants completed CITI or NIH training in human subjects research; these trainings are commonly used in the United States to certify that researchers understand the rules and procedures governing ethics and safety when conducting human subjects research.
Only de-identified data were shared with non-Aurora researchers, resulting in a straightforward data handling procedure without further inspection or oversight from Aurora.
11.5.3 Safe Settings
The data sharing agreement allowed the MIT team to access only data sets that were de-identified per the HIPAA Safe Harbor method. These data were shared via Secure File Transfer Protocol (SFTP) and stored on a secure, encrypted server maintained by IT professionals at the MIT Department of Economics. Researchers accessed this server using an encrypted Secure Shell (SSH) protocol after connecting to the MIT VPN, which utilizes an independent authentication system.
The MIT investigators and data analysts made several visits to Aurora to meet with the data teams but were not permitted to directly interact with or view the raw data. Access to raw, identified data may have expedited the process of linking data sets and ensuring a replicable and consistent de-identification process. However, this level of access likely would not have reduced the time spent by either team in understanding and interpreting the data.
11.5.4 Safe Data
As described in the Legal Context section above, most of the data generated by a health care provider are protected by the HIPAA Privacy Rule. Pricing and billing data, even when completely de-identified, are considered sensitive, as they could be used by competing health care systems or by insurance companies during pricing negotiations. The specifications of the data that would be released outside of Aurora, and of what could be released in a public use file, were negotiated specifically for this project. The specifications are included in the Appendix.
In this case, it was possible to conduct the analysis on data that were de-identified at the patient and provider level per the HIPAA Safe Harbor method. This process was conducted by analysts at Aurora. As discussed in the Making Data Usable for Research section, a process was defined and followed specifically for this research study, and the MIT-based team consulted with the Aurora analysts to ensure record linkages would be replicable and stable throughout the research process.
11.5.5 Safe Outputs
The agreements between Aurora and MIT explicitly acknowledged and permitted the publication of scholarly work that would include analytic results based on the confidential information (i.e., the de-identified data shared by Aurora).112 The sub-award agreement also permitted the creation of a public data set that would be able to replicate the published results.
To mitigate disclosure risk, the public data set was aggregated to the provider level rather than the scan-order level.
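As an illustration of this kind of aggregation, the following Python sketch (using pandas, with hypothetical column names; the actual variables in the public file are described in the Appendix) collapses scan-order-level rows to one row per provider, keeping only counts and averages.

```python
import pandas as pd

def build_public_dataset(orders: pd.DataFrame) -> pd.DataFrame:
    """Collapse scan-order-level rows into one row per provider so that no
    order- or patient-level detail remains in the released file."""
    return (orders
            .groupby(["provider_id", "arm"], as_index=False)
            .agg(n_orders=("order_id", "count"),
                 n_high_cost=("high_cost", "sum"),
                 mean_patient_age=("patient_age", "mean")))
```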
The data sharing agreement between Aurora and MIT gave Aurora a thirty-day period to review any scholarly work for disclosure of confidential information. This review period was used to review both the written manuscript and the public data set. The manuscript was reviewed by Sarah Reimer, the Aurora-based investigator. The data set was reviewed by Dr. Reimer as well as by the manager of Research Analytics who oversaw data preparation throughout the study. The manager reviewed each measure in the data set to ensure patient and provider privacy and requested approval from the vice president of Research Development & Business Services and the president of the Aurora Research Institute. Per the data sharing and sub-award agreements, this assessment was limited to reviewing for disclosure of confidential information. Such a limitation is often required by academic institutions such as MIT to mitigate the risk of suppressing results or otherwise interfering with, or casting doubt upon, academic freedom and integrity.
Any future outputs from the shared data would need to undergo the same review process initiated by the research team; one example is described below.
11.6 Data Life Cycle and Replicability
11.6.1 Preservation and Reproducibility of Researcher-Accessible Files
Aurora Health Care does not actively maintain the researcher-accessible de-identified files made available for this research study. The files sent to MIT were snapshots of a data warehouse, which is periodically updated, with the potential for certain values to be over-written; for example, the status of a scan order will change as the scan is performed, changed, or canceled. The MIT team did not receive the code used by Aurora to extract data nor did the team request that this code be maintained in perpetuity.
Within the study period, the MIT team received data in regular updates, typically every two to four weeks. These data were created when an analyst at Aurora manually initiated a query to extract data. Each data extract built cumulatively on the last, enabling researchers to quantify the extent of any changes. For each data extract, the team documented the date it was received, the description of the files and variables received, and the e-mail communications related to this extract.
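Because each extract was cumulative, a simple automated comparison can quantify what changed between pulls. The sketch below (in Python with pandas; the data frame and key names are illustrative assumptions, not the team’s actual code) checks that no records disappeared and returns the rows added since the previous extract.

```python
import pandas as pd

def compare_batches(prev: pd.DataFrame, curr: pd.DataFrame, key: str) -> pd.DataFrame:
    """Each cumulative extract should contain every record from the previous
    one; flag anything that disappeared and return the newly added rows."""
    merged = curr.merge(prev[[key]], on=key, how="outer", indicator=True)
    missing = merged[merged["_merge"] == "right_only"]
    assert missing.empty, f"{len(missing)} records disappeared between extracts"
    return merged[merged["_merge"] == "left_only"].drop(columns="_merge")
```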
11.6.2 Preservation of Researcher-Generated Files
Researcher-generated data files and code are preserved on MIT’s secure servers. Researchers do not have permission to share the raw de-identified data nor the intermediate or final disaggregated data sets. The data use agreements require that MIT return or destroy confidential data upon request by Aurora; however, to date no such request has been made.
The public data set, described above in the Safe Outputs section, contains data aggregated at the provider level—including the number of high-cost scan orders, treatment assignment, provider type and characteristics, and aggregate patient and financial information—along with documentation and replication code. These were published on J-PAL’s Dataverse, which is part of the Harvard Dataverse repository (Doyle et al. 2018).
As the team worked on data extraction, cleaning, and analysis, they generated their own documentation of the data and of the indicators or derived variables they created. This culminated in a step-by-step guide on how to run the data pipeline from processing raw data to producing analysis tables. However, because there was no clear use case for releasing this level of internal documentation (particularly given the complexity of the process and the underlying data, the fact that the data are not public, and the uniqueness of the purpose for which the data were prepared), researchers did not prepare it for publication or for use by the general public.
The Public Use Files are sufficient to replicate all published results. However, due to the aggregation and limited fields of the data set, the possibilities for further analysis may be limited. For example, the team received a request from another research team conducting a meta-analysis to report on the impact of CDS on musculoskeletal scans. Because this could not be produced from the public data set, the MIT team sought and received approval from Aurora to publish the new results from analysis conducted on the nonpublic data. The MIT research team is willing to field requests for additional analysis and seek approval from Aurora to share data if the request has merit and relates to the original purpose of the research collaboration. However, these extra steps are a limitation of the arrangement, stemming from the agreement to publish only aggregated data.
11.7 Sustainability and Continued Success
11.7.1 Revenue
The research on clinical decision support received funding from the Laura and John Arnold Foundation (now Arnold Ventures). Through a sub-award from MIT to Aurora Health Care, the research team provided funding on a cost-reimbursable basis for data extraction and preparation as well as support for interpreting the data. While this award seemed to garner goodwill for the project, the sub-award would have accounted for an extremely small fraction of Aurora’s annual operating budget.113
11.7.2 Metrics of Success
The research team attributes much of the success of this data sharing and research collaboration to clear communication, strong relationships, and patience. As the team negotiated data use agreements and IRB review, they relied on their prior work with hospitals and health records as well as a familiarity with the HIPAA Privacy Rule and IRB review process; this allowed the researchers to anticipate and thoroughly respond to concerns of the Aurora review committees and to identify data sharing procedures that met the needs of both safety and research. After receiving approval for the study, frequent in-person meetings fostered trust and a shared vision and affirmed the commitment of all teams to the project success. By investing in the relationship, researchers were able to communicate about the data openly and clearly in order to develop a strong understanding at all stages of the pipeline. Recognizing that Aurora’s analyst teams likely had competing priorities and limited time, the MIT team was as directly helpful as possible by generating code, pseudo code, or step-by-step instructions. Throughout the study, the MIT team had touchpoints with leadership at the Aurora Research Institute who were enthusiastic about the research partnership and helped to facilitate approvals within Aurora.
The research team and the ARI executives hoped that this research and data sharing collaboration would pave the way for future collaborations with external researchers by clarifying processes, setting precedents, and demonstrating the value of sharing data. Conversations after the completion of the CDS intervention suggested that the experience on this project contributed to an interest in further research collaboration and in bringing deeper research expertise on staff at Aurora. As an example of this interest, Aurora’s Vice President of Research Development and Business Services, Kurt Waldhuetter, visited the MIT team at J-PAL North America to discuss how to do more collaborative research work as well as a planned center on outcomes research.
After the clinical decision support project, two of the key executives at ARI left the institute, and Aurora merged with Advocate Health Care. These changes—and the resulting shift in focus at Aurora—make it hard to determine the ultimate impact of this project on future data sharing opportunities or research.
Beyond process, the Aurora analysts who worked on data extraction gained insight into the data and techniques for processing data that have improved their efficiency. For example, the best practice alert data used for this project are not commonly analyzed by the health care system; however, the analysts stated that understanding these data has helped them respond to internal requests from the pharmacy information systems group on how to analyze their alert data. As another example, many internal reporting and assessment projects at Aurora require matching data across data systems, and the need for, and complexity of, linking data has increased now that the teams must integrate data from the Advocate systems. The Aurora analyst team has applied the knowledge gained on the CDS project about matching patient names and other string-format records to these internal projects.114
Acknowledgements
We are grateful for a research grant from the Laura and John Arnold Foundation that funded the original research underlying this case study. We received feedback from many of the original data analysts, research assistants, and study co-authors, including Andrew Marek, Noah Pearce, and Sarah Reimer from Aurora, and Sarah Abraham, Joseph Doyle, Jesse Gubb, Andelyn Russell, and Sam Wang from the J-PAL and MIT research teams. We acknowledge and thank them for their time, input, and perspective. The interpretations expressed are solely those of the authors and do not represent the views of our funder or Aurora Health Care.
Appendix
The following are excerpts from the sub-award and non-disclosure agreements.
Public Use Data Set
The following clause was included in the sub-award agreement between Aurora Health Care (the Sub-awardee) and MIT, permitting the publication of a data set upon completion of the study:
“Notwithstanding the foregoing, a subset of the Subawardee Confidential Information will be made public according to the terms set forth in this section (“Public Data Set”) and will not be treated as Subawardee Confidential Information once made public. The Public Data Set will not include patient, encounter, and financial level data elements. It will only include provider level data elements and may include aggregate patient and financial information. The exact nature of the Public Data Set will be determined at the end of the Research as it may depend on the findings. The Public Data Set is anticipated to be a provider-level data set with physician characteristics including provider type (MD, DO, NP, PA), specialty, age bins and average patient characteristics, along with outcomes including the number of scans that would trigger the best practice alert (BPA) being evaluated in this Research, the number of high cost scan orders, the number of scan orders with a score of 1-3, the number of scans with a score of 4-6 and the number of low cost scans. The outcomes will be measured over various timeframes such as 0-1 month, 0-3 months, 0-6 months, 0-9 months, and 0-12 months. The Public Data Set may be made available on the OSF and DataVerse websites and may be available to the public indefinitely. The exact list of data elements and aggregate data to be made public will be agreed upon by both Parties in year 3 of the Research and prior to the Public Data Set being made public.”
Publishing
The following is from the sub-award agreement:
“If and to the extent that each Party has contributed to the results of the Research, the two Parties may work together in good faith to publish the results jointly, as appropriate. The foregoing notwithstanding, both MIT and Subawardee shall have the right to publish the results of the Research arising from such portion of the Research performed solely by such Party, along with any background information about the Research that is necessary to be included in any publication of results or necessary for other scholars to verify such results. Prior to publication of Research performed solely by one Party, the publishing Party must provide the other Party with at least thirty (30) calendar days advance notice for the non-publishing Party to review the manuscript in order to identify patentable subject matter or the inadvertent disclosure of Subawardee Confidential Information.”
The following is from the NDA:
“Aurora acknowledges that MIT is receiving Confidential Information in anticipation of its faculty preparing written scholarly work (“Scholarly Work”). In the event MIT personnel seek to publish a Scholarly Work, Aurora will have a thirty (30) day period to review the Scholarly Work for any disclosure of Confidential Information. Aurora shall, within the thirty (30) day period, give MIT notice identifying specifically any Confidential Information it believes would be disclosed in the Scholarly Work. If Aurora does not provide timely notice, it will be deemed to have waived any objection to disclosure of Confidential Information.”
References
Aurora Health Care. n.d. “Research Administrative Preauthorization (RAP).” Accessed June 23, 2020. https://www.aurorahealthcare.org/aurora-research-institute/researcher-resources/research-administrative-preauthorization.
Aurora Health Care, Inc. and Affiliates. 2018. “Unaudited Consolidated Financial Statements and Other Information for the Year Ended December 31, 2017.” Aurora Health Care. https://www.aurorahealthcare.org/-/media/aurorahealthcareorg/documents/annual-report/2017-annual-financial-information-and-operating-data.pdf.
Centers for Medicare & Medicaid Services. 2018. “Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule and Other Revisions to Part B for CY 2019.” 83 Fed. Reg. 59452 (Nov. 23, 2018). https://www.federalregister.gov/documents/2018/11/23/2018-24170/medicare-program-revisions-to-payment-policies-under-the-physician-fee-schedule-and-other-revisions.
Doyle, Joseph, Sarah Abraham, Laura Feeney, Sarah Reimer, and Amy Finkelstein. 2018. “Clinical Decision Support for High-Cost Imaging: A Randomized Clinical Trial [Data Set].” Harvard Dataverse, December. https://doi.org/10.7910/DVN/BRKDVQ.
Doyle, Joseph, Sarah Abraham, Laura Feeney, Sarah Reimer, and Amy Finkelstein. 2019. “Clinical Decision Support for High-Cost Imaging: A Randomized Clinical Trial.” PLOS ONE 14 (3): e0213373. https://doi.org/10.1371/journal.pone.0213373.
Feeney, Laura, Jason Bauman, Julia Chabrier, Geeti Mehra, and Michelle Woodford. 2015. “Using Administrative Data for Randomized Evaluations.” J-PAL North America. https://www.povertyactionlab.org/resource/using-administrative-data-randomized-evaluations.
Hentel, Keith D., Andrew Menard, John Mongan, Jeremy C. Durack, Pamela T. Johnson, Ali S. Raja, and Ramin Khorasani. 2019. “What Physicians and Health Organizations Should Know About Mandated Imaging Appropriate Use Criteria.” Annals of Internal Medicine 170 (12): 880–85. https://doi.org/10.7326/M19-0287.
Office for Human Research Protections. 2018a. “Revised Common Rule Regulatory Text.” Office for Human Research Protections. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/revised-common-rule-regulatory-text/index.html.
Penn Medicine Information Systems Data Analytics Center. n.d. “Epic Clarity.” Website. Data Analytics Center, Perelman School of Medicine at the University of Pennsylvania. Accessed January 30, 2020. https://www.med.upenn.edu/dac/epic-clarity-data-warehousing.html.
U.S. Department of Health & Human Services. 2018. “Health Information Privacy: Research.” Health Information Privacy. https://www.hhs.gov/hipaa/for-professionals/special-topics/research/index.html.
At the time of the study’s design, the mandate was to be implemented in January 2018. Implementation was later delayed until January 2020 with penalties for noncompliance to be implemented in January 2021 (Hentel et al. 2019; Centers for Medicare & Medicaid Services 2018).↩︎
The HIPAA Privacy Rule (45 Code of Federal Regulations (CFR) Part 160 and Subparts A and E of Part 164, accessed at https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title45/45cfr160_main_02.tpl on 06-23-2020) provides regulation for the use, storage, and sharing of medical records and other protected health information. It holds health care providers, health insurance providers, researchers, and others accountable for safeguarding certain types of health information in the United States. Compliance requirements differ based on the party, such as individuals, researchers, or health care providers or insurers; the purpose of the data usage; and on stipulations or structure of data use agreements. The US Department of Health & Human Services provides a detailed guide (see U.S. Department of Health & Human Services 2018) to the requirements associated with research and identifiable health data, how the HIPAA Privacy Rule applies to research, and a guide (see U.S. Department of Health & Human Services n.d.) to understanding HIPAA for all types of users.↩︎
The US Federal Policy for the Protection of Human Subjects or the “Common Rule”, located in 45 CFR Part 46, provides regulatory guidance for research involving human subjects and governs institutional review boards (IRBs), which approve research (Office for Human Research Protections 2018a).↩︎
See the text of these clauses in the Appendix.↩︎
Penn Medicine, for example, reports that their instance of Clarity has over 18,000 tables (Penn Medicine Information Systems Data Analytics Center n.d.).↩︎
MIT’s IRB ceded review to Aurora.↩︎
For example, the NDA included a clause that read, “Aurora acknowledges that MIT is receiving Confidential Information in anticipation of its faculty preparing written scholarly work.” Although all data shared were de-identified, all agreements reference the data as confidential information.↩︎
For example, Aurora’s operating income was $339.1 million in 2017 (see Aurora Health Care, Inc. and Affiliates 2018).↩︎
For example, the Aurora team gained a strong understanding of how to use the concept of edit distance (a way of quantifying how dissimilar two strings are to one another by counting the minimum number of operations required to transform one string into the other) to match such records.↩︎