Proposed Regulatory Oversight on the Emerging Use of Artificial Intelligence in Digital Health
Sunday, May 14, 2023

Recent months have seen heightened interest in artificial intelligence (“AI”)-based technology solutions. Although AI has roots in neural network research dating back to the 1940s, newer technologies, such as the generative AI model Generative Pre-trained Transformer (“GPT”)-3, have prompted recent industry and regulatory attention.

Enticed by the potential to dramatically transform the health care reimbursement and delivery system and accelerate health care innovations, the health care sector has seen its own surge of interest in AI-based solutions. In the health care industry today, predictive models are increasingly being used and relied upon to inform an array of decision-makers, including clinicians, payors, researchers, and individuals, and to aid decision-making through clinical decision support (“CDS”) tools. Often, certified health IT is a key component and data source of these predictive models, providing the data used to build and train algorithms and serving as the vehicle to influence day-to-day decision-making.

The heightened interest in and use of AI has come with new concerns. As a result, there has been a bipartisan effort to ensure federal agencies optimize the use of AI while working to address potential risks in the development and use of predictive models and AI, including efforts to promote transparency and notice, ensure fairness and non-discriminatory practices, and protect the privacy and security of health information.

From late 2022 through early 2023, a flurry of early-stage regulatory activity has suggested that a dedicated AI regulatory framework is beginning to take shape. It is important for organizations contemplating the use of AI technology, or that may be affected by it, to understand this developing regulatory model. These efforts include a White House “Blueprint” for AI regulatory policy, a request for comment from the Department of Commerce, and a proposed rule from the Department of Health and Human Services (“HHS”). Agencies are actively seeking comment on these policies, so entities that may be affected, including health care providers, innovators, payors, and advisors, have an important opportunity to shape the future of AI policy in health care.

White House Blueprint for an AI Bill of Rights

In October 2022, the White House Office of Science and Technology Policy (“OSTP”) released a “Blueprint” document containing a proposed framework for a so-called “Bill of Rights” governing the use and regulation of AI technology (available here). The Blueprint has limited legal standing, but it lays down important principles that are likely to reflect the White House’s approach to future binding regulations concerning AI.

The Blueprint illustrates the tension between promoting useful applications of AI and limiting foreseeable harm. On one hand, the OSTP takes a critical view of AI tools in the health care space, warning that AI may “limit our opportunities and prevent our access to critical resources or services” and that “systems supposed to help with patient care have proven unsafe, ineffective, or biased.” On the other hand, the OSTP notes that “these tools hold the potential to redefine every part of our society and make life better for everyone,” as long as such progress does not harm “foundational American principles” of civil rights or democratic values.

To achieve this balance, the OSTP identified five guiding principles for AI development: (1) standards for safety and effectiveness, including requirements concerning outside consultation, appropriate testing, risk identification and mitigation, monitoring, and oversight; (2) protections against algorithmic discrimination, including nondiscrimination based on protected classes, use of appropriately robust data, and evaluation and mitigation of disparities; (3) requirements for data privacy, including rules around disclosure, appropriate consent, security, and standards for surveillance; (4) standards for notice and explanation, including requirements for documentation, explanations issued by automated systems, and reporting; and (5) defined rules for human roles, including alternatives to AI processes, consideration of issues or complaints, and fallback, with governance standards and rules for overruling the AI.

Health use cases feature prominently in the Blueprint. For example, health data is deemed a “sensitive domain” subject to heightened regulatory concern. Many of the problematic examples cited in the Blueprint involve situations in which AI technology is used to deny coverage, limit care, or deliver care in a sub-optimal (and often discriminatory) manner. The Blueprint is likely to be consulted in developing regulations in the health space.

Department of Commerce AI Accountability Request for Comment

On April 13, 2023, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) issued a formal Notice and Request for Comment (“RFC”) in the Federal Register concerning potential directions for AI regulation. Specifically, NTIA requested information on “self-regulatory, regulatory, and other measures and policies” designed to provide assurance that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy.” (RFC available here). The RFC will inform NTIA’s development of a formal report on AI accountability policy, which will influence regulatory development.

The RFC cites and builds on the Blueprint, specifically soliciting comments on voluntary and mandatory policy tools to mitigate the dangers the Blueprint identifies. It contemplates accountability measures including internal and external audits or assessments, governance policies, documentation standards, reporting requirements, and testing and evaluation standards. NTIA raises questions about reviews, the use of sensitive data, and timing requirements within AI lifecycles, and asks more than 30 specific questions about the current regulatory landscape, types of AI technology, the strengths and shortcomings of existing AI oversight mechanisms, and the potential impact of certain regulatory approaches. Of particular concern to health care providers, NTIA specifically requests information on whether AI accountability mechanisms can effectively address systemic and/or collective risks of harm, including harm related to worker health and health disparities.

While the RFC still represents an early stage of policy development, it is important because it reflects the ongoing influence of the Blueprint. The RFC also presents an important opportunity for developers and users of AI to ensure their perspectives are heard as AI policy takes shape.

Office of the National Coordinator’s HTI-1 Proposed Rule

On April 11, 2023, the HHS Office of the National Coordinator for Health Information Technology (“ONC”) released a proposed rule titled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” (“HTI-1 Proposed Rule”), available here. The HTI-1 Proposed Rule implements provisions of the 21st Century Cures Act and incorporates agency guidance, including the White House Blueprint, the Biden-Harris Administration Executive Order “Ensuring a Data-Driven Response to COVID-19 and Future High-Consequence Public Health Threats,” and guidance advancing racial and other forms of health equity. The HTI-1 Proposed Rule aims to advance interoperability and improve transparency and trust in predictive decision support interventions and the use of electronic health information.

One major provision of the Proposed Rule deals with the use of AI for CDS. Currently, developers must comply with CDS criteria to offer certified health IT, including Certified Electronic Health Record Technology (“CEHRT”). Providers who use CEHRT are eligible for additional funding and/or avoidance of penalties under governmental payment programs. Under the HTI-1 Proposed Rule, developers would have to meet an additional “decision support interventions” (“DSI”) certification criterion to achieve certification, including CEHRT status. To promote transparency, the criterion would require certified health IT modules that enable or interface with predictive DSIs to allow users to review information about the source attributes used in the DSI. The criterion would also advance health equity by ensuring that users are made aware when data relevant to health equity, such as race, ethnicity, and social determinants of health, are used in DSIs.

The certification criterion would also require developers to apply intervention risk management practices to the predictive DSIs their certified health IT enables or interfaces with. These practices include risk analysis, risk mitigation, and governance. Developers would have to keep detailed documentation of their risk management practices and provide that documentation to ONC upon request. Developers would also be required to make their risk management practices publicly available via an easily accessible link. Under the Proposed Rule, developers would have to comply with these criteria by December 31, 2024.

ONC notes that predictive DSIs can promote positive outcomes and avoid harm when those DSIs are “FAVES”: fair, appropriate, valid, effective, and safe. ONC does not propose to establish or define regulatory baselines, measures, or thresholds of FAVES for predictive DSIs; rather, it aims to establish requirements for information that would enable users, based on their own judgment, to determine whether a predictive DSI enabled by or interfaced with a Health IT Module is acceptably fair, appropriate, valid, effective, and safe.

Because the Proposed Rule modifies CEHRT requirements, it would affect not only developers of health IT modules that seek to obtain or retain certification, but also the health care providers who use and rely on such technology to deliver health care services and receive reimbursement for those services. ONC will accept public comments on the Proposed Rule until June 20, 2023.
