US FDA to Develop Regulatory Scheme for AI in Medical Products, Foster International Cooperation
Wednesday, April 3, 2024

Go-To Guide:

  • The U.S. Food and Drug Administration (FDA) has outlined its plan to regulate artificial intelligence (AI) in medical products, including building its AI infrastructure and technical expertise, fostering international regulatory cooperation, and dialoguing with stakeholders to develop its regulatory framework.
     
  • FDA must comply with the Office of Management and Budget’s (OMB) March 28, 2024, Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of AI when using AI to streamline its regulatory activities.
     
  • With AI regulation in its infancy and considerable regulatory activity planned, including guidance, demonstration projects, public meetings, and requests for information (RFIs), industry should engage with regulators now to inform and influence the development of regulatory regimes globally.

Industry Seeks Global Regulatory Certainty and Consistency

AI stands to transform medical product development by creating efficiencies that shrink timelines and costs and by generating new insights. In fact, according to a Berkeley Research Group (BRG) report, the AI health care market is expected to grow by nearly 500%, to $187 billion, between 2024 and 2030.

However, regulators in approximately 60 countries and jurisdictions, including the European Union, South Korea, the United States, and China, have begun to develop their own regulatory frameworks in response to the rapid proliferation of AI tools, even as they work to build their AI infrastructure and technical expertise.1 As agencies build those capabilities, they should also proactively collaborate with their international counterparts to harmonize their efforts and ensure the responsible use of AI.

FDA Prioritizes AI Use in Development of Medical Products

Following a series of AI-related publications2 in recent years, on March 15, 2024, the FDA’s Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) (the Centers) jointly published a paper—“Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together”—detailing the Centers’ four high-level priorities for a patient-centered, risk-based regulatory approach that strikes a balance between fostering responsible and ethical innovation and upholding quality, safety, and effectiveness. The priorities are listed and elaborated below:

1. Foster Collaboration to Safeguard Public Health. Consistent with other FDA initiatives, such as its membership in the Coalition for Health AI, FDA seeks to engage with stakeholders in developing effective, internationally consistent standards, guidelines, and best practices. To that end, FDA intends to solicit input from a wide range of stakeholders, including global regulators, developers, patient groups, and academics, to inform critical regulatory elements such as transparency, explainability, governance, bias, cybersecurity, and quality assurance. Moreover, to encourage a two-way exchange of information rather than merely soliciting input, FDA also seeks to promote educational initiatives related to AI in medical products that support stakeholders in their efforts.
 
2. Advance the Development of Regulatory Approaches that Support Innovation. Because AI/machine learning (ML) is a rapidly advancing field, FDA seeks to establish predictability and clarity in the regulation of AI in medical products by monitoring trends to anticipate and detect knowledge gaps and opportunities to refine its regulatory efforts; by building upon existing initiatives such as CDER’s Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) initiative, which seeks to prepare an internationally harmonized, science- and risk-based regulatory framework to support the adoption of advanced manufacturing technologies, including AI; and by issuing guidance.

Though no timeline has been established for publication, FDA intends to issue several related pieces of guidance, including:
 

  • Final Guidance from CDRH on Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions

  • Draft Guidance from CDRH on Artificial Intelligence/Machine Learning (AI/ML)-enabled Device Software Functions: Lifecycle Management Considerations and Premarket Submission Recommendations

  • Draft Guidance from CDER on Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drugs and Biological Products
 
3. Promote the Development of Standards, Guidelines, Best Practices, and Tools for the Medical Product Life Cycle. Building on the Good Machine Learning Practice (GMLP) Guiding Principles that FDA previously issued jointly with Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), which identified 10 guiding principles to inform the development of GMLP, FDA plans to take a number of actions to develop standards, guidelines, and best practices across the medical product life cycle, including evaluating best practices for safety and performance monitoring across dimensions such as ethics, representativeness, bias, transparency, safety, cybersecurity, quality assurance, and risk mitigation.
 
4. Support Research Related to the Evaluation and Monitoring of AI Performance through Demonstration Projects. Subject to available resources, the Centers also plan to support demonstration projects, including projects that identify, manage, and mitigate the risks of potential bias; projects that consider health inequities, promote equity, ensure data representativeness, and leverage ongoing diversity, equity, and inclusion efforts; and projects that ensure adherence to standards and maintain performance and reliability throughout the product life cycle. FDA has not yet provided timelines for project funding.

OMB Memorandum Defines Parameters for Advancing Governance, Innovation, and Risk Management for Agency Use of AI

In the paper, FDA also noted its intention to use AI to streamline its regulatory processes. However, FDA must now abide by the government-wide policy OMB published on March 28, 2024, which seeks to protect the rights and safety of the public when agencies use AI to inform, influence, decide, or execute agency decisions or actions by:

  • Strengthening AI Governance, which requires each agency to designate, within 60 days of the Memorandum’s issuance, a chief AI officer (CAIO) to establish a framework for promoting AI innovation, managing AI-related risk, and coordinating with other technical and policy areas in these efforts. Agencies must also maintain an annual AI use case inventory.
 
  • Advancing Responsible AI Innovation, which requires agencies to increase their capacity to use AI; enable sharing and reuse of models, code, and data; and, within 365 days of the Memorandum’s issuance, develop and publicly release an enterprise strategy for how the agency will advance the responsible use of AI, including by reducing barriers to its use (e.g., IT infrastructure, data, cybersecurity, workforce, and generative AI-specific challenges).
 
  • Managing Risks from the Use of AI, which builds upon existing AI risk management requirements. Agencies must, by Dec. 1, 2024, implement the safeguards and minimum practices for “safety-impacting AI” and “rights-impacting AI,” as defined in the Memorandum. Moreover, agencies are encouraged to incorporate existing best practices for AI risk management as appropriate.

 
Opportunities for Agency Engagement and Collaboration

Because AI regulation is in its infancy, stakeholders should engage with regulators to inform how FDA crafts its regulatory framework. Stakeholders should be prepared to provide robust information about how AI functions in their medical products; the technology’s potential limitations, including how risks related to cybersecurity, bias, safety, transparency, and privacy, among other elements, are monitored and mitigated; and AI’s impact on product quality and patient outcomes.

Stakeholders should keep a close eye on regulatory developments in a cross-border context, including any guidance or requests for comment, as well as any opportunities to participate in demonstration projects or RFIs. For example, in addition to the planned guidance above, OMB plans to issue proposed regulations relating to the responsible use of AI in federal procurement standards later this year.3 Moreover, at an April 25, 2024, public meeting, FDA plans to release more information about its recently established Quantitative Medicine (QM) Center of Excellence, which will oversee ML and AI approaches. At the meeting, FDA plans to solicit public feedback on education, outreach, and policy needs and to provide an update on the QM Center of Excellence’s status and objectives.


1 Approximately 60 countries have national AI strategies. The European Union passed the EU Artificial Intelligence Act, which created an AI Office and uses a risk-based framework that places the strictest regulation and oversight on the highest risk products, such as medical devices. In January, South Korea enacted a digital medical product law to regulate and facilitate growth of digital health products. The U.S. Congress passed the AI in Government Act of 2020 and the Advancing American AI Act, along with several actions stemming from the White House (e.g., the AI Bill of Rights). China is also developing its own regulatory regime.
3 OMB published an RFI on federal procurement standards for AI products and services. The RFI has a 30-day comment period to solicit input on the development of appropriate standards for AI use under federal contracts.