FDA Joins Other Regulators in Focus on AI and Machine Learning
Monday, November 22, 2021

The Food and Drug Administration recently sought comments on the role of transparency for artificial intelligence and machine learning-enabled medical devices. The FDA invited comments as a follow-up to a recent workshop on the topic.

The workshop was part of a series of FDA efforts in this space, including its Digital Health Center of Excellence and a five-part Action Plan for AI and machine learning-enabled medical devices. As part of the Action Plan, the FDA indicated that it wants to issue guidance on software that learns over time and to help the industry be “patient-centered” — in other words, it wants companies to be transparent with patients when using AI and machine learning-enabled software. These initiatives are especially important given the increasing use of AI/ML in healthcare.

Workshop participants explored how to provide transparency. One idea proposed was a “nutrition fact label” approach that gives individuals enough information to make informed decisions. The graphic would be similar to a food label, quickly and visually disclosing the key things patients might want to know. (This is similar to an approach launched by Apple late last year, which we discussed here.) Other agencies have examined machine learning and AI and made similar transparency recommendations. We have written about those in the past, including for the financial services industry. Advice about the use of these tools has also been issued by the FTC and the EU.

Putting it Into Practice: While the FDA continues to explore this area, companies are reminded that the FDA (like other regulators) expects transparency with consumers. From a privacy perspective, the workshop reminds digital health companies that this includes telling users when AI or ML-enabled software is being used.

 
