California Attorney General Probes Bias in Health Care Algorithms
Monday, September 12, 2022

A round of letters sent on August 31, 2022, by California Attorney General Rob Bonta to leaders of hospitals and other health care facilities signaled the kickoff of a government probe into bias in health care algorithms that contribute to material health care decisions.  The probe is part of an initiative by the California Office of the Attorney General (AG) to address disparities in health care access, quality, and outcomes and to ensure compliance with state non-discrimination laws.  Responses are due by October 15, 2022, and must include a list of all decision-making tools in use that contribute to clinical decision support, population health management, operational optimization, or payment management; the purposes for which the tools are used; and the name and contact information of the individuals responsible for “evaluating the purpose and use of these tools and ensuring that they do not have a disparate impact based on race or other protected characteristics.” 

The press release announcing the probe describes health care algorithms as fast-growing tools used to perform various functions across the health care industry.  According to the California AG, if software is used to determine a patient’s medical needs, hospitals and health care facilities must incorporate appropriate review, training, and usage guidelines so that the algorithms do not have unintended consequences for vulnerable patient groups.  One example cited in the AG’s press release is an Artificial Intelligence (AI) algorithm created to predict patient outcomes that may have been trained on a population that does not accurately represent the patient population to which the tool is applied.  Similarly, an AI algorithm created to predict future health care needs based on past health care costs may understate the needs of Black patients, who often face greater barriers to accessing care and therefore incur lower recorded costs, thus making their health care needs appear lower than they are.
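For illustration only, consider a minimal sketch of how such a cost-based proxy can produce a disparate impact.  The code below is our own hypothetical simulation, not anything drawn from the AG’s letters: it gives two patient groups identical underlying health needs, suppresses one group’s recorded costs to mimic access barriers, and then flags the highest-cost patients for care management, as a cost-predicting tool effectively would.

    # Hypothetical simulation (Python): ranking patients by observed cost,
    # as a proxy for health need, under-selects the group whose access
    # barriers suppress spending.
    import random

    random.seed(0)

    def simulate_patient(group):
        need = random.gauss(50, 10)            # true underlying health need
        access = 1.0 if group == "A" else 0.6  # group B faces access barriers
        cost = need * access                   # barriers suppress recorded costs
        return {"group": group, "need": need, "cost": cost}

    patients = [simulate_patient(g) for g in ["A"] * 1000 + ["B"] * 1000]

    # Flag the top 20% of patients by recorded cost for care management.
    flagged = sorted(patients, key=lambda p: p["cost"], reverse=True)[:400]
    share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
    print(f"Group B share of flagged patients: {share_b:.1%}")
    # Both groups were simulated with identical need, so an unbiased proxy
    # would flag roughly 50% from each group; the cost proxy flags almost
    # none from group B.

The disparity is visible here only because the simulation knows each patient’s true need; in real deployments that ground truth is unavailable, which is why the AG’s letters ask who is responsible for evaluating these tools for disparate impact.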

Not surprisingly, the announcement of the AG’s probe follows research summarized in a Pew Charitable Trusts blog post highlighting bias in AI-enabled products, as well as a series of discussions between the Food and Drug Administration (FDA) and software-as-a-medical-device stakeholders (including patients, providers, health plans, and software companies) about eliminating bias in artificial intelligence and machine learning technologies.  As further discussed in our series on the FDA’s Artificial Intelligence/Machine Learning Medical Device Workshop, the FDA is still grappling with how to address data quality, bias, and health equity when it comes to the use of AI algorithms in software that it regulates. 

Taking a step back to consider the practical constraints on hospitals and health care facilities, the AG’s probe could put these entities in a difficult position.  The algorithms used in commercially available software may be proprietary and, in any event, hospitals may not have the resources to independently evaluate software for bias.  Further, if the FDA is still sorting out how to tackle these issues, it seems unlikely that hospitals would be in a better position to address them.

Nonetheless, the AG’s letter suggests that failure to “appropriately evaluate” the use of AI tools in hospitals and other health care settings could violate state non-discrimination laws and related federal laws, and it indicates that investigations will follow these information requests. As a result, before responding, hospitals should carefully review the AI tools currently in use, the purposes for which they are used, and the safeguards currently in place to counteract any bias an algorithm may introduce. For example:

  • When does an individual review AI-generated recommendations and then make a decision based on their own judgment?

  • What kind of annual training on nondiscrimination and bias elimination do individuals using AI tools receive?

  • What kind of review is conducted of software vendors and functionality before software is purchased?

  • Is any of the software in use certified or used by a government program?

  • What type of testing has been done by the software vendor to address data quality, bias, and health equity issues?

On the flip side, software companies whose AI tools are in use at California health facilities should be prepared to respond to inquiries from their customers regarding their AI algorithms and how data quality and bias have been evaluated. For example:

  • Is the technology locked, or does it involve continuous learning?

  • How does the algorithm work and how was it trained?

  • What is the degree of accuracy across different patient groups, including vulnerable populations? (A minimal sketch of such a check appears after this list.)
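
One straightforward way for a vendor to answer the last question is a subgroup performance breakdown.  The sketch below is a hypothetical check on made-up labeled data, not any vendor’s actual methodology; it simply computes accuracy separately for each patient group.

    # Hypothetical subgroup accuracy check (Python), assuming the vendor
    # holds labeled test outcomes alongside group membership.
    from collections import defaultdict

    def accuracy_by_group(records):
        """records: iterable of (group, true_label, predicted_label)."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, truth, pred in records:
            total[group] += 1
            correct[group] += int(truth == pred)
        return {g: correct[g] / total[g] for g in total}

    # Made-up evaluation data: (group, true outcome, model prediction).
    records = [
        ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),
    ]
    for group, acc in sorted(accuracy_by_group(records).items()):
        print(f"Group {group}: accuracy {acc:.0%}")

A material gap between groups in a check like this is precisely the kind of disparity the AG’s questions are designed to surface.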
