Artificial Intelligence’s Role in Reshaping Behavioral Health and Navigating the Legal Risks Ahead
Tuesday, March 19, 2024

Introduction

As artificial intelligence (AI) continues to redefine the landscape of healthcare delivery, its transformative influence on behavioral health is both profound and promising. The integration of AI technologies in behavioral health care holds the potential to revolutionize diagnostics, treatment approaches, and overall patient outcomes. Virtual mental health assistants, driven by AI, may enhance accessibility by providing continuous support and monitoring. Additionally, predictive analytics could enable early identification of potential issues, facilitating proactive interventions. However, amidst this technological evolution, it is imperative to acknowledge and address the legal and regulatory implications that accompany such advancements. As we venture into a future where AI plays a pivotal role in shaping behavioral health care, balancing the potential benefits of AI with robust legal and regulatory measures will be instrumental in harnessing its transformative power responsibly, ethically and sustainably.

AI-Based Tools for Behavioral Health

Developers and behavioral health providers have explored many ways to apply AI in behavioral health settings, and these applications are certain to expand as the capabilities of AI technology and trust in AI-based systems continue to grow. One of the highest-profile applications of AI in behavioral health (if not in all of health care in recent years) is the development of AI-based chatbots that provide chat therapy to patients. Some companies have developed apps that use AI chatbots to deliver chat therapy to patients with mild-to-moderate signs of depression or anxiety and to assist providers in delivering Cognitive Behavioral Therapy (CBT). AI tools are also being developed to monitor patient biometric data through smartwatches or other wearable technology for behavioral changes that are potentially symptomatic of depression.
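
To make the monitoring concept concrete, the sketch below flags sustained drops in a patient’s activity relative to his or her own recent baseline, one simplified way a wearable-based tool might surface potentially symptomatic behavioral change. This is a hypothetical illustration, not any vendor’s actual method; the window, threshold, and synthetic step counts are assumptions.

```python
# Hypothetical sketch: flag days whose activity falls well below the
# patient's own recent baseline. Thresholds are illustrative, not clinical.
import pandas as pd

def flag_activity_changes(daily_steps: pd.Series, window: int = 28,
                          z_threshold: float = -2.0) -> pd.Series:
    """Mark days whose step count sits far below the rolling baseline."""
    baseline = daily_steps.rolling(window, min_periods=window).mean()
    spread = daily_steps.rolling(window, min_periods=window).std()
    z = (daily_steps - baseline) / spread
    return z < z_threshold

# Synthetic example: steady activity followed by a sharp, sustained drop
steps = pd.Series(
    [8000] * 45 + [3000] * 15,
    index=pd.date_range("2024-01-01", periods=60, freq="D"),
)
flags = flag_activity_changes(steps)
print(flags[flags].index)  # dates a clinician might be prompted to review
```

In practice, a production tool would combine multiple signals (sleep, heart rate, device usage) and route flags to a clinician for review rather than act on them automatically.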

Over the past few years, we’ve also witnessed a rise in the use of AI in Electronic Health Record (EHR) technology to assist in diagnosis and treatment. These tools aim to leverage large language models (LLMs) to help providers handle large amounts of clinical data, with applications ranging from data management assistance for accessing patient information to clinical decision support systems (CDSS) that recommend potential diagnoses and treatment options based on the AI’s review of a provider’s EHR. Within the behavioral health space, EHR developers have introduced CDSS tools that use AI to, for example, assist in the diagnosis and management of treatment for major depressive disorder and identify institutional patients with the highest need for interventional care.
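
The sketch below illustrates, at a very high level, how an LLM-backed CDSS might assemble structured chart data into a prompt for model review. The `PatientRecord` fields and the `llm_complete` stand-in are assumptions for illustration only, not the interface of any actual EHR or model vendor.

```python
# Hypothetical sketch of an LLM-backed CDSS assembling an EHR summary into
# a prompt for review-and-recommend support. No real EHR or model API is
# implied; llm_complete is a placeholder for a vendor's model endpoint.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    age: int
    phq9_score: int                 # depression screening score
    active_medications: list[str]
    recent_notes: list[str]

def build_cdss_prompt(record: PatientRecord) -> str:
    notes = "\n".join(f"- {n}" for n in record.recent_notes)
    return (
        "You are assisting a licensed clinician. Based on the chart summary "
        "below, list considerations for diagnosis and treatment of "
        "depressive disorders. The clinician makes all final decisions.\n"
        f"Age: {record.age}\nPHQ-9: {record.phq9_score}\n"
        f"Medications: {', '.join(record.active_medications)}\n"
        f"Recent notes:\n{notes}"
    )

def llm_complete(prompt: str) -> str:
    """Placeholder for a vendor model call; returns a canned response here."""
    return "Consider major depressive disorder, moderate; review sertraline dosing."

record = PatientRecord(34, 14, ["sertraline 50mg"], ["Reports low mood for 6 weeks."])
print(llm_complete(build_cdss_prompt(record)))
```

Note that the prompt itself frames the output as decision support, reflecting the point, discussed below, that the clinician retains ultimate responsibility for medical judgment.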

While these are only a few of the AI technologies being used and explored in the behavioral health space, it is likely that the future of behavioral health will come to rely, at least in part, on AI and machine learning technology.

The Current Regulatory Landscape for AI and Behavioral Health

Given the ever-expanding uses of AI, both in the behavioral health space and in health care more generally, regulation of AI technology is poised to play a major role as 2024 progresses. As of today, the US does not yet have a binding federal law that regulates the development and use of AI. However, over the past year we have witnessed preliminary bipartisan efforts to address potential risks in the development and use of AI, including efforts to promote transparency and notice, ensure fairness and nondiscriminatory practices, and protect the privacy and security of health information.1 These efforts include:

  • The White House “Blueprint for an AI Bill of Rights,” which identifies five guiding principles for AI development: 1) standards for safety and effectiveness; 2) protections against algorithmic discrimination, including nondiscrimination based on protected classes; 3) requirements for data privacy, including rules around disclosure, appropriate consent, security, and standards for surveillance; 4) standards for notice and explanation; and 5) defined rules for human roles, including alternatives to AI processes. Health use cases are prominently featured in the Blueprint. For example, health data is deemed a “sensitive domain” subject to higher regulatory concern. It is likely that the Blueprint will be consulted in developing regulations in the health space.
  • The AI Accountability Policy Request for Comment, which requested information on regulatory and other measures and policies designed to provide assurance that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. The request for comment is aimed at informing the Department of Commerce’s National Telecommunications and Information Administration’s development of a formal report on AI accountability policy.
  • The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, an Executive Order (EO) issued by the Biden Administration to promote AI innovation while protecting against potentially harmful consequences. Section 8 of the EO focuses on the risks and developments associated with AI in the health care industry, specifically covering critical areas including AI use in drug development, predictive/diagnostic AI use cases, safety, health care delivery and financing, and documentation and reporting requirements.
  • The Office of the National Coordinator for Health Information Technology’s (ONC) Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule (“HTI-1 Final Rule”), which creates a new certification criterion under ONC’s Health IT Certification Program requiring health IT developers and their health IT modules to participate in public-facing transparency efforts. These new requirements signal a tonal shift that will impact how health care providers interact with health IT modules, including AI systems. ONC’s new decision support interventions (DSIs) certification criterion promotes “responsible AI”: health IT developers are required to assess, quantify, and publish consistent, baseline sets of information about their health IT modules and AI algorithms to the public, in an effort to give health care providers insight into the tools available to support their patient care decision-making.
  • The Federal Trade Commission (FTC) Omnibus Resolution, which authorizes the FTC to issue civil investigative demands (CIDs) (a process similar to a subpoena) in investigations relating to AI to obtain documents, information, and testimony that advance FTC consumer protection and competition investigations. Through this resolution, the FTC recognizes that although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy, and other unfair practices that may violate the FTC Act and other laws. AI can also raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI.

Interestingly, recent congressional discussions highlight questions over which level of government should regulate AI, with some arguing that regulation should be left to the states and others pushing for a federal law. Lawmakers in some states, like California, have proposed their own health care AI legislation, which could lead to a clash between state and federal regulations, as has happened with privacy regulation. While the use of AI in health care and behavioral health is well underway, stakeholders are eager to understand how to use AI effectively while adhering to guardrails that may soon be mandated by state and federal legislatures.

Overview of Potential Key Legal Risks Based on the Use of AI in Behavioral Health Treatment and Some Practical Ways to Mitigate the Risks

As noted above, the regulatory landscape for behavioral health providers is still highly unsettled and is likely to change in the coming years. Still, providers, vendors, and investors should be aware of certain foreseeable key risk factors, including, among others, data privacy, algorithmic bias, and professional liability.

  • Data Privacy: Behavioral health providers face significant data privacy risks when implementing AI tools in their practices, particularly concerning compliance with the Health Insurance Portability and Accountability Act (HIPAA) and its implementing regulations, 42 CFR Part 2 (Part 2), and state privacy laws governing sensitive information. Behavioral health providers are by and large covered entities, and as such have an obligation to ensure that any AI tools they use comply with HIPAA. This includes safeguarding protected health information (PHI) and implementing appropriate security measures to prevent unauthorized access or disclosure. As a best practice, providers should seek to obtain informed consent from patients before using AI tools that analyze or process their PHI, to ensure that patients understand how their data will be used and shared, as well as any potential risks associated with AI-driven interventions. Additionally, providers should carefully vet third-party vendors offering AI solutions to ensure they adhere to data privacy and security rules. Providers should understand how an AI tool stores, transfers, retains, and uses patient data, and whether each of these forms of processing is permitted under HIPAA. Behavioral health providers that furnish substance use disorder (SUD) treatment should also be aware of patient privacy and confidentiality requirements under Part 2. Part 2 has historically governed the confidentiality of SUD patient records, and until recently, providers subject to both Part 2 and HIPAA have had to contend with long-standing inconsistencies between the two laws, largely pertaining to patient consent and disclosure requirements. HHS’s final rule, released on February 9, 2024, streamlines some SUD patient record requirements under the two frameworks.2 The update permits use and disclosure of Part 2 records based on a single, one-time patient consent for treatment, payment, and health care operations purposes, and expands permissions for the redisclosure of Part 2 records by entities subject to HIPAA, which are generally consistent with HIPAA (except for disclosures in the law enforcement, judicial, and administrative contexts). However, the changes also include enhanced requirements for de-identification of Part 2 records, among other requirements aimed at better aligning Part 2 with HIPAA. Further, to the extent that a provider shares or considers sharing PHI to train an AI model, the provider will need to ensure that such sharing complies with Part 2 to the extent applicable. Importantly, for PHI to be used in compliance with HIPAA in the creation and development of AI algorithms and machine learning models, the use must meet the research exception. However, the requirements under Part 2 for the use or disclosure of identifiable information from a behavioral health or SUD patient in research are even more restrictive. Providers should also stay abreast of state privacy laws that govern the use and disclosure of sensitive health information, including behavioral and mental health information; these laws may impose additional requirements or restrictions on data handling practices, such as mandatory breach notification requirements. (A simplified sketch of identifier masking appears after this list.)
  • Algorithmic Bias: While there is hope that AI tools will increase access to and streamline behavioral health care, the behavioral health industry must remain cautious of risks related to algorithmic bias. “Algorithmic bias” refers to bias that arises when an AI algorithm is trained on data reflecting inequalities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation, which can lead the algorithm to perpetuate existing biases and discrimination against certain groups of people. AI models are trained on large amounts of data, detecting and incorporating patterns and connections within that data, which may lead AI tools to rely on data that encodes historic bias. While algorithmic bias is, for the time being, largely an ethical issue that AI developers face, providers using AI tools should seek transparency from developers and vendors and ensure that they understand the data on which the tools they rely upon were trained. (A simple group-level audit sketch also appears after this list.)
  • Professional Liability: AI tools create a legal thicket of professional liability considerations for behavioral health providers and vendors. Sophisticated AI-enabled strategies can help clinicians deliver more accurate diagnoses, promote person-centered treatments, and enable patient self-management. But as with any clinical support tool, licensed clinicians continue to hold ultimate legal responsibility for their medical judgment in providing patient care. If the use of an AI tool results in an adverse patient event (for example, a misdiagnosis or poor outcome), the clinician may be held liable. Malpractice claims inherently turn on whether a clinician’s actions were consistent with the standard of care for his or her profession. That standard can be difficult to assess at a time when it may be changing to accommodate AI: there is little uniformity around clinicians’ adoption of AI tools, evaluation of the quality of such tools, or the supplemental use of AI-enabled tools to support patient education and self-management. Potential liability is made harder still to discern by the rapid development of AI technology, the changing use patterns of clinicians, and the lack of clear regulatory guidance. The failure to comply with regulatory standards may make it easier for a plaintiff to successfully allege malpractice, which puts providers in a difficult position in an environment where those standards are still being developed and refined. AI-enabled tools may also have implications for professional liability coverage. As AI tools become more integrated into behavioral health practice, insurers may adopt requirements concerning the permitted use of tools, audit obligations, or the implementation of technical or procedural standards. On the other hand, if AI-enabled tools reduce liability risk (for example, by helping identify high-risk patients or avoiding harmful medication interactions), professional liability insurers may advocate for the use of AI-enabled tools. Insurers may also define certain AI use cases (for example, a patient’s independent use of self-management tools) as “excluded coverage,” or acts not covered by the professional liability policy. Clinicians should consider reviewing the scope of their professional liability coverage before adopting AI-enabled tools. These issues do not only concern clinicians. Given the uncertain legal and regulatory environment, clinicians are likely to shift as much risk as possible to the vendors of AI technology. For example, clinicians will likely require vendors to contractually represent and warrant that the technology complies with all applicable law and will comply with future changes to the law. Clinicians may also insist on indemnification provisions or other requirements that shift liability for the use of these products to the vendor.3 To the extent patients directly interact with AI-enabled tools (for example, chatbots incorporated into patient education models), the contracts should specify standards and expectations around the content of such interactions. Finally, clinicians may attempt to shift potential liability risk to vendors by pointing to vendor representations around safety, effectiveness, and lack of bias.
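
On the data privacy point, the sketch below shows the flavor of identifier masking that might precede secondary use of records, for illustration only. Real HIPAA Safe Harbor de-identification covers eighteen identifier categories (and Part 2 imposes its own requirements), so production workflows rely on validated tooling or expert determination rather than ad hoc patterns like these.

```python
# Illustrative sketch only: masks a few obvious identifier patterns before
# records are used in analytics or model training. Not a substitute for
# HIPAA Safe Harbor de-identification or expert determination.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def mask_identifiers(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt seen 3/14/2024; callback 555-867-5309, reachable at jd@example.com"
print(mask_identifiers(note))
# -> Pt seen [DATE]; callback [PHONE], reachable at [EMAIL]
```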
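
On the algorithmic bias point, one basic transparency exercise a provider or vendor might run is a group-level audit of a screening model’s outputs on held-out data. The sketch below compares flag rates and sensitivity across demographic groups; the column names and toy data are assumptions, and a real audit would use far larger samples and additional fairness metrics.

```python
# Hypothetical sketch: compare a screening model's flag rate and sensitivity
# across demographic groups to surface potential algorithmic bias.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """For each group, report how often the model flags patients and how
    often it catches true cases (sensitivity). Large gaps between groups
    are a signal to investigate the training data."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub["actual"] == 1]
        rows.append({
            group_col: group,
            "n": len(sub),
            "flag_rate": sub["predicted"].mean(),
            "sensitivity": positives["predicted"].mean() if len(positives) else float("nan"),
        })
    return pd.DataFrame(rows)

df = pd.DataFrame({
    "group":     ["A"] * 4 + ["B"] * 4,
    "actual":    [1, 1, 0, 0, 1, 1, 0, 0],
    "predicted": [1, 1, 0, 0, 1, 0, 0, 0],
})
print(audit_by_group(df))
# Group B's lower sensitivity (0.5 vs. 1.0) would prompt a closer look.
```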

Conclusion

The utilization of AI in behavioral health care signifies a groundbreaking leap toward more effective and accessible mental health services. However, as we navigate this transformative landscape, it is crucial to anticipate and address the evolving regulatory frameworks and legal risks associated with AI applications. AI regulation is a moving target, with significant developments expected in the coming years, and anticipating and mitigating legal risks will be instrumental in fostering a trustworthy and secure environment for both practitioners and patients.


1 See our prior analyses addressing the use of AI in the health care industry: Biden’s October 30, 2023, Executive Order on AI: Key Takeaways for Health Care Stakeholders, December 2023, available here; and Proposed Regulatory Oversight on the Emerging Use of Artificial Intelligence in Digital Health, May 10, 2023, available here.

2 For more information on the update to Part 2, see our update: HHS Finalized Part 2 Revisions: What Has Changed? available here.

3 Additionally, if the AI tool was trained on or derived from proprietary data and other third-party content, the tool may be vulnerable to claims that it infringes or misappropriates the intellectual property rights of others. The health care system providing the AI tool to its clinicians may not only be exposed to infringement claims but may potentially be enjoined from continuing to use the AI tool (which would be costly and disruptive to its operations). The system may consider contractually requiring the vendor to defend it against such claims and indemnify it against any resulting liability. The user should also consider requiring that the vendor provide additional remedies, such as a non-infringing, functionally equivalent replacement or a refund of fees paid for licensing the allegedly infringing tool. The vendor, for its part, will want to consider disclaimers and other contractual clauses to limit its liability.
