
Generative Artificial Intelligence (AI) and 401(k) Plan Fiduciary Implications

Wednesday, April 17, 2024

AI is emerging as a transformative force across industries, including finance and retirement planning. Fiduciaries, like everyone else, are increasingly turning to AI-powered tools and algorithms to optimize investment strategies, enhance decision-making processes, and improve participant outcomes. Integrating AI into 401(k) plan management, however, has its challenges. Fiduciaries have a duty to act prudently and in the best interest of plan participants, but what does that duty require in the era of AI? To find out, fiduciaries should consider conducting a formal evaluation of AI’s impact on their 401(k) plan. This may include examining the investment selection process, investment performance, the investment advisor/manager process, recordkeeper capabilities, the potential risks of using (or not using) AI, and AI’s impact on service and plan participants’ overall experience. Based on the results of that evaluation, ongoing fiduciary oversight of AI may be warranted.

401(k) Plan Fiduciaries and the Investment Committee Process

Most 401(k) plans are structured to provide a core menu of investments from which plan participants make specific choices. Many plans also offer a qualified default investment alternative selected by fiduciaries. Typically, 401(k) fiduciaries are organized into an investment committee (the Committee) whose duties are spelled out in a detailed charter. The Committee generally establishes the investment option menu from which plan participants can select, and frequently uses an ERISA 3(21) investment advisor to assist with this selection process. This advisor is a fiduciary under ERISA because the advice is given to the Committee for a fee. Even when using a 3(21) advisor, however, the advisor’s recommendations cannot simply be rubber-stamped, because the Committee retains ultimate authority to determine the investment options offered to plan participants. In addition, some 401(k) plans offer self-directed brokerage accounts (SDBAs), which provide hundreds, if not thousands, of potential investment choices. U.S. Department of Labor guidance regarding fiduciary obligations related to SDBAs is currently pending.

In some cases, the 401(k) plan sponsor, typically an employer, is not comfortable reviewing or making fiduciary investment decisions and instead appoints an ERISA 3(38) investment manager, who creates and implements the investment menu available to plan participants.

AI’s Impact on Retirement Plans and 401(k) Fiduciaries

BlackRock has announced “The AI revolution in retirement,” explaining that AI “can be used to extract early insights on economic activities across regions, which can be used to inform macro (e.g., regional) and micro (e.g., company level) tilts in portfolios.” BlackRock is not an anomaly; AI is gaining traction throughout the retirement plan, investment, and financial services industries.

Not everything about AI is positive, however. Just last month, the European Union approved a new artificial intelligence law creating a regulatory framework aimed at protecting consumers. The International Monetary Fund recently discussed how AI’s adoption in the financial services industry carries inherent risks, including embedded biases, privacy concerns, opaque outcomes, unique cyberthreats, and the potential to create new sources and transmission channels of systemic risk. Even Elon Musk has stated, on multiple occasions, that AI could be more dangerous to humanity than nuclear weapons.

Nearly every day, we hear about AI. What should fiduciaries be doing about it, if anything? The following are possible considerations:

  • Is the current 3(21) investment advisor using AI appropriately (or inappropriately)?
  • If there is a 3(38) investment manager, how much work is automated by AI, and does the plan sponsor understand the effect this may have on plan participants?
  • What risks are associated with using (or not using) AI to help select investment options?
  • What’s the risk of misinformation or a biased output?
  • Who is liable if the AI’s advice leads to poor investment decisions?
  • How does one evaluate the quality and accuracy of the content AI produces? If the AI generates investment advice or market analysis for a 401(k) plan, how do fiduciaries ensure the information is reliable and compliant with regulations?
  • If the 401(k) plan permits SDBAs, should they be limited to reduce the risks associated with AI?
  • Do the fiduciaries understand how the AI works, its biases, and the potential impact on investment decisions?
  • Does the recordkeeper use AI to combat cybersecurity threats, communicate with participants, or for other purposes?
  • Was a privacy or security assessment conducted on the AI systems? Vast amounts of sensitive participant data fuel these systems, making them prime targets for malicious actors seeking to exploit weaknesses in security protocols. A data breach or cyberattack could not only compromise the integrity of the retirement plan but also expose fiduciaries to legal and regulatory repercussions.
  • Does the recordkeeper permit the plan sponsor to opt out of the use of AI?
  • Are individual plan participants permitted to opt out of the use of AI?
  • Do the benefits of AI-powered recordkeeping functions outweigh the potential risks?
  • Should the Committee seek independent professional advice regarding what AI can provide to the Committee as a resource to satisfy fiduciary obligations under ERISA?

While AI may revolutionize 401(k) management, it has limitations. AI algorithms may not correctly account for unforeseen events and market fluctuations. AI excels at analyzing historical data and identifying patterns, but it may struggle to adapt to sudden changes or “black swan” events, leaving fiduciaries vulnerable to unexpected losses. And just as no two snowflakes or fingerprints are the same, no two AI algorithms are the same: each one’s capability depends on the quality and quantity of data available for training, resulting in vast differences in performance and reliability. When training data is limited, outdated, or biased, or when AI systems inadvertently perpetuate or amplify biases present in that data, the result may be unreliable outputs, skewed investment recommendations, and unequal treatment of plan participants, all leading to suboptimal investment decisions. Fiduciaries must exercise caution when relying on AI-generated recommendations and ensure that algorithms are trained on comprehensive and accurate data.

While AI may one day better understand human emotion, today’s AI algorithms may lack the human intuition and judgment necessary to navigate complex investment landscapes effectively. AI excels at processing vast amounts of data and identifying trends, but it may struggle to incorporate qualitative factors, market sentiment, and subjective assessments into investment decisions.

There are no clear-cut answers to these questions, which is where the fiduciary decision-making process comes into play.

Fiduciary Decision-Making Risks and Potential Liabilities

As every plan sponsor involved in 401(k) fee litigation knows, one of the most critical factors in ERISA litigation is the fiduciary process. A sound, documented process can be tantamount to a defense, supporting early dismissal of a lawsuit and reducing potential settlements. Failure to follow a documented process, by contrast, can lead to expensive litigation, large settlements, and the perception of impropriety coupled with reputational damage, even if the underlying ERISA claim is largely without merit.

AI may also become the basis for lawsuits against a plan’s sponsors and fiduciaries. If the 401(k) fee litigation roadmap is followed, AI-related claims will attack the fiduciary decision-making process, or the lack thereof, by alleging that plan participants were hurt by fiduciaries’ neglect of, indifference to, or lack of competency regarding AI’s impact on the retirement plan industry. Unlike traditional investment strategies, where decisions are made based on clear, understandable criteria, AI algorithms often operate as “black boxes,” making it challenging for fiduciaries to understand and justify the rationale behind AI-generated recommendations. This lack of transparency can erode trust and confidence in the retirement plan among participants and regulators, potentially creating litigation exposure. Claims may allege that investment performance suffered compared to other plans that used (or did not use) AI, that plan participants would have been better off if the recordkeeper had used (or not used) AI for cybersecurity, participant communications, or other vital plan functions, or other novel theories. These lawsuits will take a shotgun approach, even if there is little to no basis for the underlying claims. The goal is to reach discovery in the hope of uncovering a process problem and extracting a costly settlement.

Solutions

Integrating AI in 401(k) plan management presents opportunities and challenges for fiduciaries. While AI has the potential to revolutionize decision-making processes and improve participant outcomes, it also introduces new risks and limitations that must be addressed.

For ERISA litigation purposes, we believe the actual decision to use (or not use) AI will take a backseat to whether plan fiduciaries had a process to evaluate AI issues and whether that process was followed. Ultimately, fiduciaries may need to explicitly weigh the benefits of using (or not using) AI against the associated risks. Potential solutions include:

  • Adding the evaluation of AI to the Committee’s chartered duties. This includes defining clear roles and responsibilities for stakeholders involved in AI implementation, conducting regular audits and assessments of AI systems to ensure compliance with regulatory requirements and best practices, and implementing mechanisms for monitoring and mitigating algorithmic bias;
  • Questioning (and documenting the questioning of) the plan’s 3(21) advisor or 3(38) manager regarding their use of AI and related options. Fiduciaries must prioritize transparency and accountability, which includes documenting and disclosing the methodologies and assumptions underlying AI algorithms, as well as providing plan participants with clear explanations of how AI is utilized in investment decision-making processes;
  • Evaluating current recordkeeper AI capabilities, risks, and options;
  • Educating, training, and equipping fiduciaries with the knowledge and skills necessary to evaluate and leverage AI effectively. This includes staying informed about advancements in AI technology, understanding the potential risks and limitations associated with AI, and cultivating a culture of ethical and responsible AI usage;
  • Reviewing plan service provider contracts for any AI-specific provisions or any AI-related liability shifting or disclaimers;
  • Assessing cybersecurity and data privacy protocols, policies, and procedures to protect against potential threats and vulnerabilities associated with AI systems, and implementing robust cybersecurity measures, such as encryption, access controls, and intrusion detection systems, to safeguard sensitive participant information and prevent unauthorized access to or tampering with AI algorithms;
  • Asking AI-related questions when running an RFP for a new recordkeeper;
  • Treating AI as a complement to, rather than a replacement for, human judgment and intuition. Fiduciaries could leverage AI tools and algorithms to augment human decision-making processes and develop more robust, well-informed investment strategies that account for a broader range of factors and considerations;
  • Obtaining or increasing fiduciary liability insurance coverage; or
  • Doing nothing (which, for some, may be the best option).

401(k) fee litigation has taught us that not having a process, or omitting an analysis, is bad; deviating from a documented process is disastrous. Depending on the size of the plan, the sophistication of its fiduciaries, and the current participant mix, it may or may not be appropriate for fiduciaries to take specific action related to AI.

Given AI’s current trajectory, AI-related issues will likely become part of the fiduciary decision-making process or, at a minimum, influence fiduciary decisions. At Foley & Lardner LLP, we have AI, retirement plan, cybersecurity, labor and employment, finance, fintech, regulatory, and ERISA practitioners who routinely advise fiduciaries on the potential risks and liabilities associated with these AI-related issues. As AI continues to evolve and mature, fiduciaries must remain vigilant and proactive in adapting to the changing landscape of retirement planning to ensure the long-term success and sustainability of 401(k) plans.

© 2024 Foley & Lardner LLP