From Enhancement to Dependency: What the Epidemic of AI Failures in Law Means for Professionals
Tuesday, August 19, 2025

Key Takeaways

  • AI dependency is now a widespread, cross-professional risk.
  • Professionals risk losing core competencies and judgment.
  • Liability frameworks are rapidly evolving to address these changes.
  • Managing dependency requires deliberate policy and continuous competency development.

When New York lawyer Steven Schwartz filed a brief in Mata v. Avianca containing a half-dozen fictional case citations generated by ChatGPT, the legal world saw it as an embarrassing one-off. But what began as an "unprecedented circumstance" has proven to be the tip of an AI-dependent iceberg now surfacing in multiple US and international courts, scientific studies, and even US government cabinet-level policy reports.

Since mid-2023, more than 300 instances of AI-driven legal hallucinations have been documented, with at least 200 recorded in 2025, only eight months into the year. From Arizona to Louisiana, from Florida to courts in the UK, Australia, Canada, and Israel, attorneys and pro se litigants are submitting briefs riddled with fabricated case citations generated by AI tools.

In the first two weeks of August 2025, three separate federal courts sanctioned lawyers for AI-generated hallucinations, including one attorney who used a well-known legal research database that produced fabricated citations. The epidemic has grown so severe that courts are now distinguishing between "intentional deception" and "inadvertent reliance on AI," though both can result in sanctions. As one federal judge articulated in an August 2024 court order, while misuse of AI could be viewed as "deliberate misconduct in an attempt to deceive the court," the standard of responsibility is clear: "even if misuse of AI is unintentional," the attorney is still fully responsible for the accuracy of their filings.

And now, with the increasing sophistication and prevalence of AI tools in professional contexts, we may be witnessing a systematic professional dependency that could fundamentally alter existing liability frameworks. It is no longer a question of whether professionals will use AI, but whether they can maintain the independent competence that professional liability standards require.

Professional Dependency Evolution

Professional AI dependency follows a predictable pattern that organizations rarely recognize until it is too late:

  • Phase 1 (Enhancement): AI assists with routine tasks, improving efficiency while professionals maintain a full understanding of underlying processes.
  • Phase 2 (Integration): AI handles increasingly complex tasks as professionals become comfortable with algorithmic assistance and begin relying on it for standard workflows.
  • Phase 3 (Dependency): Professionals struggle to perform tasks without AI assistance, losing familiarity with manual processes and independent analysis capabilities.
  • Phase 4 (Atrophy): The skills necessary for independent practice deteriorate, rendering professionals unable to effectively verify AI outputs or operate when systems fail.

Recent studies suggest many knowledge workers have reached Phase 3, particularly in legal research, medical diagnosis, and financial analysis, fields where AI capabilities have expanded most rapidly.

The numbers tell a stark story of accelerating dependency. What was once dismissed as isolated poor judgment has metastasized into a profession-wide phenomenon. The database maintained by Damien Charlotin documents the grim progression: from a handful of cases in 2023 to over 300 identified instances of AI hallucinations in court filings. 

This isn't just about individual lapses anymore. It's about systematic professional failure on an unprecedented scale.

Professional Dependency: Lessons and Liabilities

The Mata v. Avianca incident exposed the real risks of algorithmic dependency: a lawyer trusted machine outputs over professional judgment, leading to sanctions and reputational harm. Informal surveys indicate that a significant percentage of litigators now rely on AI tools for legal research, and, increasingly, many fail to verify the sources those tools produce. This reliance undermines the traditional expectation that professionals can conduct, analyze, and verify their own work. The shift compels courts, insurers, and regulators to revisit what constitutes "competent" practice when automation becomes standard, creating a paradox in which both dependence and non-use can be sources of liability.

The way forward? Conscious collaboration, in which organizational safeguards, continuous education, and independent skill maintenance become as central as technological proficiency.

Standard of Care Implications

Professional malpractice law typically defines competent practice by comparing conduct to peers in similar circumstances. But AI creates a paradox: as more professionals adopt AI tools, using AI may become the standard of care while simultaneously creating new liability categories.

Courts may soon address questions including:

  1. Whether competent legal practice requires using available AI research tools.
  2. Whether attorneys can claim malpractice protection for decisions made with AI assistance when the AI provided incorrect information.
  3. What level of AI output verification constitutes reasonable professional diligence.
  4. Whether professionals are liable for failing to use AI when it might have prevented errors.

The emerging consensus suggests that courts will likely hold professionals responsible for understanding the capabilities and limitations of AI tools, while potentially requiring the use of AI when it becomes standard practice. This creates a double bind: professionals may be liable both for misusing AI and for failing to utilize it effectively.

Cross-Professional Impact

Similar dependency patterns have emerged across other professional services areas:

  • Healthcare: Physicians increasingly rely on AI diagnostic tools, which may erode clinical observation skills and raise concerns about their ability to independently verify algorithmic diagnoses.
  • Accounting: Automated systems handle routine functions, but practitioners may lose the detailed financial analysis skills needed to catch errors AI systems miss.
  • Engineering: Design software with AI optimization may produce solutions that human engineers cannot evaluate from first principles.
  • Consulting: Strategic consultants who use AI for market analysis risk losing the independent research and analytical capabilities that justify professional fees.

Insurance Industry Response

Professional liability insurers have begun modifying policies to address AI-related risks, adding new exclusions and requirements:

  • AI use disclosure requirements and adequate training demonstrations.
  • Competency maintenance provisions for ongoing education in both AI capabilities and traditional methods.
  • Verification standards with explicit requirements for human verification of AI outputs.
  • System failure coverage addressing liability when AI systems fail and professionals cannot maintain acceptable service standards.

Regulatory Adaptation

Professional licensing bodies are struggling to adapt competency standards for the AI era, but regulatory adaptation nearly always lags behind technological adoption, creating uncertainty about professional obligations and liability standards.

The New Reality: Remedial Measures That Work

Courts are recognizing specific remedial steps that can mitigate sanctions:

  1. Immediate Withdrawal - Pull problematic pleadings the moment errors are discovered.
  2. Candid Disclosure - Full transparency about AI use and verification failures.
  3. Compensation - Voluntarily cover opposing counsel's fees for wasted time.
  4. Systemic Reform - Implement robust AI usage policies with documented safeguards.

As the court in Johnson v. Dunn (N.D. Ala., July 2025) recently found, these remedial steps can mean the difference between a warning and disbarment proceedings. 

Risk Management Strategies

Organizations can mitigate AI dependency risks through systematic approaches, preserving human competency alongside AI efficiency:

  1. Require regular manual skills assessments and testing.
  2. Implement dual review protocols for AI outputs, accompanied by clear documentation.
  3. Maintain non-AI backup processes to ensure services can continue when technology fails.
  4. Train teams in both AI and traditional skills for effective AI oversight.
  5. Communicate openly with clients about AI use, explicitly discussing its capabilities, limitations, and inherent risks.

Implementation Framework

Conscious Collaboration Model: The most successful organizations develop approaches that leverage AI capabilities while preserving human judgment and competency. This involves utilizing AI for efficiency while maintaining human oversight over critical decisions, fostering AI literacy alongside traditional professional skills, and developing expertise in AI evaluation rather than merely operating it.

Professional Standards Evolution: As AI becomes essential infrastructure for professional practice, competency standards must evolve to address both AI proficiency and independent practice capabilities.

The Urgency of Now

The transformation from "unprecedented circumstance" to hundreds of documented failures represents more than a technological challenge; it is an existential crisis for professional competence. When attorneys using Westlaw Precision, a tool specifically designed for legal research, still submit hallucinated citations, we must confront an uncomfortable truth: the problem isn't just the technology; it's our wholesale abdication of verification responsibilities.

As AI becomes essential infrastructure for professional practice, the challenge is not avoiding AI dependency but managing it consciously. The path forward demands more than policies and procedures. It requires a fundamental recommitment to the core principle that makes us professionals professional: we, not our tools, bear ultimate responsibility for the integrity of our work. Whether that work product emerges from hours in a law library or seconds of AI processing, the signature on the brief remains human. 
