IMS Expert Insights: The Complex Litigation Landscape of Contemporary AI
Friday, October 27, 2023

AI is now a ubiquitous term in the common vernacular, but few understand the complexities behind the term or how artificial intelligence is actually implemented today.

As a grad student in the mid-1980s studying electrical and computer engineering as well as computer science, I produced one of the early AI expert systems that incorporated cognitive methods gleaned from psychology. An expert system is a computer system emulating the decision-making ability of a human expert.1 This is accomplished by creating a database of the knowledge an expert holds in a particular area (medicine being the most popular), often formatted as question-and-answer or term-and-definition pairs. One then queries the database, and the system attempts to answer the question the way the expert would. If this reminds you of ChatGPT, Alexa, and other common AI systems, it should—all of them are variations on the same theme, just with far more advanced technology than what was available 40 years ago.
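To make the idea concrete, here is a minimal sketch in Python of an expert-system-style knowledge base built from term-and-definition pairs and queried with simple keyword matching. It is my own simplified illustration, not the system described above, and the medical entries are hypothetical placeholders.

```python
# Minimal sketch of an expert-system-style knowledge base (illustrative only).
# Knowledge is stored as term/definition pairs and queried by keyword matching.

MEDICAL_KB = {  # hypothetical knowledge an expert might encode
    "fever": "Body temperature above 100.4 F; a common sign of infection.",
    "hypertension": "Persistently elevated blood pressure, typically above 130/80.",
    "tachycardia": "Resting heart rate above 100 beats per minute.",
}

def query_expert_system(question: str) -> str:
    """Answer the way the encoded expert would: find the knowledge-base
    term mentioned in the question and return its stored entry."""
    q = question.lower()
    for term, answer in MEDICAL_KB.items():
        if term in q:
            return answer
    return "The knowledge base has no entry for that question."

if __name__ == "__main__":
    print(query_expert_system("What is tachycardia?"))
```

Real expert systems used far richer rule engines and inference chains, but the pattern of querying an expert's captured knowledge is the same.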

Later, when I became a professor, I built a system with my research team for NASA that we called Automated Knowledge Generation or AKG. AKG was able to scan engineering drawings of process systems and generate a knowledge base (database) for the Knowledge-Based Autonomous Test Engineer (KATE) that NASA had developed to dynamically monitor and diagnose complex launch systems. We copyrighted AKG, which brings us to the core of this article: the litigation of intellectual property associated with modern AI systems.

The Current State of AI

Natural language processing, or NLP, was something of the AI holy grail back in the day. The goal of NLP is to enable a machine to converse using human language. Normally, we interact with a computer through the interface provided by whatever program we are running, or by coding our instructions in a programming language. What was missing was the ability to command a machine using human speech and have the machine respond in kind. Ancillary to this objective is the notion of machine translation of human language. Early work tried to reduce human language to syntactic rules that could be coded into a database, much like an expert system. Natural language proved far too complex for this approach, and in the ’90s, NLP research shifted toward statistical approaches that were eventually incorporated into the burgeoning deep learning (DL) techniques that use artificial neural networks to produce generative AI.
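The contrast between those two eras of NLP can be shown with a toy example. The sketch below (my own, greatly simplified) places a single hand-coded syntactic rule next to a statistical bigram model built by counting word pairs in a tiny, made-up corpus; the sentences and probabilities are illustrative only.

```python
# Toy contrast between rule-based and statistical NLP (illustrative only).
from collections import Counter

sentences = ["the valve is open", "the pump is open", "the valve is closed"]

# Rule-based view: one hard-coded pattern that either matches or fails.
def rule_parse(sentence: str) -> bool:
    words = sentence.split()
    return len(words) == 4 and words[0] == "the" and words[2] == "is"

# Statistical view: count word pairs and estimate P(next word | current word).
bigrams = Counter()
for s in sentences:
    w = s.split()
    bigrams.update(zip(w, w[1:]))

def next_word_prob(current: str, nxt: str) -> float:
    total = sum(c for (a, _), c in bigrams.items() if a == current)
    return bigrams[(current, nxt)] / total if total else 0.0

print(rule_parse("the valve is open"))   # True
print(next_word_prob("is", "open"))      # 0.666... (2 of 3 observed pairs)
```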

Just as the name implies, generative AI is a system capable of generating a response to a query in the form of text, images, video, or even music. It requires sophisticated NLP as well as DL. Deep learning emerged in the mid-1980s and refers to the use of multi-layer neural networks to accomplish the gamut of AI-based technologies.2 Major breakthroughs came in the late ’90s as convolutional neural networks were developed that could handle large datasets efficiently, paving the way for today’s large language models (LLMs).
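For readers unfamiliar with the terminology, the NumPy snippet below shows what "multi-layer" means in practice: an input vector passed through successive layers of weights and nonlinearities. It is a bare-bones sketch with random, untrained weights, not a depiction of any production deep learning system.

```python
# A minimal multi-layer ("deep") neural network forward pass in NumPy.
# Weights are random and untrained; the point is only the layered structure.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three weight matrices = a small multi-layer network (4 -> 8 -> 8 -> 2).
W1, W2, W3 = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 2))

def forward(x: np.ndarray) -> np.ndarray:
    """Pass an input vector through successive nonlinear layers."""
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h2 @ W3  # raw output scores

print(forward(np.array([0.5, -1.0, 0.25, 2.0])))
```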

A good example of a large language dataset is the Cornell Movie-Dialogs Corpus. Researchers at Cornell University compiled a database of roughly 220,000 conversational exchanges of movie character dialog from more than 600 movies. When deep learning is applied to that database and a large language model is created, the LLM becomes conversant in the same way the characters in those movies are. Many other datasets exist, and some are very specific to specialized areas of expertise. When a generative AI with deep learning and a generalized large language dataset is developed, we have a system like OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT). Space and focus constraints preclude a discussion of transformer technologies in this piece, but suffice it to say that the notion of a transformer in DL was transformative in making systems like ChatGPT realizable.
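As a hypothetical illustration of how such a corpus feeds a conversational model, the sketch below pairs each line of dialog with the reply that follows it, producing the kind of prompt-and-response training pairs a deep learning system learns from. The lines shown are stand-ins, not actual corpus entries.

```python
# Illustrative sketch (not the researchers' code) of turning dialog lines
# into prompt/response training pairs for a conversational language model.

conversation = [  # stand-in lines; the real corpus holds ~220,000 exchanges
    "Can we make this quick?",
    "Sure, what's up?",
    "I need a ride after practice.",
    "I'll pick you up at six.",
]

def to_training_pairs(lines: list[str]) -> list[tuple[str, str]]:
    """Pair each utterance with the reply that follows it."""
    return list(zip(lines[:-1], lines[1:]))

for prompt, response in to_training_pairs(conversation):
    print(f"PROMPT:   {prompt}\nRESPONSE: {response}\n")
```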

What generative AI brings to the party is the ability for companies to add human-level response characteristics to automated sales and marketing systems. The most expensive resource in any enterprise is the human resource, and generative AI allows companies to respond to customers 24/7 with highly accurate and adaptive responses to queries. As an example, Bloomberg now has BloombergGPT, “a 50 billion parameter language model that is trained on a wide range of financial data.”3 As the complexity and importance of the information increases, so does its value. The underlying idea—although you most likely will not see it stated anywhere—is to replace financial advisors. This underscores the great public fear of artificial intelligence: loss of jobs. An important point, however, is that a generative AI is only as good as the data it was trained on. Examples abound of the gaffes and incorrect conclusions these systems have produced.

Software vs. Hardware

So far, we have discussed the software aspect of contemporary AI, but what of the hardware? AI research officially began when John McCarthy, a computer scientist at Dartmouth, coined the term in his 1955 proposal for the Dartmouth summer workshop held in 1956.4 Early on, the overall idea was to program a computer to think like a human. It was understood that hardware was an issue because “The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.” Researchers also identified artificial neural networks as a key technology in the effort, and they recognized that simulating neural processing on conventional computer hardware was slow and inefficient. Most of the modern advances in AI have therefore had to wait for the hardware technology to catch up—and it has.

In prepared remarks on Intel’s second-quarter 2023 earnings, delivered on the company’s July 2023 earnings call, Intel CEO Pat Gelsinger stated:

“Our strategy is to democratize AI–scaling it and making it ubiquitous across the full continuum of workloads and usage models. We are championing an open ecosystem with a full suite of silicon and software IP to drive AI from cloud to enterprise, network, edge, and client, across data prep, training, and inference, in both discrete and integrated solutions.”5

The “silicon IP” he is talking about is Intel’s upcoming Meteor Lake processor family, which will incorporate a dedicated AI engine—specialized hardware designed to run neural networks efficiently. All of this is in response to other computer hardware companies, such as AMD and Nvidia, that are fielding AI-specific hardware. In the early days, the hardware consisted of general-purpose computers on which the neural algorithms, which are inherently parallel, were constrained by the software to run sequentially, a very slow arrangement. AI-specific hardware, by contrast, is configured to run the neural algorithms as close as possible to the parallel configuration of biological networks.
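The difference between sequential and parallel execution of a neural layer can be seen in a few lines of NumPy. The sketch below (illustrative only) computes the same layer output one multiply-add at a time, the way constrained sequential software behaves, and then as a single matrix operation of the kind that AI-specific hardware spreads across many processing units at once.

```python
# Sequential vs. parallel-friendly computation of one neural-network layer.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=256)          # layer input
W = rng.normal(size=(256, 256))   # layer weights

def layer_sequential(x, W):
    """One multiply-add at a time, as sequential software would run it."""
    out = np.zeros(W.shape[1])
    for j in range(W.shape[1]):        # one output neuron at a time
        for i in range(W.shape[0]):    # one weight at a time
            out[j] += x[i] * W[i, j]
    return out

def layer_parallel(x, W):
    """A single bulk matrix operation that parallel hardware can accelerate."""
    return x @ W

assert np.allclose(layer_sequential(x, W), layer_parallel(x, W))
```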

Patent Litigations in the Contemporary AI Space

Getting a patent on software is problematic since “Neither software nor computer programs are explicitly mentioned in statutory United States patent law.”6 The courts and the legislature have allowed some software patents, but in general, the courts and the USPTO want to see more than an abstract idea. Hardware IP is tangible; there is a machine to be described, and its utility and inventiveness can be argued. When selecting experts for AI-related IP litigation, hardware-knowledgeable experts are often the better choice because, by the nature of computing hardware, they are also knowledgeable in software. The reverse does not hold for experts versed solely in software: the training of an engineer, with its extensive coursework in mathematics and physics, is very different from that of a computer scientist, for example.

Just as validity issues surround generative AI systems, the validity of an expert’s opinion is predicated on their training and experience. Since the value of expert opinion in the patent space rests on the successful explanation of complex technical topics, strength of training and hands-on experience are paramount.

What’s Next for AI?

Almost daily, one reads about various AI conflicts and disputes that range from ethical issues, job displacement, and economics to questions about regulation and oversight. As development continues and more domains adopt AI systems, these issues will continue to surface and be debated. Elon Musk has called artificial intelligence “our biggest existential threat” and highlighted the necessity of regulation and oversight, which is a current focus in the US Congress as lawmakers seek to regulate without stifling the advancement of AI technologies.7

So, what do we watch for moving forward? At some point, a question of fact—or even one of law—will be impacted by the opinion of an AI. How will the truth be verified, and can it be verified? How extensively will the courts be influenced by AI? Unlike an expert, the source is not human and does not have the accredited and verified education (and/or the record of experience) that establishes the credibility the court now expects and demands. This is new ground, and the issues surrounding it have only just begun to surface.

In a twist of irony, rather than asking why we should not be concerned, I asked ChatGPT itself, “What are some of the current conflicts and disputes about AI?” It responded with a numbered list of items and an explanation of each; the explanations have been omitted here for brevity.


As of my last knowledge update in September 2021, there were several ongoing conflicts and disputes surrounding AI. Keep in mind that the situation might have evolved since then. Here are some of the key issues:

1.   Ethical Concerns

2.   Privacy and Surveillance

3.   Autonomous Weapons

4.   Job Displacement and Economic Impact

5.   Regulation and Oversight

6.   National and International Competition

7.   Intellectual Property and Patents

8.   Data Privacy and Ownership

9.   Deepfake Technology

10. Explainability and Transparency

11. AI in Healthcare and Medicine


Even though this ChatGPT opinion is dated, the issues remain—and will for the foreseeable future. The ease of accessing these AI tools and their lack of regulation and oversight demand an increased awareness of these issues from both litigators and experts.

References

1 Jackson, P. (1999). Introduction to Expert Systems. Addison-Wesley.

2 Dechter, R. (1986). Learning while searching in constraint-satisfaction problems. Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), 178-185.

3 Wu, S., Irsoy, O., Lu, S., Dabravolski, V., Dredze, M., Gehrmann, S., Kambadur, P., Rosenberg, D., & Mann, G. (2023). BloombergGPT: A Large Language Model for Finance. arXiv:2303.17564.

4 McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12

5 Intel Corporation Press Release, 2023, “Comments from CEO Pat Gelsinger and CFO Dave Zinsner,” retrieved from https://download.intel.com/newsroom/2023/corporate/2Q2023-Earnings-Call-Comments.pdf

6 Wikipedia contributors. (2023, April 8). Software patents under United States patent law. In Wikipedia, The Free Encyclopedia. Retrieved 17:31, July 28, 2023, from https://en.wikipedia.org/w/index.php?title=Software_patents_under_United_States_patent_law&oldid=1148753975

7 Gibbs, S. (2014, October 27). Elon Musk: artificial intelligence is our biggest existential threat. The Guardian. https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat

This article was authored by Dr. Harley R. Myler, IMS Elite Expert.
 
