BENEFITS AND LEGAL RISKS OF EMBRACING GENERATIVE AI APPLICATIONS
Monday, March 27, 2023

We are currently witnessing an AI revolution, and an unprecedented AI arms race among Big Tech over the incorporation of AI into search engines and chatbots. Most notably, ChatGPT has been dominating news headlines. ChatGPT is a chatbot, developed by OpenAI, that understands human language prompts and can carry on human-like dialogue and produce human-like content. Generative artificial intelligence (“generative AI”) is a type of artificial intelligence (“AI”) technology that can produce various types of novel content, including text, images, and audio. In other words, it does not simply retrieve existing information from a database; it is able to generate new content. Generative AI starts with a prompt, which could be a spoken instruction, text submitted as part of a chat, or an image. AI algorithms then return new content in response to that prompt.

In general terms, ChatGPT is powered by generative AI; more specifically, it is powered by two main technologies:

  • First, large language models (“LLMs”), which use deep learning to process natural language. Deep learning is based on algorithms that use neural networks to recognize patterns and relationships among datasets, which is what allows LLMs to decipher natural human language;[1] and

  • Second, reinforcement learning from human feedback, which means that the model is trained and fine-tuned based on feedback from human reviewers.

ChatGPT is not the only tool powered by generative AI; others include DALL-E, Bard, and Harvey.  Bard is Google’s response to Microsoft’s incorporation of ChatGPT into its Bing search engine.  DALL-E creates images from a text description or generates text captions from images.  Harvey is also powered by generative AI, but it is tailored specifically as a legal research tool for lawyers.  Harvey provides lawyers with a natural language interface where they can describe, in simple instructions, the task they wish to accomplish, and the legal chatbot will generate a response.[2]
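To make the prompt-and-response mechanic concrete, the brief sketch below shows how a developer might send a text prompt to a chat-style generative AI model using the OpenAI Python library as it existed in early 2023. The API key, model name, and prompt are illustrative placeholders; this is a minimal sketch of calling such a model, not a description of how ChatGPT itself is built.

    import openai

    # Authenticate with an API key (placeholder; supplied by the user).
    openai.api_key = "sk-..."

    # A text prompt, submitted as part of a chat, is sent to the model.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize what generative AI is in two sentences."}],
    )

    # The model returns newly generated content in response to the prompt.
    print(response["choices"][0]["message"]["content"])

The same pattern, a prompt in and newly generated content out, underlies the chat, image, and legal tools described above.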

POTENTIAL PITFALLS

We are still in the early days of generative AI, and as with any new technology, there is much room for improvement. There are concerns on many fronts that will need to be addressed: the fact that generative AI applications may provide answers that sound correct and coherent but are factually wrong; plagiarism; the propagation of bias; intellectual property infringement; and data protection and privacy issues, to name a few.  If these concerns are addressed head on, the full potential of generative AI may be realized.

One of the main risks associated with AI is the potential for data privacy and security breaches.  AI systems rely on enormous amounts of data to learn and make decisions, and this data often contains sensitive information about individuals and organizations.  If that data is not properly secured, it can be accessed and misused by unauthorized parties, leading to data breaches.  To mitigate this risk, companies should ensure that they have robust data security and privacy protocols in place, including measures such as encrypting conversations, implementing access controls, and conducting regular security audits.

Another risk associated with AI is the potential for infringement of intellectual property rights.  AI tools, such as generative AI models like ChatGPT, are developed on large datasets that include publicly available text, social media posts, and web pages, which they draw on to generate new content.  This can lead to unintended infringement of patents, trademarks, or copyrights.  Organizations must conduct thorough IP clearance searches before developing and launching any AI-powered products or services, and should consider consulting with IP experts to ensure that their products do not infringe existing IP rights. Further, AI-based tools can make it difficult to determine who owns the rights to the content that is produced.  Because AI-based tools are capable of creating content without direct human input, it may not be clear who is responsible for the output.  If generative AI systems are used to create infringing content, the developer, user, or owner of the system may face legal liability. The legal landscape for IP infringement involving generative AI is still evolving, and it is unclear how courts will handle these types of cases.[3]

Overall, there are many risks and concerns associated with AI, and generative AI in particular, and it will be vital for developers, users, and policymakers to consider these risks and work to mitigate them. This may involve forming a government agency dedicated to technology oversight and regulation, developing legal frameworks to address these concerns, and implementing technical safeguards to prevent these risks from materializing.

GENERATIVE AI POTENTIAL

This technology has the capacity to disrupt many industries.  Generative AI can write code, design new drugs, develop products, redesign business processes, and revamp supply chains.  Investors are enthusiastic about the promise of AI, as can be seen in the sharp increase in AI investment: investments in AI grew 71% year over year in 2022, from $1.5 billion to $2.6 billion (BofA Global Research).  According to a study by Gartner, AI-enabled drug discovery and AI software coding have received the most funding.[4]  Many industries have already incorporated AI into some aspects of their business processes and operations, but the uses and capabilities of generative AI are still in their early stages and have not been fully realized.

LEGAL INDUSTRY

One of the industries that could be transformed by generative AI is the legal industry.  The revolutionary tools powered by generative AI raise many questions: will the legal industry prohibit or embrace this technology?  If it embraces the technology, what will adaptation look like in practice?  It even poses an existential question: does it have the potential to make lawyers obsolete?  Currently, some companies are prohibiting the use of generative AI tools in the workplace altogether due to the concerns described above, while others are fully embracing the technology and incorporating it into their practices.

AI is already being used in the legal industry to automate standard legal documents and to extract targeted contract provisions that assist lawyers in the M&A due diligence process.  Generative AI has the potential to take these uses a step further.  Generative AI tools, which can produce various types of content, could be used in different ways to increase efficiency and therefore reduce legal fees. Uses of generative AI in the legal industry can range from providing a basic overview of an area of law and assisting with legal research to producing a nuanced agreement from a knowledge bank of templates. Harvey is a prime example of a company tailoring the technology behind ChatGPT to legal tasks; although it is currently in beta, it shows the possibilities. For transactional practices, which rely heavily on templates, such a tool could evolve to the point where a practitioner can ask a legal chatbot, in a conversational manner, to produce a purchase agreement simply by describing the main business and legal terms and the specific provisions the practitioner would like to see.  Further, a generative AI tool trained to spot off-market provisions could eventually mark up an opposing counsel’s agreement.
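As a purely hypothetical illustration of that drafting workflow (and not a depiction of Harvey’s actual interface), the sketch below assumes a chat-style model is given a firm template and a short list of deal terms and is asked to produce a first draft. The template, deal terms, model name, and API key are placeholders, and any output would still require attorney review.

    import openai

    openai.api_key = "sk-..."  # placeholder API key

    # Hypothetical inputs: a firm-approved template and the deal terms the practitioner describes.
    template = "[Firm's standard purchase agreement template would be inserted here]"
    deal_terms = ("Purchase price of $10 million; Delaware governing law; "
                  "18-month survival period for indemnification claims.")

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a drafting assistant. Work only from the template "
                        "provided and do not invent terms."},
            {"role": "user",
             "content": f"Using this template:\n{template}\n\n"
                        f"Prepare a first-draft purchase agreement reflecting these terms:\n{deal_terms}"},
        ],
    )

    # The generated draft is a starting point for attorney review, not a finished agreement.
    print(response["choices"][0]["message"]["content"])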

The next question is whether this will eliminate the need for a real-life attorney.  The answer is no. Individuals with legal training will still be needed to review the outputs of AI and to make judgments based on experience.  AI will make lawyers more efficient by automating time-consuming administrative and routine tasks.  Of course, the great promise of generative AI will only be realized if the pitfalls and concerns are addressed first and the responsible development of generative AI tools is put front and center.


FOOTNOTES

[1] Me, Myself and AI – Artificial Intelligence Primer. February 28, 2023. BofA Global Research.

[2] https://techcrunch.com/2022/11/23/harvey-which-uses-ai-to-answer-legal-questions-lands-cash-from-openai/

[3] A portion of this article was generated by ChatGPT.

[4] https://www.gartner.com/en/articles/beyond-chatgpt-the-future-of-generative-ai-for-enterprises

 
