My Employees Are Using ChatGPT. What Now?
Monday, July 17, 2023

In the rapidly evolving world of artificial intelligence (AI), one development stands out for its transformative potential: the rise of generative AI tools. Many major technology companies are building the large language models (LLMs) that power these tools, training them on billions of inputs. But among its peers, OpenAI's ChatGPT has emerged as a game-changer, becoming the fastest web platform to reach 100 million users. This milestone is not just a testament to the tool's capabilities but also a clear indication that generative AI is here to stay.

The Proliferation of ChatGPT in the Workplace

ChatGPT is an LLM that has been fine-tuned to serve as a general-purpose chatbot. The current base models are OpenAI’s GPT-3.5 and GPT-4 LLMs. ChatGPT understands and responds to natural language prompts and is beginning to find its way into various professional settings. From small startups to multinational corporations, employees across the spectrum are leveraging this tool to enhance their productivity and streamline their workflows.
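
For readers unfamiliar with how these tools are accessed programmatically, the sketch below shows a minimal chat completion request using OpenAI's Python library as it existed around mid-2023. The model name, prompt, and environment variable are illustrative assumptions, not a recommendation of any particular configuration.

```python
# A minimal sketch of a chat completion request, assuming the openai
# Python package (circa mid-2023) and an API key set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # GPT-4 is exposed under a separate model name
    messages=[
        {"role": "system", "content": "You are a helpful workplace assistant."},
        {"role": "user", "content": "Draft a two-sentence status update for a delayed project."},
    ],
)

# The generated text arrives as an ordinary string the caller can review.
print(response.choices[0].message.content)
```

Note that everything placed in `messages` leaves the company's systems and is processed by the vendor, which is precisely why the risks discussed below matter.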

The applications of generative AI tools in the workplace are diverse. They are being used to draft content, generate documents, conduct fact-checking and research, and even write software code. This widespread use, while enhancing productivity, also brings with it a host of potential risks that organizations need to address. The integration of AI into the workplace is not a simple plug-and-play scenario; it requires careful consideration and strategic planning.

The Need to Provide Guidance to Employees

Given the potential risks associated with the use of ChatGPT and similar tools, it's crucial for companies to provide guidance to their employees. This guidance can take the form of a formal policy or a more general best practice guide that leans on existing information security policies implemented by the company.

While a few large companies may have the resources to build their own internal LLMs, most companies do not. For these companies, adopting the use of third-party generative AI tools safely and with guidance can help them remain competitive against companies with access to their own LLMs. However, this doesn't mean that smaller companies should rush to adopt these tools without due diligence. The potential risks and challenges associated with the use of AI tools like ChatGPT should be carefully evaluated and mitigated.

Understanding the Risks

The use of ChatGPT and other LLMs in the workplace can pose several risks, including:

Confidentiality: Sharing confidential company or client information with generative AI systems may violate contractual obligations or expose trade secrets to public disclosure. This is a significant concern, especially for companies that handle sensitive data. Employees need to be aware of the potential risks associated with sharing confidential information with AI tools. In addition, most generative AI tools are cloud-based or software-as-a-service, meaning that data is being sent to a third-party service provider. If companies provide confidential information to these third-party platforms and the information is then fed back into the model for training purposes, companies could lose trade secret protection for that information.

Personal Data and Privacy Violations: Sharing personal information about customers, clients, or employees with generative AI systems can create privacy risks. This is particularly relevant in the context of stringent data protection regulations such as the General Data Protection Regulation (GDPR) in the European Union and the increasing number of state data privacy laws in the United States, including the California Consumer Privacy Act (CCPA). Further, if private information ever enters the training data of an LLM, e.g., if the information is scraped from a data leak, a malicious actor could feasibly extract that private information from the model.

Quality Control: Generative AI tools, while remarkable in their capabilities, are susceptible to producing erroneous outputs, leading to potential quality control issues. This propensity for inaccuracy may be further exacerbated by the phenomenon known as automation bias, where users over-rely on the outputs of these AI tools, often without questioning their accuracy. Despite the substantial productivity gains AI tools offer, they are not impervious to faults. The danger lies in the fact that generative AI tools can produce incorrect results in a very convincing manner, mimicking human writing and leading users to trust their authenticity.

As these generative AI tools evolve over time, they will likely improve in accuracy and exhibit fewer ‘hallucinations’ or false creations, making it increasingly challenging to detect incorrect information. Don’t be fooled: the term ‘hallucinations’ is simply a euphemism for ‘mistakes’. This reduced detectability, coupled with automation bias, escalates the likelihood that individuals will accept outputs without thorough scrutiny, thereby increasing the potential for misinformation to be propagated or erroneous decisions to be made. Companies in particular (and by extension, their employees) need to be vigilant and cannot absolve themselves of responsibilities and liabilities by relying solely on generative AI tools. It is incumbent upon them to ensure that any information generated and acted upon is as accurate and reliable as possible.

Bias and Discrimination: Generative AI systems can produce biased and discriminatory results. As LLMs are trained on data available on the internet, they are capable of repeating biases found there. If companies rely on generative AI tools, they need to ensure that they do not engage in any biased or discriminatory actions based on the use of these tools. These risks are particularly present and acute in connection with employee recruiting, screening, and hiring.

Product Liability: Generative AI tools can be used in product research, design, development, and manufacturing phases. Products may be physical (e.g., construction products) or software-based (such as autonomous driving technology). If a product or system powered by AI makes a decision that harms a user, it could result in claims and liability for all actors and organizations in the “chain” of AI development and use.

Intellectual Property Ownership: The use of generative AI systems raises complex IP issues, including whether documents or code generated by generative AI systems are entitled to legal protection or whether the company can be held liable for using the output of generative AI systems. The U.S. Copyright Office has stated it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” This is a grey area in the law that is yet to be fully resolved.

Separately, there is still some uncertainty around the ownership of data generated by generative AI tools. The terms of use of certain tools state that the user owns the output. However, owners of the data that was used to train the LLM may also have certain ownership claims.

Misrepresentation: Claiming output is human-generated when it's not can lead to consumer protection claims or other public relations concerns depending on its use. Companies should be aware that there is a risk of unfair or deceptive practice claims under state or federal law if they misrepresent their use of AI tools. Accordingly, transparency is key when using AI tools. In addition, social media is notorious for calling out content that was created using generative AI but not disclosed, which at a minimum would be a blight on the user’s reputation and credibility.

Insurance Coverage: Depending on the policy, insurance may not provide coverage for liability resulting from the use of generative AI tools. As generative AI tools become more integrated into business operations and automate more functions, the possibility of adverse events rises, bringing about additional exposure to companies. It is possible these tools fall outside the scope of existing policies.

Future Requirements: There may be future requirements to clearly identify AI-generated content if a company needs to make representations in a transaction (such as a sale or financing) or in connection with a commercial agreement with a vendor or a customer. Other regulations relating to transparency, consent, and notice will likely be enacted.

Potential Employment Discrimination: The use of generative AI systems may adversely affect the performance of individuals who are not using it relative to their peers. This could potentially lead to employment decisions stemming from the use or non-use of AI tools, which may have a discriminatory or adverse impact on a protected class of individuals (e.g., persons 40 and over). Companies need to ensure that the use of AI tools does not create an unlevel playing field within the workplace or potential bias in employment decision-making.

Addressing the Risks

Companies should provide employees guidance on how to use generative AI tools responsibly, including how to avoid the risks associated with their use. The guidance can come in the form of a new generative AI acceptable use policy or something less formal, such as a best practices guide. In addition, companies should consider additional mechanisms, including systems for monitoring internal use and procedures for reporting inadvertent sharing of confidential information with generative AI systems.

Confidentiality and Privacy: In light of potential data breaches and the privacy risks associated with sharing personal data, the company’s generative AI acceptable use policy or guidance should clearly define what data can and can't be shared with AI tools, with a particular emphasis on protecting sensitive company, client, and employee information. This approach helps ensure contractual obligations and trade secrets are maintained while complying with data protection regulations such as GDPR. Further, if companies use generative AI tools offered by third-party service providers, they must do so while abiding by contractual obligations to their clients, vendors, employees, and others, and must confirm that they are authorized to share the information with the platform.
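
As one illustration of how such a policy could be backed up technically, the following hypothetical sketch screens a prompt against a few simple patterns for obvious identifiers before it is forwarded to a third-party tool. The patterns and function names are invented for this example; real deployments would rely on far more robust data loss prevention tooling.

```python
# Hypothetical pre-submission screen: flag prompts containing obvious
# sensitive identifiers before they reach a third-party generative AI tool.
# These regexes are illustrative only; real DLP requires much more.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Draft a renewal notice for jane.doe@example.com, SSN 123-45-6789."
violations = screen_prompt(prompt)
if violations:
    print("Prompt blocked; remove:", ", ".join(violations))
else:
    print("No blocked patterns found; prompt may be sent to the approved tool.")
```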

Employees should be asked to opt out of having any data used for machine learning training if such an option is available. For certain service providers, users may opt out by sending an email or filling out a form provided by the vendor.

Companies should also consider providing a reporting mechanism through which employees can report to management if confidential or sensitive data was inadvertently shared with a generative AI tool. This reporting mechanism could involve informing a manager or sending an email to a designated email address managed by the information security team.

Quality Control and Factual Inaccuracies: Due to the potential quality control issues arising from inaccuracies in AI-generated content, the company’s acceptable use policy or guidance should require that employees review any generated output for accuracy. Employees should apply their expertise and exercise sound judgment on how best to use the output. In addition, if companies decide to provide the results of generative AI tools directly to their clients and customers without human review, they must understand the higher level of risk this entails, as these tools can produce incorrect or even offensive results.
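
One way to operationalize this human review requirement is to treat AI output as an unreleasable draft until a named reviewer has signed off. The sketch below is a hypothetical illustration of such a gate; the types and names are invented for this example and would need to fit a company's actual workflow.

```python
# Hypothetical human-in-the-loop gate: AI-generated drafts cannot be
# released to a client until a named reviewer has approved them.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    ai_generated: bool
    approved_by: str | None = None  # set only after a human checks accuracy

def release(draft: Draft) -> str:
    """Refuse to release unreviewed AI-generated content."""
    if draft.ai_generated and draft.approved_by is None:
        raise PermissionError("AI-generated draft requires human review before release.")
    return draft.content

draft = Draft(content="Summary of research findings for the client.", ai_generated=True)
draft.approved_by = "reviewer@example.com"  # recorded after the reviewer signs off
print(release(draft))
```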

Intellectual Property Risks: Because copyright laws vary across jurisdictions, companies may not have copyright protection over the output of a generative AI solution. Accordingly, others may be able to copy that output without any risk of copyright infringement. Separately, there are also concerns that owners of the data used to train a generative AI tool may have certain ownership claims on the output generated. Companies should be judicious about the use of such content and understand that they may not have clear ownership of, or copyright protection for, the generated content.

Misrepresentation: To avoid misrepresentation claims and potential public relations issues, companies need to be transparent about their use of AI tools. In a world where consumers value authenticity, revealing that content was generated using generative AI is not just ethical, but could be appreciated by the audience. Certain jurisdictions already have notice provisions that require informing consumers about the use of a chatbot or other automated bots, and additional regulations will likely be passed that may require providing notice of the use of generative AI tools.

Insurance: With the risks associated with AI usage, insurance policies need to evolve in tandem. Companies must engage in discussions with insurers and insurance brokers to ensure coverage extends to potential liabilities resulting from AI use.

Regulations: Given the fast-paced evolution of AI regulations, it's important for businesses to stay ahead of the curve. Anticipating and preparing for potential regulatory changes will keep the company compliant and limit future disruptions.

Employment: To ensure fair play within the workplace, companies need to monitor the use of AI tools like ChatGPT. Measures should be implemented to ensure that the performance evaluation of individuals not using AI tools is not adversely affected, and companies should consider training for employees who are not early adopters of generative AI tools, thereby mitigating potential discrimination. Moreover, the company’s generative AI acceptable use policy or guidance should consider restricting employees’ use of personal email accounts to log into AI tools for work-related activities.

More generally, each company should evaluate these risks and assess what measures to take when implementing a generative AI acceptable use policy. Companies should be reminded that taking a very conservative approach to the use of generative AI tools may create an environment in which employees still use these tools, but on personal devices and outside the purview of the company. A more nuanced approach is therefore advised: one that identifies the use cases where employees may use generative AI and defines the conditions and circumstances under which such use is allowed.

The Time to Act is Now

The growth of ChatGPT and other generative AI tools is not slowing down. As these tools become more integrated into our professional and personal lives, the importance of addressing the associated risks becomes increasingly critical. Companies must be proactive in understanding these risks and implementing strategies to mitigate them.

In the face of potential future regulations and the evolving nature of AI technologies, it's essential for organizations to stay informed and adaptable. As we navigate this new landscape, the key to harnessing the power of generative AI tools lies in striking a balance between leveraging their potential and managing their risks.

The rise of ChatGPT and other generative AI tools presents both opportunities and challenges. By providing clear guidance to employees, transparency with consumers, understanding the associated risks, and implementing robust policies, companies can navigate this new landscape responsibly and effectively.

Click here to download our cheat sheet with guidance on addressing specific areas of risk.
