We have been keeping a keen eye on the explosion of artificial intelligence (AI) tools and generative AI. We are assisting clients with AI Governance Programs that formulate a process for evaluating AI use in their organizations, encourage safe and reliable use of AI tools by employees, assess which uses are appropriate, and mitigate the legal issues that arise from AI use, including educating employees on the risks these tools pose and how the organization is addressing them. We find that many employees have no idea that their use of generative AI or other AI tools may carry legal risk.
We are also dedicated to educating our readers on emerging issues we think are worth considering, and to pointing you to articles we believe are a worthwhile read.
A new eWeek article, “AI and Privacy Issues: What You Need to Know,” is one such example. It outlines some of the privacy issues to consider “as AI becomes increasingly pervasive in our lives.”
Although the article is aimed at consumers, it is instructive for businesses using AI tools to be aware of what consumer-facing publications are saying about business use of AI and how consumers should respond. This awareness is an important part of an AI Governance Program: consider how your employees and customers will react to your use of AI tools, including what data you are feeding into the AI tool, whose personal or sensitive information may be used in the model for training, whether the business is disclosing personal or sensitive information to third-party AI developers, and what those developers are doing with the data.
The article outlines some of the concerns your employees and customers may have with your use of AI tools; you may wish to consider them when developing an AI Governance Program.