AI Takes the Stand: Speaking of Litigation Podcast – Episode 2
Tuesday, June 27, 2023

New episode of our podcast, Speaking of Litigation: From chart-topping AI-generated rap songs to employment screening tools, artificial intelligence (AI) is not only a societal phenomenon but also a growing legal dilemma.

Trial lawyers around the globe are focused on the emergence of AI-related disputes in and out of the courtroom.

Epstein Becker Green attorneys Teddy McCormick, Jim Flynn, and Ali Nienaber illustrate the influence that AI has on litigation, employment practices, music, and more.

[00:00:00] Teddy McCormick: Today on Speaking of Litigation, we will be discussing whether using AI technology could get you sued. Hello everyone. I'm your host, Teddy McCormick. I'm an attorney in Epstein Becker and Green's Princeton, New Jersey office. I'm part of our litigation practice. The inspiration for today's episode comes straight out of the news.

[00:00:20] Teddy McCormick: ChatGPT was released at the end of 2022, and it seems like AI is pretty much everywhere these days. You can't turn around without seeing another article or podcast about people cloning their voices and fooling their bank or their family, or hearing a song like Heart on My Sleeve, which was created using AI versions of Drake and The Weeknd's voices before it was taken down by the record company.

[00:00:46] Teddy McCormick: For us as litigators, this naturally got us thinking about all of the potential risks that may arise from using AI and what impact AI is likely to have on litigation in the future. So today what we want to do is talk about how AI is reshaping the landscape when it comes to the future of litigation.

[00:01:06] Teddy McCormick: But first, a request for you. If you like the information we're sharing today, please subscribe to the show. Speaking of Litigation is available on speakingoflitigation.com, YouTube, and wherever else you get your podcasts. Joining our discussion today is Jim Flynn, Managing Director and Member of Epstein Becker and Green.

[00:01:25] Teddy McCormick: He focuses much of his practice on intellectual property law, including copyright and trademarks. And recently Jim has spent a lot of time thinking about the implications of using generative AI tools to create music, writings, or artwork. Also joining us today is Alexandra "Ali" Nienaber. Ali is an Associate with Epstein Becker and Green.

[00:01:44] Teddy McCormick: Ali is a member of Epstein Becker and Green's AI task force and regularly counsels clients regarding litigation risks associated with the use of AI tools in recruitment, hiring, training, and assessment of employees. So before we dive into our specific areas of discussion, I thought maybe we could spend a few minutes talking about how AI is reshaping litigation in our jobs right now.

[00:02:10] Teddy McCormick: Just one anecdote I wanted to share. I've been preparing for a jury trial and we just obtained a list of potential jurors, over 500 people. We hired a vendor that utilizes AI to put together profiles on each of the potential jurors, including a psychological assessment of each juror based on their social media profiles and other publicly available information.

[00:02:35] Teddy McCormick: To me it was really quite fascinating because the last time I did this was about 15 years ago, and we used a company that relied on people to do this kind of assessment and it was both much more time consuming and more expensive. We were able to get these profiles complete with pictures, psychological assessments, lists of interests, work history, educational background, for over 500 people.

[00:02:59] Teddy McCormick: And I think it took them less than 48 hours. So that kind of blew my mind, and also made me feel very, very old, to be honest. Jim, have you had any recent experiences like that where AI has just completely transformed some function of your job?

[00:03:13] Jim Flynn: So I can't really say that I've had recent experience that has changed it or been as dramatic as what you just described, but I will say that I think it's important at the outset here to say AI is not new.

[00:03:26] Jim Flynn: Certainly ChatGPT and generative AI is something that's now coming in that seems very new, and we have to talk about the implications of it. But litigation's been transformed previously by AI, right? Predictive coding for discovery documents and going through emails is just a form of AI. It's simpler than generative AI, and certainly that transformed all of our lives.

[00:03:52] Jim Flynn: That's pushed into legal research: any kind of free-text search on different databases is a form of AI. So one of the things I try to emphasize in my writings or in discussions with colleagues is that we shouldn't be too afraid of it. We should not feel too old, because we've lived with it for a long time, Teddy.

[00:04:16] Jim Flynn: So when you say, oh, I feel old: don't. We're all learning the new stuff, and we've all had parts of it before. So I think there's a lot going on in how we litigate using AI, and then obviously the substantive issues you talked about, particularly in copyright and elsewhere, where it's being litigated in various fields.

[00:04:56] Teddy McCormick: So you're right. And we are good at adapting and we'll continue to adapt to these new technologies. Ali, you are much younger than both Jim and I, but have you had any areas of your job that have recently been transformed by using AI?

[00:05:14] Ali Nienaber: So I feel like I'm kind of going on the same train as you guys with document review.

[00:05:18] Ali Nienaber: I think that's the latest area where I've seen machine learning having an impact. Recently we had a project where humans went in and trained the system to evaluate what would be most relevant. So we went in, coded a small set as relevant or non-relevant, and then allowed the system to go through the remainder of the documents and pull what's most relevant so that we could more quickly produce it to the other side.

[00:05:45] Ali Nienaber: So that's a really interesting development for us to start seeing, at least with document review. I haven't seen it with motions or anything like that, and I don't anticipate using ChatGPT within the next month or so to do those types of things, but I can see it transforming that in the future.
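For readers curious what the seed-set workflow Ali describes looks like in code, here is a minimal sketch: hand-code a small sample, train a classifier on it, then rank the unreviewed corpus by predicted relevance. The library choice (scikit-learn) and the toy documents are illustrative assumptions, not the actual review platform the speakers used.

```python
# Hypothetical sketch of technology-assisted review: a hand-coded seed
# set trains a model that ranks the remaining documents by relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set coded by human reviewers: 1 = relevant, 0 = non-relevant
seed_docs = [
    "email re: disputed invoice and breach of the supply contract",
    "draft amendment to the indemnification clause",
    "office holiday party rsvp thread",
    "cafeteria menu for the week of june 5",
]
seed_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(stop_words="english")
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the unreviewed documents and surface likely-relevant ones first,
# so reviewers and production can focus on the top of the ranked list.
unreviewed = [
    "meeting notes on the invoice dispute with the supplier",
    "signup sheet for the firm softball league",
]
scores = classifier.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for score, doc in sorted(zip(scores, unreviewed), reverse=True):
    print(f"{score:.2f}  {doc}")
```

A production tool layers active learning, sampling, and validation on top of this, but the core loop, human-coded seeds feeding a relevance ranker, is the same idea.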

[00:06:02] Teddy McCormick: So let’s switch gears. Ali, can you tell us a little bit about some of the litigation risks that arise from employers using AI tools in their recruitment, their hiring, their training and evaluation of employees?

[00:06:13] Ali Nienaber: Yeah, so I think the latest thing that we are really seeing is the use of screening tools, specifically AI screening tools, when the hiring process is going on.

[00:06:26] Ali Nienaber: And that is bringing up some of the basic issues that we always see with employment law, which is discrimination: age discrimination, disability discrimination, race, gender, all of these different things. And the latest case that has brought this to the forefront is Mobley versus Workday.

[00:07:06] Ali Nienaber: Mobley is trying to bring a class action lawsuit, so he is trying to be a class representative. He is an African-American who is over the age of 40 and has disabilities. Within the last couple of years, he has submitted his resume for about 80 to a hundred jobs, and he believes that Workday is being used by these companies he's applied to and been rejected from. His allegations as of now are that Workday's tool is biased in a way that discriminates against individuals in those categories.

[00:07:35] Ali Nienaber: So this case is in the very beginning stage. It was only filed in February of this year, and the latest action on it is that Workday has asked for an extension of time to file an answer. So this is going to be a continuing, ongoing lawsuit that we're going to watch and evaluate as the law goes forward.

[00:08:03] Ali Nienaber: Outside of that, the other area that has made big waves, and a lot of people have probably heard about it, is the New York City law, which I always call the AEDT, and saying that a million times can get you a little bit tongue-tied, but it's the New York City Automated Employment Decision Tool law.

[00:08:31] Ali Nienaber: And the basic focus of that law is requiring that any company in New York City, or any individual who lives in New York City, gets notified if an AI screening tool is used on them. And the fun, interesting thing that I think is going to come out of that litigation overall is whether the vendor or the AI product is actually considered AI.

[00:08:58] Ali Nienaber: Because the law has a very specific definition, which gets into databases and machine learning and different things like that. So you're going to see some companies who are really focused on not being considered an automated employment decision tool, and you're going to have other companies who are going to embrace it.

[00:09:21] Ali Nienaber: We can talk about it more in depth if you would like, or hear what Jim's perspective on it is.

[00:09:29] Jim Flynn: Sure. So regarding the things that Ali's talking about, we certainly see those. I think what they highlight from a litigation standpoint is that you're going to need to be technically sophisticated or have access to people that are, so it becomes an important issue, and I know we'll talk later specifically about some of the things that come into play with regard to your ethical duties as a lawyer.

[00:09:55] Jim Flynn: But a few years ago, before the AI craze, the ABA actually added comments to their model rules saying that some degree of familiarity and understanding of how technology is going to be used has to be built into the very definition of competence we have as lawyers. So when you talk about something like the cases that Ali's mentioning, it's really going to meld: okay, am I technically sophisticated enough to make the argument that I don't qualify as one of these tools?

[00:10:23] Jim Flynn: And then of course, because you probably have to make both arguments simultaneously to then say, but even if I do, I don't violate this employment law or that employment law. So the ability to have a foot in both camps or to have a team of lawyers who can work across practices to really do that is one of the things that's going to really change for litigators because these are becoming much more prevalent.

[00:12:13] Teddy McCormick: And it's true too, that I think in addition to counseling clients with respect to whether to implement an AI tool for screening or something like that, I think that outside counsel in conjunction with consultants are going to be doing a lot of auditing of tools that people may already be using.

[00:12:36] Teddy McCormick: And I think, and Jim, I think we at EBG are already doing that with EBG Advisors. Is that true?

[00:12:44] Jim Flynn: So we do it both through our EBG Advisors consultancy, but we also do it directly with our attorneys, depending on the nature of the engagement and how much you want privilege.

[00:12:56] Jim Flynn: We can use either side of that. But what's also important to understand in this context is how careful you need to be about language. Because when you're talking about the employment side, bias can be used interchangeably with discrimination under the discrimination laws, but bias as a scientific concept in how algorithms work may have nothing to do with a protected category.

[00:13:22] Jim Flynn: There could be a bias in the AI algorithm or system that produces skewed results that don't necessarily equate to discrimination because it may skew on something other than the protected category. And so even just having a conversation, you can have an IT lawyer or an IT consultant talking to an employment lawyer or an HR person.

[00:13:50] Jim Flynn: And they're both talking about bias, but they're talking about totally different things, right? So just the discipline to actually get a common lexicon becomes very important as you start to litigate these cases. Not only at the end, when you're potentially talking to a jury and it's obviously very important to have your terms straight, but all the way at the beginning, because you can waste a lot of time having a conversation that nobody understands.

[00:14:16] Jim Flynn: You're not talking about the same thing, right? It goes all the way back to the ancient law school case, the ship Peerless, right? We're each talking about two different ships. That's what happens here when you're talking about bias in the AI context, you have to be very careful in your terminology.

[00:14:35] Teddy McCormick: That's a great point. Aside from the litigation risks associated with using AI in the employment context, one of the things we've been seeing a lot in the news recently is AI tools used to create music and writings and artwork, which creates a whole host of intellectual property issues.

[00:14:55] Teddy McCormick: And Jim, I know you've written a number of articles about this subject and given it a lot of thought. Do you think AI-generated works may eventually be eligible for copyright protection?

[00:15:07] Jim Flynn: I guess the simple answer is yes. But being a lawyer, I like to quibble, so my answer is that AI-generated works, I think, will eventually be entitled to protection, but it might not be called copyright.

[00:15:22] Jim Flynn: Right now the copyright statute, at least in the United States, requires a human element. The definition of author or how it's referenced in the statute means a human being. And even before AI, I've been involved in cases about whether you can copyright something that you say was dictated by a spirit or a god.

[00:16:12] Jim Flynn: And I've actually had that as a real case. And so it becomes a very interesting process. I do think, however, that there will come a time when AI-generated works will be protected. It could happen either because the copyright statute changes and gives a definition of author that's more expansive than the current understanding.

[00:16:12] Jim Flynn: Because right now it just refers to author. But the copyright office has developed regulations that say that requires a human being. I don't envision the copyright office changing those without a statutory change, but there could be other forms of protection that could be established. So for instance, on an analogy to the patent field, in patent law to get a patent, you're supposed to invent something.

[00:16:48] Jim Flynn: You're not supposed to just find something. So if there's a natural process, normally under patent concepts, you wouldn't be able to patent it. But what people found, particularly in the concept of genetically modified plants or other organisms, or in some things in the pharmaceutical or health field, it might be a natural process in the body to produce insulin, let's say, or to produce some other substance.

[00:17:21] Jim Flynn: And when someone does all the research to figure out how to do that or replicate it or isolate it, there was a time when you couldn't patent that, despite all the investment that was needed. And I think what will eventually happen on the intellectual property side is that when we get past what I'll call the advocates, which I'll explain in a minute, and get simply to the business people, there will be a market imperative to require that somebody be able to protect this stuff.

[00:17:54] Jim Flynn: I mean, right now the cases we hear about are, for instance, those brought by a guy named Stephen Thaler. He is a person who creates AI processes and inventions and software. He keeps trying to get, under the current law, the AI machinery or software or process recognized as the author, and he is battling up against the current law.

[00:18:25] Jim Flynn: At least in the United States, he’s lost. He's also lost in some other places overseas. And I know there's a, he tried to get a picture that was generated by AI, which was called the Opening to Paradise, and he's trying to get that copyrighted. He's also tried to patent certain inventions that have the AI listed as the inventor.

[00:18:51] Jim Flynn: I think we'll get past that when it's not about creating that buzz and it's just about, hey, I want to be able to sell posters of this picture and I just want to make money. There's going to be a market imperative where somebody's going to lobby Congress and figure out how to give that a protection. Are they going to call it copyright? Maybe.

[00:19:10] Jim Flynn: Maybe they're going to call it something else. But there will be a protection. I think the other issues we're going to see that you mentioned, and we're starting to see with this creation stuff, is it's one thing if you create it just from what you say, or what you feed into the AI process, but the way AI works is it reaches out and takes elements and sort of synthesizes things from thousands and thousands of other images.

[00:19:40] Jim Flynn: So we already see lots of litigation about, hey, are you copying my images and using them in your process without paying me a license fee? So in some sense we're going to get the portrait or picture version of what we used to see with, years ago, with Napster and before they figured out all the music stuff.

[00:19:58] Jim Flynn: We're also going to see litigation, and this will come in with some other forms of litigation as well, around the right of publicity. Because if I start copying things, and it appears that this is now a duet being sung by two people who never appeared together before, am I robbing them of their image, even if I use a totally new work and a new song?

[00:20:21] Jim Flynn: So it's not copying their song, it's just portraying their voice or their image, and that's protected under the laws of various states, either at common law or, as it is in New York, under Sections 50 and 51 of the New York State Civil Rights Law. So there's a lot of litigation that's going to come out on the IP side, and it already is.

[00:20:46] Jim Flynn: There are also people just trying to make money off of it. Getty Images, as an example, may be suing to stop people from copying, but some other database of photographs is signing contracts to license all their stuff so people can use their images to pull into their AI creations.

[00:21:08] Jim Flynn: So that's going to be just an interesting thing to watch. And the copyright office has said they're going to continue to study it.

[00:21:17] Teddy McCormick: What you say is so interesting and so true. Speaking for a moment about artists and using AI technology to create songs and things like that.

[00:21:31] Teddy McCormick: There was an article, I think it came out yesterday or today, where Ice Cube was very anti-AI, and I think he even called it demonic. And then in the same article it talks about Grimes, and how Grimes was embracing it and trying to license it and do things like that.

[00:21:52] Teddy McCormick: Let's talk a little bit about artists using these types of technologies to create songs. First of all, have either of you listened to the Heart on My Sleeve song? I will confess that I have not. Ali or Jim, have either of you had a chance to listen to it?

[00:22:10] Jim Flynn: Yeah, I've listened to that and some other songs that have been generated through AI. And it's interesting. I mean, it's like other deep fakes that you see, I mean, if you've seen some of the things that they've done with Tom Cruise as an example, where it's not Tom Cruise speaking, but they manipulate it and you think it's Tom Cruise making those statements.

[00:22:37] Jim Flynn: Yeah, some of it is very hard to differentiate from the real thing. And that's why I think that ultimately cases on this right of publicity are going to really mushroom if people try to do that without a license.

[00:22:54] Teddy McCormick: Ali, did you get a chance to listen to the Heart on My Sleeve song?

[00:22:58] Ali Nienaber: I did. I actually played it for a couple friends to see if they would catch on, and neither of them did. And they both admitted that if they were listening to it in the car, they'd be like, oh, that's a Drake song, and then just not think anything of it. So it was really interesting to see that perspective of, most people wouldn't recognize it, nor think twice of it, especially when it's on a streaming service.

[00:23:23] Teddy McCormick: I think that's a hundred percent true. And I was, in preparing for today's podcast, I was listening to, I think it was another podcast where the host did sort of a fake Kanye West rap. And it was interesting because when I was listening to it, knowing that it was fake, it sounded slightly fake to me, but I think there was a bias there because I was watching it at the same time.

[00:25:12] Jim Flynn: There's going to be some people, like some of those you mentioned, who are going to be very anti-AI and consider it demonic. And then there's going to be others who are going to want to exploit it for artistic and for monetary reasons. And so it's interesting, particularly in the wake of the recent Supreme Court decision regarding Andy Warhol, right?

[00:25:33] Jim Flynn: A lot of people look at Andy Warhol as a genius and someone who took elements of others' creations or others' images and took them to a new place, right? Others say, no, he copied, and he owes for all this stuff. So we do have analogs in the past for this, right? So under the New York Civil Rights Law that I mentioned previously, there have been cases where somebody in a radio commercial used a voice double, somebody who could sing like Bette Midler or whatever.

[00:26:11] Jim Flynn: They didn't say it was Bette Midler, but they used an iconic song or something like that. And sang the song and associated with their product, and people could come in and sue. So you're going to see some evolution of using laws like that to go after what gets created in AI.

[00:26:30] Jim Flynn: But I would have to think there'd be a booming market for AI-created things. Who wouldn't want to hear… we always look backward, right? So we could get a current artist singing oldies, but AI could let you take an old artist and have them sing the new stuff. There's got to be a market for that.

[00:26:57] Jim Flynn: And there's going to be lots of litigation about it. But, and that's why I said earlier, when we get past some of the advocates who are trying to make a point about AI and just get to the place where people are trying to make money from AI, that's when the law will sort itself out better, I think.

[00:27:14] Teddy McCormick: I think you’re a hundred percent correct and you make a great point about analogs.

[00:27:18] Teddy McCormick: I think we do have something of an analog in the sampling industry. And, I could be incorrect about this, but when this first started, sampling was very big with rap songs going back 20, 30 years ago. That generated a lot of litigation, and then once that sorted itself out, it generated a model where people pay.

[00:27:41] Teddy McCormick: If you're going to sample an old Temptations song in your new rap song, you're going to pay a licensing fee. I mean, there are still disputes that arise with respect to whether someone was actually using a riff or a hook or something like that. But I think the market will sort it out, and there will be a market for that. And I think it'll be amazing to hear someone like Frank Sinatra do a modern song or something like that.

[00:28:07] Jim Flynn: No, I agree a hundred percent.

[00:28:09] Teddy McCormick: So one thing I'd be curious to get your take on, Jim, is how fair use applies to model training in generative AI.

[00:28:17] Teddy McCormick: And I don't know if that's something that you've given any thought to or had an opportunity to evaluate.

[00:28:22] Jim Flynn: Yes, I have thought about it and written about it, and it's implicit in a lot of what I've already said. I tend to think that some of the image holders, particularly the aggregators of images, are being a little unrealistic in the position that they're taking in terms of just trying to shut it down.

[00:28:50] Jim Flynn: I think that AI will overtake that as a market reality. That being said, they shouldn't be left with nothing. But I think a negotiated resolution of an appropriate licensing fee for the database is more likely the better economic and legal outcome than anything else. It's one thing to talk about those concepts simply in the artistic area, but the model training actually takes us to a whole other area of privacy rights and of health care and health care imperatives.

[00:29:30] Jim Flynn: Because while somebody could go through a gazillion images to try to create A Recent Entrance to Paradise as a painting, it's also the way that these models learn how to more efficiently do mammography and figure out what is evidence of breast cancer, or what is evidence of other gastrointestinal issues, because they're doing a lot of this with things revolving around the liver and other testing. And that's all done by somebody feeding them a lot of data and a lot of images from CT scans or MRIs or all kinds of other things.

[00:30:24] Jim Flynn: And so we're going to have to get a handle, not only as lawyers but as a society, on how we do this and make sure that there's the right access to the data that allows that. And frankly, it's not very different from some of the challenges we have in the electronic medical records area, where we're just trying to make sure that people, and not only people but systems, can talk to each other in a way that's ultimately most beneficial for most of us.

[00:30:59] Teddy McCormick: A hundred percent. So a lot of really interesting stuff on the horizon in the IP area. So another topic that I think we should cover is sort of the ethical considerations for lawyers and for consultants who are using AI.

[00:31:16] Teddy McCormick: For instance, an associate at a law firm uses ChatGPT to write a legal brief or memo. This could give rise to a number of potential pitfalls. Ali, do you want to tell us a little bit about some of the potential ethical and other considerations that could arise from using a tool like that?

[00:31:49] Ali Nienaber: Yeah, so I think the first big thing is that you're sharing client information in there, and as attorneys we obviously have a duty of confidentiality. That's, I think, one of the main issues. The second issue that I really see, and a big part of my job as an Associate, is double-checking citations.

[00:32:11] Ali Nienaber: When opposing counsel comes in with something, I always double-check to make sure that what they've quoted is actually correct, and that the information in there is not helpful to my side in any way, because if it is, I'm going to quote it. And I'm going to make sure to point that out to the court. So I don't want to be using ChatGPT or Bard or any of these other systems that have been known to make up fake citations or have hallucinations.

[00:32:39] Ali Nienaber: So I think those are my two biggest concerns with using ChatGPT or Bard to draft a motion overall.

[00:32:48] Teddy McCormick: Those are great points. And Jim, for you as the Managing Director of the firm, what about implementing ChatGPT and Associates using it? What are the things that keep you awake at night and give you agita about the future, this brave new world we're all entering into?

[00:33:07] Jim Flynn: So, echoing my earlier comments, I'm actually pretty excited about it. I don't have a lot of agita. I mean, it will be a process and a learning curve, doing it and doing it the right way. And we have our quality assurance committee on it: we put out some preliminary guidance, but we have them actively looking at more permanent guidance on how to do it.

[00:33:32] Jim Flynn: I think Ali identified the two biggest issues, which are confidentiality and accuracy. I find it very humorous that a mistake made by a machine is called a hallucination, because it's already ascribing a human characteristic to it, right? It used to be called a bug or just a short or a problem.

[00:33:59] Jim Flynn: Right now it's given that kind of existential content by calling it a hallucination. Obviously that's really important, but again, it's a difference of degree. There certainly have been, in some legal citation systems, a flag that was or wasn't there, that got missed and you always had to take a little extra step or the like.

[00:34:26] Jim Flynn: I think the biggest thing is, you've seen publicity around certain law firms that have made substantial investments in going to one of these generative AI providers and having them develop a system that, rather than just going out to the internet to get the information, will go to the firm's own system and get the information there.

[00:34:50] Jim Flynn: And the advantage there is that, hopefully, your internal system has a lot less of the unverified stuff that might be on the internet. So it should reduce hallucinations and hopefully be more accurate. One of the things I think about in my management role is when the right time will be for a firm like ours to do something like that.

[00:35:11] Jim Flynn: Because clearly there are a lot of kinks to be worked out and the like, and the more competition you get among people doing it, the more likely the products improve and the price comes down. So I don't necessarily see us wanting to be at the forefront of that and make that sort of frontline investment.

[00:35:40] Jim Flynn: But I do think, as we often do, we'll let the market figure out what seems to be the right place and move toward that in an appropriately aggressive way. What compounds that for me is that I think it's a reality: people are going to use some of these tools, and I'd rather have guidance that allows them to use them safely than rules that they won't follow.

[00:36:10] Jim Flynn: Rules where I say you can't use this at all, and somebody uses it behind our back. I want to avoid that. Because frankly, from a litigation perspective, if I was thinking about representing other professionals, be they architects or software coders, we know a lot of coders are using this: write me code for X, write me code for Y.

[00:36:36] Jim Flynn: When we represent them, we want to make sure that even if they use it, they have the quality assurance check process, they have the review process, they have the Alis of the world who are going to go and check the cites, make sure they're not made up, or make sure the code doesn't have bugs.
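The internal-system approach Jim describes is essentially retrieval over a vetted corpus. Here is a hedged sketch of that idea, with TF-IDF similarity standing in for whatever embedding search a real product would use; the documents, question, and prompt format are all hypothetical.

```python
# Hypothetical sketch of grounding a generative model in a firm's own
# documents: retrieve the closest internal matches, then hand only
# those to the model as context instead of unverified web text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_docs = [
    "Firm memo: notice requirements under the NYC AEDT law",
    "Brief bank: opposing class certification in employment cases",
    "Client alert: biometric privacy class actions in California",
]
question = "What notice does the NYC automated employment decision tool law require?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(internal_docs)
similarity = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]

# Keep only the best-matching internal documents as context, which is
# what should reduce hallucinations relative to open-web generation.
top_two = similarity.argsort()[::-1][:2]
context = "\n\n".join(internal_docs[i] for i in top_two)
prompt = f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then go to the firm's chosen model
```

The design choice is the one Jim names: the model's raw material comes from documents the firm has already verified, so the cite-checking burden shifts from the open internet to a curated set.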

[00:38:57] Teddy McCormick: First, I want to say I love your excitement about the future and the use of AI and embracing it. I think that's a hundred percent the right approach. It's not something we can avoid, and we're all going to have to learn to use it. Ali, one thing I was curious to get your take on: I think lawyers often tend to be a little cautious by nature.

[00:39:19] Teddy McCormick: That's why we're lawyers. We want to evaluate everything. We want to analyze everything before we jump in and start embracing these new technologies. It's been my experience, and I'm curious to hear from Ali about this, that my friends in other industries are already using it pretty regularly.

[00:39:36] Teddy McCormick: And, Ali, I was wondering if that has been your experience as well, whether you have colleagues or friends who work in different fields who are regularly using AI as part of their job or profession.

[00:39:50] Ali Nienaber: I haven't heard as many people using it for work related stuff.

[00:39:54] Ali Nienaber: I've heard of it more as just playing around with it, having fun, seeing what they can get out of it, people trying different recipes, seeing what AI can come up with. I had a friend who actually ran her name through it. There's enough work-related information about her out there, and she ran it just to see what the AI would say about her.

[00:40:18] Ali Nienaber: And it came up with some pretty detailed information, though some fake information about her as well. But I would say that, as people keep playing with it in their own personal lives and see what it can do, there's going to be a transition to where you're going to see them using it in their work as well.

[00:40:38] Ali Nienaber: People are probably going to find it easier to research things, write memos, write emails, do those little simple things, even do social media stuff with it as well.

[00:40:49] Teddy McCormick: I completely agree. So one of the areas, and I think it's of particular interest to our firm because we do so much work in the health care area, and in employment as well, is that I think AI and machine learning have the potential to really transform health care by analyzing vast amounts of real-time data and adapting to continuously evolving circumstances. One issue that's come up recently, and Jim, you touched on this: I am all for using biometric information to avoid passcodes. I'm so tired of having 10 million different passwords for all these different things.

[00:41:26] Teddy McCormick: But that comes with its own set of risks. And Jim, you started to mention that there's been some litigation, especially out in California, some class action litigation against companies that have been obtaining biometric information without the consent of the consumer.

[00:41:45] Teddy McCormick: So first, why don't you give us a little explanation of what biometric information is, and then we can talk a little bit about some of the litigation that has grown out of that topic.

[00:41:54] Jim Flynn: So biometric information is information about one's biological makeup and structure. It's usually gathered through systems like a retinal scan or an iris scan; it could be fingerprints, it could be voice analysis, or facial recognition, which is all about face structure and bone structure and different things.

[00:42:23] Jim Flynn: And obviously there are at times certain conditions that may alter some of those things, certainly in terms of facial recognition and the like. But in these cases there are usually three issues. The first is consent: has somebody collected this information without really telling me that they were collecting it?

[00:42:53] Jim Flynn: So call that consent number one. Then: okay, maybe I consented to let you have it, but I didn't really understand you were going to be sharing it with others. Is there a consent issue there? And if either of those consents isn't present, you're likely invading someone's privacy, and depending on what jurisdiction you're in, that could be a statutory claim, a common law claim, or some combination of both.

[00:43:25] Jim Flynn: And then, the third element of it is the compensation one. Okay. I consented so you could have it. I understood you were going to use it with third parties, but I didn't think I was giving it to you for free. Right? And so should I be getting something out of this? And again, there are always analogs to that because now we're talking about digital data.

[00:43:53] Jim Flynn: But in the past we've seen issues with regard to actual tissue samples and, hey, somebody took this culture from me, or they took this tissue sample from me, or just as frequently from my loved one, right? It's part of an autopsy or something like that. And so we're going to have to look from a litigation perspective at how those cases got litigated and then figure out will those same principles and rules apply as we look at the digital world?

[00:44:23] Jim Flynn: Or is digital different because it's not physical in the sense that the tissue sample was, I mean, there's something about the nature of your body and your ability to protect it, where you say, okay, you shouldn't have taken anything from me physically, because it's an assault, right?

[00:44:41] Jim Flynn: If it's not consensual, we all learned in law school, it's an assault. That's a lot different than what's essentially a picture, right? It's not an assault to take somebody's photograph. And so which analogy you go with, and how you argue it, is going to really impact how you deal with some of these things.

[00:45:04] Jim Flynn: And again, AI is maybe not at the forefront of that. It's more about privacy and the collection of data. But the systems that create and retain this stuff, that sort it, and that use it for purposes of diagnosis or the like, are all AI based. And so AI may not be the culprit that violates the privacy, but it's the reason, or it's the motive.

[00:45:36] Jim Flynn: It needs that fuel, right? So, think of it like, okay, I didn't steal the car. The car was mine, but I stole the gas. Because that car's not going to run unless I pour that information into the AI machine. And adjusting to that, figuring that out, is going to be, again, something that's going to be hopefully first discussed, then maybe negotiated.

[00:46:00] Jim Flynn: And hopefully in 98% of the cases, that's where it ends and people reach a resolution. But there are going to be lots of times where people are not going to agree; they're going to fight. And as a litigator, when I say 98% to 2%, that's okay, because those 2% of cases are going to be hard fought. And there are a lot of people out there looking at AI, so when you talk about 2% ending in dispute, that's going to be a lot of disputes.

[00:46:28] Jim Flynn: It's going to be enough for every litigator that's capable of dealing with it to deal with those controversies. 

[00:46:33] Teddy McCormick: What I find so fascinating about what you just said, your overview, is that while the context and the circumstances are so different and unique to us, the issues really aren't. Privacy. Consent.

[00:46:45] Teddy McCormick: Should I be paid? These are issues we've been litigating all of our lives. They're just in a new, very unique context. So, I think the last thing, and this is probably something that's near and dear to all of our hearts as litigators, is jury trials and how we all envision AI changing them in the next five to 10 years.

[00:47:08] Teddy McCormick: Ali, have you given any thought to how, in a jury trial, you might be able to utilize AI to improve your evidence presentation or something along those lines?

[00:47:19] Ali Nienaber: I think it's going to be much easier to run data through and to kind of figure out some more detailed information that would've taken hours for us to figure out overall.

[00:47:31] Ali Nienaber: I also think that AI may present some easier ways to present our evidence overall. So I think that those are the two things that I've been looking at.

[00:47:41] Teddy McCormick: That’s great. And Jim, how about you? What do you sort of see coming down the road in the future next time you have a jury trial? How do you think the use of AI might help you improve your presentation to the jury?

[00:47:54] Jim Flynn: Well, certainly the jury examination and voir dire aspects that you talked about already for jury selection are going to have an impact, particularly if it happens as quickly as you described and the price has come down. So rather than saving the jury consultant for the really, really big case where the investment makes sense, it may come to a price point where you're doing it more regularly.

[00:48:20] Jim Flynn: I totally agree with Ali that it's going to impact the presentation of the evidence. Predictive coding let you figure out what the relevant documents are; generative AI is going to allow you to at least get a starting point, to say: what's the best way to present the similarities or discrepancies between X and Y?

[00:48:52] Jim Flynn: And particularly if we move into systems where we can designate what the field of information is, right? So I don't have to just go out on the internet and say, tell me the difference between X and Y, when I can say, tell me the difference between X and Y as illustrated by these hundred thousand documents, and it's just going to my system.

[00:49:17] Jim Flynn: Again, it's not the end point, but it's going to be a great starting point for the presentation of evidence. I could also definitely foresee some curve ball happening during direct or cross-examination, and I'm sitting at the counsel table with my laptop. I might just throw something into ChatGPT

[00:49:38] Jim Flynn: as far as what I should ask on cross or in redirect. Again, you have to be sensitive to client-specific information, but it's also a public trial at that point, and the testimony is coming out there, so you may have less confidentiality concerns. I've used Google during trials just to look stuff up quickly, whether it's a statute, a dictionary definition, or the location of a particular intersection, if it was relevant.

[00:50:04] Jim Flynn: I've changed closing slides and statements in my PowerPoint as the other person was presenting their closing so that I could react to it. So there's definitely going to be a way, during trial, where I'm going to be banging something into a generative AI just to get a quick idea about something if I'm moving on the fly.

[00:50:30] Jim Flynn: Look, you talked earlier about when we started. When I started practicing law, I didn't have a computer on my desk. I had an AOL email account, but I didn't have a firm email account until I was about a third- or fourth-year associate. So we're going to adapt.

[00:50:47] Jim Flynn: I thought one of the great inventions of all time was Post-its. I mean, when I started out, they didn't even have Post-its. We're going to find a way to use AI. And look, go back to the beginning of email: lots of people said you couldn't use email because it was the equivalent of having attorney-client privileged communications on a postcard, so the mailman could always read it.

[00:51:14] Jim Flynn: Again, the market overwhelmed that and we got encryption. AI's going to find its place. We just need to find the way to get there.

[00:51:25] Teddy McCormick: I completely agree, and I touched on it at the beginning of our discussion, the recent experience I had using it in the jury selection process and analysis.

[00:51:35] Teddy McCormick: The other thing is, I think there's still going to be a place for the human element. Because we have these profiles and, like I said, they're fascinating. They did almost like word bubbles, using words that the jurors they were analyzing had used a lot in their social media and other posts.

[00:51:56] Teddy McCormick: So you would see words like "Italian" or "restaurant" or things like that. And they did emotional analyses, like whether jurors were joyful, angry, disgusted, that sort of thing. And I think that's all really, really fascinating.

[00:52:14] Teddy McCormick: But we are still using consultants when we actually pick the jury. Now, this is a very, very big case, and a lot of money is at stake, so it merits paying more money to do these sorts of things.

[00:52:33] Teddy McCormick: But to your point, the price for doing that first take of the jury pool wasn't nothing, but it wasn't what you might expect, and you could use it for a case that isn't necessarily a $50 million case, maybe it's a couple hundred thousand or a million or something like that. So I think it may kind of democratize trials, and more people are going to be able to use this type of technology.

[00:52:55] Teddy McCormick: So I think that's very, very much a positive. So just sort of closing the loop on all of these things. Jim, what do you think the biggest pros and cons to using AI in the legal profession are?

[00:53:20] Jim Flynn: The biggest pro, when we're talking about generative AI, is that it can be a great starting point and can very quickly generate something for you to begin working with. The greatest con is the notion that, somehow, it's finished. I think you have to treat anything created by AI as hopefully the right material, but it's got to be carved and sculpted and buffed and shined and checked as you go through it.

[00:53:54] Jim Flynn: And so I think the notion that it's going to save lots of time for attorneys is probably a little overblown. It's going to shift time for lots of attorneys: there may be less time needed on the first draft, but there's going to be a lot more time necessarily devoted to the cite checking and refinement. Look, its ability, as we've already seen with predictive coding as an earlier iteration of AI, is certainly going to save time.

[00:54:30] Jim Flynn: But it's going to provide more time for the in-depth analysis, and require it. Again, as it's used presently, if people are just using these kinds of publicly available tools that search the whole internet, you're going to spend a lot of time on the refinement and checking. As we move toward systems that look at our own database or databases, it may be a time saver to a certain extent, but lawyers are always going to face new issues.

[00:55:04] Jim Flynn: And since AI depends essentially on old writings, it's not great there. I mean, I have my own experience with that, and I wrote about it: when I asked ChatGPT to analyze all of the different amicus briefs and tell me what they were arguing in a case that was being argued this term, it couldn't give me a usable synthesis. Then, to test ChatGPT, I asked it to do the same thing with the Obamacare decisions from 10 years ago and analyze all of those.

[00:55:44] Jim Flynn: It gave me a great work product, because so much had already been written about it. And because lawyers are always going to be a little bit at the cutting edge, there's still going to be a role for us, because while ChatGPT can create arguments and filter things, it's not great at the analogs and analogies yet.

[00:56:06] Jim Flynn: You can't say, based on X, tell me what the outcome would be in Y. It's just not as good, and you're going to have to work with it. So I see it as a potentially useful tool. And to me, the biggest downside is that just because it came out of the computer doesn't make it right.

[00:56:25] Jim Flynn: It's making sure people actually remain as dedicated to their craft using this new tool as they did when they were using some other tool, going all the way back to a quill pen. I mean, just remain dedicated to your craft.

[00:56:41] Teddy McCormick: I completely agree, and I think it's going to change some of what we do, but it's going to create new things for us to do.

[00:56:46] Teddy McCormick: So I think there's a lot of opportunity there, but the worry that it's going to replace lawyers, I think is completely overblown. Some tasks, I mean, like we've said before, lawyers aren't necessarily doing maybe as much document review as they were in the past, but they're still doing it.

[00:57:05] Teddy McCormick: We still have people checking it. Even in big cases, it's not like we're relying completely on the technology, we still have people to do that. So, Ali, what is your prediction? What do you think is going to happen with the use of AI to screen employment applications?

[00:57:25] Teddy McCormick: Do you think a case can be made that because people training the systems have their own inherent biases, that such programs are always going to be biased and should be banned? Or do you think it's more likely that this is something that can be improved and should be embraced by employers?

[00:57:43] Ali Nienaber: I think the cat's out of the bag with this; the screening tools are going to continue to be used no matter what we try to do. There was actually a survey done by the Society for Human Resource Management in 2022, and it showed that 79% of those surveyed were using AI tools in recruitment processes, including screenings.

[00:58:07] Ali Nienaber: So I think what's going to end up happening is that the tools are going to be audited in a way to try to eliminate some type of bias or adverse impact. And that's partially going to be done through cities like New York City implementing those laws or state laws as well. So, I don't think they're going away.

[00:58:30] Ali Nienaber: I think hopefully they're going to get better, and if not, then we're going to be seeing more litigation coming out of this area overall.

[00:58:38] Teddy McCormick: I completely agree. So thank you both, Jim and Ali, for joining us today and sharing your insights on this rapidly evolving and incredibly important topic.
