One Step Closer to Skynet: Artificial Intelligence and Gaming [PODCAST]
Wednesday, February 27, 2019

Steve and Nick examine how increasingly complex Artificial Intelligence and neural networks have been developed using games as the testing grounds. They also interview Pedro Pavón, a thought leader in AI, about the legal and policy implications AI has for the future.

Transcript:

Nick: Hello and welcome to the LAN Party Lawyers Podcast, where we tackle issues at the intersection of video games, law and business. I'm Nick Brown.

Steve: And I'm Steve Blickensderfer.

Nick: And we are your hosts. We are lawyers at the firm of Carlton Fields who represent gamers and companies in the gaming space.

Steve: That's right.

Nick: Today we're going to talk about artificial intelligence in gaming. With us, we're so excited to have Pedro Pavón, Assistant General Counsel at Honeywell and a thought leader in artificial intelligence. But before we get going, we need to remind everyone once again that nothing we say here is legal advice.

Steve: So Nick, when I think of artificial intelligence, I immediately go back to Terminator 2 and Skynet. So...

Nick: You and me both.

Steve: ...when we think about artificial intelligence, dumbed down version, what is it? It's machine intelligence, it's machine learning, and these days deep learning. Examples include IBM's Watson, which won Jeopardy in 2011.

Nick: A lot of people have won Jeopardy.

Steve: Have you won Jeopardy?

Nick: Not yet.

Steve: So Watson's got one up on you.

Nick: In that regard.

Steve: So Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions. But there are other types of AI.

Nick: I think technically it gave questions, right? This is Jeopardy we're talking about.

Steve: Well you got me there, nice zing. There's also Google's AI, DeepMind, which uses convolutional neural networks modeled on the brain's neural networks to teach itself through reinforcement learning. The next question I have, now that we've got "what is AI?" out of the way, is why is AI a big deal for gamers, or better yet, why is gaming a big deal for AI?

Nick: Well, AI's really come a long way over time. It started back in 1952, when they got AI to win a game of tic-tac-toe. Computers won at backgammon in 1979. In 1995, they got Connect Four. In 1997, they got chess. In 2015, they got the ancient Chinese strategy game Go. Heads-up no-limit hold 'em in 2017, and as Steve is going to explain in a minute, AI recently mastered one of my all-time favorite games, StarCraft II.
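[Editor's note: for readers curious what "teaching itself" a game looks like in miniature, below is a toy, self-contained Python sketch -- not IBM's or DeepMind's actual code -- of an agent learning tic-tac-toe purely by playing against itself. It uses a simple Monte Carlo value update with epsilon-greedy exploration rather than the deep neural networks Steve mentions; every name and number in it is illustrative.]

import random
from collections import defaultdict

# Value table: V[(board_state, move)] -> estimated final reward for the player moving.
V = defaultdict(float)
ALPHA, EPSILON = 0.5, 0.1          # learning rate, exploration rate

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def choose(board):
    moves = legal_moves(board)
    if random.random() < EPSILON:                      # explore occasionally
        return random.choice(moves)
    state = "".join(board)
    return max(moves, key=lambda m: V[(state, m)])     # otherwise play the best-known move

def self_play(episodes=50_000):
    for _ in range(episodes):
        board, player, history = [" "] * 9, "X", []
        while True:
            state = "".join(board)
            move = choose(board)
            history.append((state, move, player))
            board[move] = player
            win = winner(board)
            if win or not legal_moves(board):
                # Credit every move with the final outcome: +1 win, -1 loss, 0 draw.
                for s, m, p in history:
                    reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                    V[(s, m)] += ALPHA * (reward - V[(s, m)])
                break
            player = "O" if player == "X" else "X"

self_play()

[The same loop -- act, observe the outcome, nudge the value estimates -- is, at vastly larger scale and with neural networks replacing the lookup table, the idea behind the systems discussed next.]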

Steve: That's right. AI really upped its game when it tackled StarCraft II. It recently played StarCraft II against professional gamers and beat them, and not just beat them, single-handedly beat them...

Nick: 10-1.

Steve: ...10-1 record against pros. Okay? And we're talking about Google's AlphaStar project, which is the first AI to defeat top pro StarCraft II players, Team Liquid's MaNa and TLO. Also notable, in 2017, Elon Musk's AI lab, OpenAI, beat a pro DOTA 2 player at The International, the big esports competition for DOTA 2. So AlphaStar is really fascinating in that, in this particular instance, after the games were played and the metrics were measured, AlphaStar actually had a slower-than-human reaction time and took fewer actions per minute than the pros. And it won by applying a variety of strategies, demonstrating...

Nick: Deep strategies.

Steve: ...deep neural network strategies, demonstrating an understanding of stealth and scouting aspects of the game, pressing an advantage when it had one and retreating from ill-advised fights. It was really fun to watch.

Nick: Now, one of the differences here, and why this is different from some of the other games, is that StarCraft is an incredibly complex game. That's not to say chess and Go aren't. But the difference is that in StarCraft you've got imperfect information, you've got to move the camera around, and you've got a million decisions to make in the course of a game. A game can be five minutes, it can be an hour long, but the decision tree just expands exponentially because of all the options available to you. You're not just picking one move; you're not just moving one piece or placing one tile. And the way they got the AI to practice and get so good was they simulated over 200 years of StarCraft games, just played over and over again. Which, as I understand it, is a little bit longer than StarCraft's actually been out...

Steve: Mm-hmm.

Nick: ...but the computers were able to learn and get better at the game, and they ended up, as we said, beating the pros 10-1.
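[Editor's note: the exponential blow-up Nick describes can be illustrated with a few lines of Python. The branching factors and game lengths below are rough, commonly cited ballpark figures (the StarCraft entry reflects DeepMind's estimate of roughly 10^26 available actions per time step); they are for illustration only.]

import math

def log10_game_tree(branching_factor, decisions_per_game):
    # The tree itself overflows any float, so work with its base-10 logarithm.
    return decisions_per_game * math.log10(branching_factor)

# (approximate branching factor, approximate decisions per game)
games = {
    "tic-tac-toe": (5, 9),
    "chess": (35, 80),
    "Go": (250, 150),
    "StarCraft II": (1e26, 5000),
}

for name, (b, d) in games.items():
    print(f"{name:15s} roughly 10^{log10_game_tree(b, d):.0f} possible games")

[Even with very rough inputs the point survives: the search space AlphaStar navigates is vastly larger than chess or Go, which is why brute-force lookahead is hopeless and why training came from self-play over the equivalent of 200 years of games.]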

Steve: Other reasons why AI is a big deal: you know, we're constantly hearing that AI is going to replace jobs and upset industries. And we're obviously working in one where AI is going to threaten jobs, and that's the legal industry, with legal research and other things that AI can increasingly do, and the same is true in the medical field. So this AI beating StarCraft II pro players is a big deal in the sense that it signals a turning point for AI and how it can problem solve, because StarCraft II is about complex problem solving in an environment with imperfect information, where there are lots of objects and choices and micro-actions and fast-paced human-versus-AI play.

Nick: And if it can do that, then logically it would pave the way to unlock novel solutions outside of gaming, to serious real-world problems in science, in politics, in climate, all sorts of things you wouldn't have expected AI to handle, at least not long ago.

Steve: And I would be remiss if I didn't mention, before we move on, other instances of the video game industry getting in on artificial intelligence, like Nvidia partnering with Baidu, the Chinese company, to use AI to create an autonomous car platform for automobile manufacturers. So at this point I'd like to switch gears, and I am so pleased and happy that we have for our interview today Pedro Pavón, who is Assistant General Counsel at Honeywell and who has practiced in the AI and IoT space for several years. So Pedro, welcome to the podcast. Why don't you just tell us about yourself?

Pedro: Hey, what's up guys? Thanks for having me. Well, I think you said all of the relevant pieces: I'm an attorney, I've been working in this space for some time, mostly in-house, but I did spend a few years at a law firm before that. You know, a lot of lawyers out there say they work in AI, but what does that really mean? Well, for me it means I get to work with development teams and product teams that are building AI capability on the back end and creating functionality on the front end. And that's fancy talk for really smart guys who know a lot of math and create solutions based on machine learning, artificial intelligence, and neural networks. So that's what I do, and I think that's why you invited me here, so thanks for having me.

Steve: That's right, thanks so much for joining us.

Nick: Yeah and that sounds really exciting to me; I do not know a whole bunch about AI so can you tell us what most excites you about the current state of artificial intelligence?

Pedro: Well it's funny because I don't have a technical background, I'm a lawyer and so if you asked that question I think to an engineer they might answer differently than I'm going to, but what excites me the most about the current state of AI is just how much attention it's finally getting. You mentioned in your remarks, I was listening in earlier, that AI, the math for AI has been around since the 1950s, but it's really been in the last couple of years that AI has hit the mainstream in a big way meaning there are a lot of products out there advertising that they're driven by AI and there's a lot of consumer interest in using the technology to make their life easier. So for me what's exciting about it is that we're finally going from a kind of technical engineering exploration phase to an adoption phase meaning people are interacting with AIs everyday whether they know it or not. But that's what I find most exciting.

Steve: So if that's exciting to you, what most scares you about the current state of AI? I'm interested to hear your thoughts on this.

Pedro: Yeah, well it's funny, because the same thing that gets me excited is also a little scary, right? There's widespread adoption, and when we think of AI in the general sense, we think of solutions like Amazon's Alexa or Siri, or in the case of this podcast, how AIs are being used in video games. But there are some other, more nefarious applications that are also starting to get some attention, and one of them to me is lethal autonomous weapon systems. You talked about Terminator 2 at the beginning, well, that might not be a reality yet, but...

Nick: Yet?

Pedro: ... the capability already exists for autonomous weapon systems to make battlefield decisions without human interaction and that's a little scary to me. I think autonomous vehicles present a lot of great benefits but also will create some confusion when they're adopted more widely and so the speed at which AI is getting adopted in some use cases I think is exciting and going to be fantastic. In other use cases I think we need to be a little bit cautious and make sure we don't put something in play that gets out of hand quickly.

Nick: That autonomous weapon thing sounds to me a little bit like StarCraft, right?

Pedro: Yeah.

Nick: You're going to see these battles, but they're going to be taking place from the wrong perspective.

Pedro: Yeah, yeah.

Steve: Maybe the same program -- AlphaStar -- could be put into a tank to know what to do, where to go, and where to shoot. It's a very scary proposition.

Pedro: Yeah. I think the technology exists, now it's just a matter of trying to figure out what the right thing to do is and, as you guys know, the right thing varies depending on who you ask.

Nick: And that's why you need lawyers, right Pedro?

Pedro: I suppose.

Nick: So tell us, Pedro, what's the last video game that you've played, and how do you think you'd do against AI playing that game?

Pedro: I play a lot of Forza, it's a racing game on...

Nick: Yeah.

Pedro: ...Xbox. Forza Horizon is the last game I played. I'm actually sitting pretty close to my Xbox now and played it last night. How do I do against the AI? Well, I can tell you that the mini-AI built into the game can kick my butt if I put it on an advanced or highly skilled setting, but I tend to do better against the AI than I do against really skilled kids and human beings who are amazing at that game. But I play a lot of Forza and it's fun.

Nick: So we as humans still have the advantage in that respect, at least for the time being?

Pedro: Well, at least insofar as what Microsoft has decided to build into the game, right? I mean, what you guys are talking about with the topic today is a little bit more complicated, because someone really built an AI to beat humans. I think a lot of the AI built into video games now, while yes, it's there to make the game competitive and challenging, also kind of helps you learn the game, right?

Steve: Right.

Pedro: I don't think that's the purpose of the AI you guys are talking about.

Nick: So what do you think, if they put the fancy AI, the deep learning neural network, on Forza and put that up against some of the best, top esports players and televised it, who do you think would win? Are we still ahead, or would we see something like the StarCraft II situation?

Pedro: Man, I have to tell you, in my experience just watching the development of AI across all sorts of use cases and applications, when an AI gets its task right, meaning it understands what its goals are and it has enough data and enough processing power to pursue those goals, usually, if the goals are simple enough, it will beat humans. And I think you gave a bunch of examples in a video game context, but there are more, like office assistants that can just do things more quickly than people, and even in the medical field, you know, AI assistance in surgery and other medical applications is just faster and better and makes fewer mistakes. So if you cranked up an AI on Forza and trained it to race those cars really well, my suspicion is that it'd be really tough for the very best folks to beat it. But I guess they'd have to race.

Steve: That's why we watch the competitions.

Pedro: That's why they have to race.

Nick: It sounds to me like driver-less cars, which I know...

Steve: I know.

Nick: ...are supposed to be on the way someday soon too.

Pedro: There are driverless cars on my Xbox, I can tell you that right now.

Steve: Well, Pedro, what does the AlphaStar project -- and I'm not as read into the artificial intelligence community and industry on the engineering side of things as you are -- what does the AI beating the pros at such a richly complicated game like StarCraft II tell you about the development of AI and where we are? Did we turn a corner with AI in that regard? Is that like a Skynet kind of thing [for cyber warfare], where it's a brave new world for AI?

Pedro: Yeah, you know, I'm not so sure. And I think the reason for that is that in the specific case with AlphaStar ..., from my understanding, the AI had the ability to see the entire playing field the entire time, and that's not something that the human could do. And so if they're playing under a different set of rules, and the rules give AlphaStar some advantage, I'm not sure that's fair, right? And it seemed to me that when they made AlphaStar scroll around and only see the same amount of the battlefield that the humans could, the humans did much better. So insofar as whether we have, you know, turned that corner, I'm not sure.

Steve: Mm-hmm.

Pedro: But what I can say is, again, when you give an AI a task, if you give it enough data and enough time to learn from that data, and it has the processing power it needs to fulfill its goals, it's going to do it at a really high level. And the fact that we're talking about a primitive AI, or let's just say an alpha-stage AI -- I'm sure that's why they call it AlphaStar -- a first-level development of an AI in this particular application, already beating the masters...

Steve: Mm-hmm.

Pedro: ...not just your ordinary players, that tells me that the capabilities after maybe some more tweaking of the AI are going to be tremendous. So I'm not sure that we've, you know, crossed the event horizon if you will but I think we are definitely on the path to see AI, in particular in the gaming application, far exceed the abilities of even the best human players. And I don't think that that's too far off.

Nick: Yeah. Well I'm glad to hear that we're not quite at Skynet yet.

Pedro: Yeah, I don't think so.

Nick: What is the limiting factor here? Is it available data, is it processing power, is it just that it's prohibitively expensive to run this type of thing? What's holding it back? Why aren't we there yet?

Pedro: Well, I don't think we're near Skynet. If you think about what something like Skynet is -- this sensational artificial intelligence that is autonomous and has general intelligence, meaning it can conceive of and consider multiple issues at the same time, problem solve them all simultaneously, make predictions about behavior, and do all of these things -- you know, we're nowhere near a technological development of that level, like a general superintelligence; we're several scales behind that. What I think we're going to continue to see is AI become more effective and more efficient in single-track uses, very narrowly defined goals with very clear objectives. The AIs are already faster than us at most of these things when they're simple tasks, and we will be able to add some complexity.

You asked what's holding us back. I think it's a bunch of things. One, I just don't think we have the substrate, meaning I don't think the silicon chips that all of our computers run on have the capability to process as much information as our brains do in a second or a millisecond, and I don't think that's going to happen any time soon. So I think processing power is a limitation. And then, the other thing is human creativity is a limitation. At the end of the day, humans are the ones building these AIs. When we get to the place where AIs are building other AIs, meaning you develop an AI that is really good at building artificial intelligence capability, I think that's when we'll see things scale because, like I said before and you guys said it at the beginning, AIs are going to be much faster and more efficient than us at those types of activities. So when we have an AI that can build AI, that's when I think we'll turn that corner.

Steve: That's when I move to the mountains of West Virginia and bunker down for the apocalypse.

Nick: Give me a call when you do that.

Steve: Last question Pedro, what would you say as a lawyer working in AI are some of the biggest challenges from a legal perspective to the development or implementation of AI?

Pedro: Yeah, I think there are a few. The first one is that right now, like I said, I think we are in an adoption phase, not so much a development phase. I mean, they're both happening, but we're really turning the corner into adopting technology that's been around for some time. However, applying the current legal framework to some of these AI use cases is proving to be pretty difficult. You want to talk about stopping development? Well, there was a car accident in Arizona a while back where someone was killed, and that stopped autonomous vehicle research in Arizona.

Nick: I read about that, I remember that.

Pedro: Yeah, a woman was killed in an accident with an autonomous vehicle, and essentially the vehicles had to come off the road. That's not going to help development. Now, hopefully we don't see any more tragic incidents, but there are several other examples out there of AI in a development phase that had to be halted because something really bad happened. Now, we can move beyond the actual accident and say, okay, there was this accident that harmed this person and it was an autonomous vehicle; well, if the autonomous vehicle is at fault, who do we hold responsible for the accident? If two humans are in a car accident with each other, we have an entire legal framework, developed since the invention of cars, that helps us determine who is liable, what the consequences are, and what you might owe somebody. There's an entire insurance industry built around it. None of this exists in the context of autonomous vehicles or autonomous machines, and I think we're going to have to do some really hard thinking there.

Another area where I think the law and AI are in some tension is, going back to the beginning here, lethal autonomous weapon systems and the application of AI in the military. In the context of the battlefield, the rules of engagement for war are complicated and have been around for some time, and incorporating these technologies, which nation-states and rogue actors will no doubt use going forward, is going to be tricky. So those are two areas that come to mind for me.

Another big one that is front and center, and I saved it for last because it's the one I think about the most, is how all of this is going to affect our human rights. And I'm talking about the human beings here.

Nick: Yeah.

Steve: Mm-hmm.

Pedro: And I'm a privacy lawyer, well, at least I try to call myself one, and I think about all the privacy implications of the increased use of AI technology in our everyday lives. We know for sure that to build a good AI, and for it to be helpful, it needs to collect and analyze a lot of data. And in the video game context, you mentioned that the [AlphaStar] AI studied 200 years' worth of gaming to get really good at beating people. Well, you know, let's change it to a medical context, where AIs have to have access to sweeping amounts of medical data to become excellent, or even functional, at a task. The privacy implications of that are very significant, and you don't need to look any further than China, which is essentially building a surveillance state that monitors its citizens 24/7 to advance its AI capabilities, largely in the context of security and for political reasons, but also to create cool, fun commercial applications that people love. The challenge is that once the data has been processed and the AI has drawn signal from that data, you can delete the original data and the AI still has this cool capability, but the AI can't forget the things that it learned.

Nick: Right.

Pedro: So if it collects a bunch of data about me and then it decides that I'm at high risk for, you know, let's say cancer, you could delete all the data that it used to reach that conclusion, but the conclusion has been made and now it's out there. And that's not necessarily my data under the current legal framework, and it will be interesting to see what companies and governments are allowed to do with the insights that AIs are going to bring to the table in the coming years. So privacy is a big one for me...

Steve: Yeah.

Pedro: ...lethal autonomous weapon systems are as well.
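[Editor's note: Pedro's point that a model keeps what it learned even after the underlying data is deleted can be shown with a toy, entirely hypothetical example. The "patient" records and risk scores below are made up; this is a sketch of the concept, not of any real system.]

# Hypothetical records: (risk_factor_score, developed_condition)
records = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

# "Train": derive a simple decision threshold from the data.
negatives = [score for score, outcome in records if outcome == 0]
positives = [score for score, outcome in records if outcome == 1]
threshold = (max(negatives) + min(positives)) / 2     # the learned parameter

# "Delete" the raw data, as a privacy request or regulation might require.
del records, negatives, positives

# The learned conclusion survives deletion and keeps generating new inferences.
def predict_high_risk(risk_factor_score):
    return risk_factor_score > threshold

print(predict_high_risk(0.75))   # True, inferred from data that no longer exists

[Real systems learn millions of parameters instead of one threshold, but the asymmetry is the same: deleting the inputs does not delete the inferences.]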

Nick: I'm guessing they don't have HIPAA in China, do they?

Pedro: You know, they've got a lot of things in China, but I don't think they do.

Steve: Well, if we can take anything away from this conversation, I would say AI is not going anywhere, it is here to stay, and we have to figure out a way to put regulations in place, or maybe rethink the regulations we have and how we're implementing AI. And obviously, Pedro, the privacy challenges that you've identified are huge. We're already struggling to figure out privacy without AI, and then you add AI to the mix and it's just super complicated. But that's why you need lawyers.

Pedro: Yeah, I'm here, we'll help.

Nick: Well thank you so much for being with us today, that's fascinating, we appreciate your insight on these cutting edge issues.

Pedro: Hey guys, thanks for having me.

Steve: Thank you Pedro. That's all we have for today's podcast. Be on the lookout for other podcasts on season one of LAN Party Lawyers, and until next time...

Nick: Game on.

Steve: ...game on.
