July 8, 2020

Volume X, Number 190


Water Bears, Artificial Intelligence and Predictable Consequences

Last August, the Israelis crashed a private spacecraft on the moon.

One of the items carried by the craft, and now potentially spread across a small region of the moon, was a container of tardigrades, tiny animals known as water bears, which can become dormant and survive in the most punishing circumstances, including space. Tardigrades have been recorded surviving extreme temperatures, pressures, air deprivation, starvation, dehydration and radiation. It is the perfect creature to colonize the moon.

Why hadn’t you heard about this accident before?

There is enough drama in the world right now without thinking about how we just contaminated our only moon with a pile of everlasting life forms. And tardigrades may have already been on the moon without our help.

According to the Wired article about the accident, "there's no reason to worry about water bears taking over the moon," since they were flown from the earth in a dehydrated state. "Any lunar tardigrades found by future humans will have to be brought back to Earth or somewhere with an atmosphere in order to rehydrate them." Or so they think.

And this is a concern our forward-looking space agencies had not anticipated. Dropping weapons on the moon is illegal, while dropping animals on the moon is not.

The flight of the water bears made me think about how the endeavors of man so often create unintended consequences.

No need to go too deeply into our present national disaster, which could have been at least mostly avoided with adequate foresight and competent leadership. Tens of thousands of Americans die while scores of millions suffer due to lack of planning, recognition and imagination. Consequences also arise from the lack of man's endeavors.

Companies that hoarded terabytes of data believing they were creating a new asset have learned that they have also saddled themselves with enormous liabilities, especially as privacy law develops. Is data the new oil? Oil can blow up your facility if not handled correctly. So can data. Unintended consequences.

And those privacy laws grant individuals a right to see all the information a company keeps about them, but the EU is finding that malicious actors are using this right to gather information about third parties. Privacy protection law leads to important lapses in privacy protection. Unintended consequences.

The Illinois biometric privacy act led directly to the cancelation, in the Land of Lincoln, of the Google Arts & Culture app's Find Me in Art feature, where users could submit a picture to see artworks through history with similar faces. I don't think this is what the Illinois legislature intended, but it was a predictable result of their broad grant of rights and personal enforcement mechanism.

However, despite the centuries of speculation and conjecture since Mary Shelley gave us Frankenstein, I still think we can never overthink the unintended consequences of human-developed artificial intelligence. The risks are too high, and breakthroughs in the technology can come from too many places.

Granted, we have not yet seen general AI as it has been envisioned in fiction, at least as far as I know. The AI and machine learning that exist now are directed toward one function or another, not toward broad problem solving and human-like thinking.

But we already see that in the areas where AI excels – memory, sorting, predicting, differentiating – AI can perform orders of magnitude better than any human. And in tasks that were previously reserved for human intelligence, like chess, go, or weather prediction, AI can be trained to out-compete any human. Tellingly, the go-playing machine that beat humanity's best champion then helped create and train its own progeny, which can beat the original machine regularly.

One of the most concerning platforms for unintended consequences is this simple ability of one machine program to teach others improved methods. We can set a machine intelligence in motion, but never know in which direction it will decide to point future versions of itself. And we may not have a deep understanding of what the second, third or thousandth generation of the machine will choose to do, because humans had little or no input in programming it.
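The generational loop described above can be sketched in a few lines of code. This is a deliberately tiny toy, not a description of AlphaGo or any real system: each "generation" is just a single number, a mutated challenger plays its parent at a simple game, and the challenger is promoted only if it wins a majority of rounds. Even in this stripped-down form, the parent never dictates where the line of successors ends up; selection does.

```python
import random

# Toy illustration of machine self-improvement by generational play.
# TARGET is a hidden optimum that no agent is told directly; agents
# only learn about it by winning or losing rounds against each other.
TARGET = 0.73

def play(challenger, incumbent, rounds=200):
    """Play many noisy rounds; return how many the challenger wins.

    Each round, both agents 'act' by emitting their parameter plus
    noise, and whichever action lands closer to TARGET wins the round.
    """
    wins = 0
    for _ in range(rounds):
        c = challenger + random.gauss(0, 0.05)
        i = incumbent + random.gauss(0, 0.05)
        if abs(c - TARGET) < abs(i - TARGET):
            wins += 1
    return wins

def self_improve(generations=50, seed=0):
    """Run the generational loop: mutate, play the parent, promote winners."""
    random.seed(seed)
    incumbent = 0.0  # the first, human-written "machine"
    for _ in range(generations):
        challenger = incumbent + random.gauss(0, 0.1)  # the progeny
        if play(challenger, incumbent) > 100:  # majority of 200 rounds
            incumbent = challenger  # progeny replaces its parent
    return incumbent

print(self_improve())
```

No human tells any generation where TARGET is; the final incumbent simply inherits whatever its line of ancestors was selected toward, which is the point of the essay's worry: the programmer of generation one has little say over generation fifty.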

In the television show Person of Interest, about how AI can both help and hurt humanity on a grand scale, lead character Harold Finch claims that he created 46 general AI machines, and was forced to kill 45 of them before they escaped the lab or succeeded at killing Harold. He says, “People have morals; machines have objectives.”

I am not predicting that military AI will escape the lab, sneak onto the internet, and try to kill people or take control of our lives. And if that happened, we may have already set in place firewalls to keep the AI confined and an off-switch to render it harmless. And yet, these words are written while the writer is quarantined at home due to an easily predictable and much-forecast biological emergency. If we don't plan for worst case scenarios, we are helpless when we face them.

Right now we tend to require that a human decision be made on the most important AI recommendations, not allowing AIs like military drone targeting systems or consumer lending platforms to make final decisions that will deeply affect people's lives. But unaccountable decisions are proposed by AI every minute of every day, and their number will only increase with time. We need to figure out how to make sure that AI is serving us, and not the other way around.

Keep in mind that Murphy's Law, as we currently know it, was not created as a cheeky bumper sticker, but as an engineering maxim from rocket scientists, repeated to make certain that engineers considered every possible mishap or error before dangerous testing was undertaken. Think through the possible problems and you are less likely to encounter them.

No degree of advanced thinking can rid us of unintended consequences, but exploring a technical problem from every angle and building in safeguards for even the most remote disastrous possibilities can help us create general AI that is useful, but not dangerous. That procedure could also work for building better privacy laws. And for avoiding the release of biological beings on worlds beyond the earth.

Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved. National Law Review, Volume X, Number 114

About this Author

Theodore Claypoole
Senior Partner

As a Partner in the Firm's Intellectual Property Practice Group, Ted leads the firm's IP Transaction Team, as well as data breach incident response teams in the public and private sectors. Ted addresses information security risk management and cross-border data transfer issues, including those involving the European Union and the Data Protection Safe Harbor. He also negotiates and prepares business process outsourcing, distribution, branding, software development, hosted application and electronic commerce agreements for all types of companies.

...

704-331-4910