Guilty or innocent? In a bench trial, that’s up to the judge. Would it surprise you to know some judges in our country are posing this question to a computer?
Meet COMPAS, a computer-based algorithm that is currently being used in criminal cases to predict a defendant’s likelihood to re-offend. Artificial intelligence is becoming more commonplace in our everyday lives. Now that it’s being introduced in such an important role in the court system, some commentators in the legal realm have pondered the pros and cons of using artificially-intelligent judges in the courtroom to render, in theory, unbiased opinions. As experts on experts, we also got to thinking about the expanded uses of artificially-intelligent experts in the courtroom.
Let’s back up and discuss how Artificial Intelligence is currently being utilized. COMPAS is currently used in criminal cases and serves as a leading example of how artificially-intelligent devices can form decisions based on statistical information. The computer algorithm assesses about 100 factors to determine a defendant’s statistical likelihood of rehabilitation or re-offense.
The algorithm’s assessment, based on factors including sex, age, and criminal history, has been tested and deemed reliable: defendants assigned the highest risk score went on to re-offend at a rate four times greater than those assigned the lowest risk score.
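To make the idea of a factor-weighted risk score concrete, here is a minimal sketch in Python. COMPAS’s actual model and factors are proprietary, so the features, weights, and logistic form below are purely hypothetical illustrations of how many weighted inputs can be collapsed into a single statistical score.

```python
import math

def risk_score(factors, weights, bias=0.0):
    """Collapse weighted factors into a probability-like score in (0, 1).

    This is a toy logistic model, not COMPAS's real method.
    """
    z = bias + sum(weights[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

# Invented weights and factors -- not COMPAS's actual inputs or values.
weights = {"prior_offenses": 0.6, "age_under_25": 0.8, "employed": -0.5}
defendant = {"prior_offenses": 3, "age_under_25": 1, "employed": 0}

score = risk_score(defendant, weights, bias=-2.0)
```

A real system would weigh on the order of 100 such factors, with weights fit to historical outcome data rather than chosen by hand, but the basic shape of the computation is the same.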
Some have labeled COMPAS unfair, claiming the algorithm can be biased and, just like humans, the artificial intelligence forms a prejudice based on past case rulings.
Preconceptions based on experience can help both humans and computers sort and process information more efficiently. But they will occasionally cause us to get verdicts or judgments wrong, particularly when we apply prejudice or statistical reasoning to an anomalous case, i.e., the exception that doesn’t follow the rule.
For example, consider a judge who believes that expert witnesses who have published on a particular topic should have their testimony admitted over experts who haven’t. While that rule of thumb, like the COMPAS algorithm, might frequently result in the judge’s proper admission of reliable expert testimony, there will always be an exception where an unpublished expert offering a perfectly reliable opinion will be excluded.
That would be an error but, according to the Washington Post, it’s a statistical error that cannot be avoided. What can be avoided, if we were to embrace artificially-intelligent judges rendering opinions from the bench, is human bias. A computer cannot be swayed by emotions, politics, money, or power. Artificially-intelligent judges could be programmed with thousands, or likely millions, of legal opinions, teaching them the law and its systematic application to facts the same as, or at least similar to, those at hand, allowing them to render a fair and impartial decision that follows stare decisis.
Analyzing the Artificially-Intelligent
This got us thinking. If an artificially-intelligent judge could be considered an expert on the law because it knows the law and how it has been applied in the past, could we also create AI experts? The input of data, patterns, and millions of real-world outcomes sounds a lot like experience, and experience is the primary characteristic we look for in experts. If it can simply be inputted, then perhaps the experts of tomorrow, whether on the bench or the witness stand, will be artificially intelligent.
Can you imagine, if it doesn’t yet exist, a COMPAS-like algorithm capable of determining whether a merger will have an anticompetitive effect, what the damages are in a copyright royalties dispute, or the most common interpretation, statistically speaking, of mortgage-backed securities and other complex financial instruments? At IMS, we bring our own bias to the discussion: we absolutely prefer the human intelligence, reasoning, and personalities of the thousands of bright minds in our extended family of expert witnesses. For the foreseeable future, machine learning has limited capabilities in storytelling, reading and conveying emotions, and the specific kinds of interpretation that are essential to litigation and even more critical in the interactive environments of depositions and courtrooms.
What do you think? Can AI serve a valuable role in our legal system, and in what way?