The Nine Greatest Experts on the Internet, NOT! – The Supreme Court Considers the Algorithm in Google and Twitter
Friday, June 16, 2023

“You have the Truman Show versus a horror show,” said litigation legend Lisa Blatt during oral arguments in Gonzalez v. Google. Gonzalez is one of two recently decided Supreme Court cases dealing with whether websites that host user-generated content (UGC) can be held liable for the actions of their users. But more broadly, many viewed the cases as asking the Justices to decide whether we can hold the creators of a technology responsible for the consequences of their creations.

Blatt was remarking on what could happen to the Internet if the Court were to find Google/YouTube liable based on YouTube’s algorithmic organization of UGC. Such a ruling, she argued, would weaken platforms’ safe-harbor protections under section 230 of the Communications Decency Act of 1996 and thereby force YouTube and every other platform built on UGC to choose between exercising high-touch control over that content and callously turning a blind eye to whatever they host. In general, section 230 protects website hosts, including platforms like Google and Facebook, from legal liability for online information provided by their users; it neither imposes an obligation to moderate the content nor attaches a legal consequence for failing to do so.

Regardless of Blatt’s facetiousness, the facts underlying Gonzalez are indeed a horror show: Petitioners are the family of Nohemi Gonzalez, a 23-year-old American college student on an exchange program with California State University, Long Beach, who was one of 130 people killed in the November 2015 terrorist attacks in Paris, for which the Islamic State of Iraq and Syria (ISIS) claimed responsibility. Petitioners sued Google in District Court, arguing that Google should be held accountable because (1) it approved ISIS videos for advertisements and shared proceeds with ISIS through YouTube’s revenue-sharing system and (2) YouTube’s algorithm suggested ISIS recruitment videos to visitors of the site.

The District Court in California, relying on section 230, dismissed the complaint for failure to state a claim, a ruling the plaintiffs appealed. The Ninth Circuit held that most of the plaintiffs’ claims were barred by section 230, but not the plaintiffs’ direct- and secondary-liability claims based on the advertisements and revenue sharing. The Supreme Court granted certiorari.

The question before the Court was whether section 230 immunizes the platforms when they make targeted recommendations through their algorithms, or only immunizes them when they engage in traditional editorial functions, such as deciding whether or not to provide certain content.

As oral arguments in Google got underway, Justice Kagan conceded, on behalf of the Court, that they are “not like the nine greatest experts on the Internet.” But she also wasn’t intimidated by Blatt’s panic mongering that any attempt at regulation would break the Internet. “I don’t have to accept all Ms. Blatt’s ‘the sky is falling’ stuff,” she remarked before explaining that it might be difficult to draw lines when it comes to the Internet, but that doesn’t mean the Justices will let the tech companies do as they please.

As the Justices struggled to understand the technology at issue, it did not seem like they were getting closer to figuring out what or who “the algorithm” is. Is it neutral, like a room full of “hardworking people” (petitioners’ attorney Schnapper’s term) that would just as dispassionately find you the cheapest airfare as direct you to ISIS beheading videos? Or is it sinister, a high-tech recruitment agency in “cahoots” (Justice Sotomayor’s term) with enemies of the state, working to propagate more enemies of the state? Is it friend or foe, or just a line of code? The Court alluded repeatedly to “tomorrow,” when perhaps we would find out.

The next day SCOTUS heard Twitter, Inc. v. Taamneh. The petitioners in Twitter are the family of Jordanian citizen Nawras Alassaf, who was killed in 2017 in an Islamic State-affiliated attack on the Reina nightclub in Istanbul. Alassaf’s family sued Twitter, Google, and Facebook in District Court, arguing that the companies aided and abetted ISIS by failing to control terrorist content on their platforms. The District Court dismissed the complaint, but on appeal the Ninth Circuit reversed, holding that the family had adequately alleged that Twitter, Google, and Facebook aided and abetted ISIS under the Anti-Terrorism Act, 18 U.S.C. section 2333; the panel did not reach the safe-harbor protections of section 230. Twitter sought, and the Supreme Court granted, cert.

The question in Twitter was whether a platform can be liable under the Anti-Terrorism Act for aiding and abetting a terrorist attack because it failed to adequately block UGC valorizing terrorism, even where the platform has a policy barring such content.

The Anti-Terrorism Act allows victims of terrorism to sue in federal court those who have provided “substantial assistance” to acts of terrorism. The family argued, and the Ninth Circuit agreed, that because the platforms did not act aggressively enough to take down terrorist content, they could be held liable for ISIS’s acts under the Anti-Terrorism Act.

The issues before the Court in the two cases are distinct: Gonzalez concerns the scope of the section 230 safe harbor, whereas Twitter considers the validity of the family’s theory of liability under the Anti-Terrorism Act. But the cases crucially overlap; both ask: Can you hold a social media platform responsible for ISIS attacks based on the platform either (1) algorithmically delivering terrorist-related content or (2) failing to moderate such content aggressively enough?

The algorithm is endemic to the Internet. Every time anybody looks at anything online, an algorithm is involved, whether it’s the result of a Google search or a YouTube recommendation. Everything involves ways of organizing and prioritizing material. The algorithm determines whether to show you results for soccer or the Super Bowl when you search for “football,” and whether to show you information on the yellow vests or the Eiffel Tower when you search for “Paris.” The broad questions posed by these cases are urgent as we grapple with algorithmic tools, such as artificial intelligence, becoming sophisticated and ubiquitous at dizzying, exponential speed. Who is responsible for the consequences of this powerful and pervasive technology? People are frightened, and understandably so—just look at the horrific facts of these cases. The Internet is real, and it has real consequences. With this pair of cases, the Justices were tasked with peeking behind the curtain and deciding whether the tech whizzes are doing enough to protect us from what they’ve created.
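To make “organizing and prioritizing material” concrete, here is a minimal, purely illustrative sketch in Python of the kind of ranking logic at issue. Every function name, signal, and weight below is hypothetical, not any platform’s actual code, but it shows the basic shape: content is scored against the query and the user’s history, and the weights that decide between soccer and the Super Bowl are choices someone made.

```python
# Purely illustrative ranking sketch: hypothetical names and weights,
# not any platform's actual code.

def score(item: dict, query: str, user_history: list[str]) -> float:
    """Score one piece of content for one user and one query."""
    # Relevance: overlap between the query's words and the item's keywords.
    relevance = len(set(query.lower().split()) & set(item["keywords"]))
    # Personalization: how much past viewing steers results is a design choice...
    personalization = sum(1 for kw in item["keywords"] if kw in user_history)
    # ...and so is rewarding content that already attracts engagement.
    engagement = item["clicks"] / (item["views"] + 1)
    return 1.0 * relevance + 2.0 * personalization + 0.5 * engagement

def rank(items: list[dict], query: str, user_history: list[str]) -> list[dict]:
    """Order the catalog by score; the top of the list is what the user sees."""
    return sorted(items, key=lambda it: score(it, query, user_history), reverse=True)

# A user whose history leans NFL sees the Super Bowl first for "football";
# a history that leaned soccer would surface the Premier League recap instead.
catalog = [
    {"title": "Super Bowl highlights", "keywords": ["football", "nfl", "superbowl"],
     "clicks": 900, "views": 1000},
    {"title": "Premier League recap", "keywords": ["football", "soccer", "premier"],
     "clicks": 400, "views": 1000},
]
print([item["title"] for item in rank(catalog, "football", ["nfl"])])
```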

This is how they decided.

The Court issued a brief per curiam decision in Google (in Google’s favor) that does not even mention the word “algorithm.” It did not resolve the section 230 question; it expressly declined to answer it. Instead, the unsigned opinion pointed the reader to Twitter, under which, it said, the challenged claims in the Gonzalez complaint appear to fail, and to which the Ninth Circuit should look on remand for answers to any open questions.

Over to Twitter.

The Court reversed the decision of the Ninth Circuit and did not hold Twitter liable. It was obvious from oral argument in Twitter that the Justices were far more comfortable exploring the nuances of “knowingly providing substantial assistance,” as required to impose secondary civil liability for aiding and abetting, than they had been applying the pre-algorithm section 230 to a post-algorithm world, so perhaps this result is unsurprising. At its core, this was classic SCOTUS stuff: the unanimous opinion, delivered by Justice Thomas, spent most of its roughly 30 pages discussing the three-prong common-law test for aiding and abetting established in Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983). The facts of Halberstam are as follows:

Bernard Welch was a burglar who stole and sold antiques. Linda Hamilton was Welch’s girlfriend and lived with him. Hamilton assisted Welch by keeping track of his inventory of stolen goods stored in their basement, performing secretarial work, and accepting checks written out to her by Welch’s buyers. Welch was arrested for killing Michael Halberstam while burglarizing Halberstam’s home. Halberstam’s widow filed a wrongful-death lawsuit in federal district court against Welch and Hamilton. Hamilton opposed the lawsuit, saying she did not know Welch was a burglar and claiming she thought he operated a legitimate antiques business. The district court held Hamilton jointly liable with Welch, finding that she provided substantial aid to Welch’s criminal enterprise, making her a coconspirator and an aider and abettor. Hamilton appealed, and the D.C. Circuit affirmed.

This was the decision the Ninth Circuit relied on to conclude that the platforms could be held liable for aiding and abetting ISIS. Don’t get me wrong: the 19-page discussion was engrossing, sitting as it does at the intersection of murder, jewel thievery, common-law marriage, and accounting. But ultimately the Court concluded that the lower courts misapplied the law. And eventually we arrive at the algorithm.

Justice Thomas begins this part of the opinion by noting that the only “affirmative act” the defendants undertook was “creating their platforms and setting up their algorithms to display content relevant to user inputs and user history.” His attitude is clearly that there is nothing inherently conspiratorial about setting up a recommendation algorithm, and that the recommendation feature does not turn the defendants’ passive conduct into active assistance. The algorithm, he seems to say, is just following users’ directions. He reminds us that the law has long been leery of imposing aiding-and-abetting liability for “mere passive nonfeasance,” and the plaintiffs have not identified any “special duty” that required the platforms to remove the content.

Justice Thomas goes on, “The mere creation of those platforms … is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large … ISIS’ ability to benefit from these platforms was merely incidental to defendants’ services and general business models.”

Is it the same? Aren’t there myriad choices that the platforms make when writing their algorithms that the phone company doesn’t consider? Is Justice Thomas being wise in not allowing this new online context to undermine well-established principles of culpability, or is he being naïve and dangerously so?
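To make those questions concrete, here is a second hypothetical sketch (invented names; a sketch of the concept, not any platform’s actual code) of the recommender Justice Thomas describes, one that displays “content relevant to user inputs and user history.” The same few lines can be read both ways: the matching rule is content-neutral, which supports the “passive” framing, while everything around the rule is a design choice of a kind the phone company never makes.

```python
# A minimal, hypothetical recommender of the kind the opinion describes.

def recommend(watched: list[str], catalog: dict[str, set[str]], n: int = 3) -> list[str]:
    """Suggest the n unwatched videos whose tags most overlap the user's history."""
    # The matching rule itself is content-neutral: it treats cooking clips and
    # extremist propaganda identically (the "passive" reading).
    history_tags: set[str] = set()
    for video in watched:
        history_tags |= catalog.get(video, set())
    unwatched = [v for v in catalog if v not in watched]
    # But which signals count as "history," how tags get assigned, how many
    # results to surface, and how ties are broken are all choices the phone
    # company never makes (the "active" reading).
    return sorted(unwatched, key=lambda v: len(catalog[v] & history_tags), reverse=True)[:n]

# A user who watched only cooking videos is steered toward more cooking videos.
catalog = {
    "pasta-101": {"cooking", "italian"},
    "knife-skills": {"cooking", "technique"},
    "cat-compilation": {"cats", "humor"},
}
print(recommend(["pasta-101"], catalog))  # ['knife-skills', 'cat-compilation']
```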

We’ve been hearing a lot lately about the biases of the algorithm: that it isn’t the impartial email courier, but is actually racist and sexist, like the material it is trained on (i.e., us). And we’ve all heard about the embarrassing errors linked to AI “hallucinations” and general inaccuracy. So can we really say the acts that underlie these cases are “merely incidental”? Moreover, on a fundamental level, an AI chatbot acts “more affirmatively” to direct users toward content, which may look less like Justice Thomas’ passive algorithms responding to users’ inputs and more like Ms. Hamilton’s assistance to Mr. Welch.

Ironically, at the same time as Twitter, Google, and Facebook were fighting before the Supreme Court to preserve their business models and highly profitable algorithms, hundreds of artificial-intelligence experts, as well as tech executives and scholars, were elsewhere warning against the “risk of extinction from AI.” Two days before Google and Twitter were decided, Sam Altman, the chief executive of OpenAI, the maker of ChatGPT, appeared before legislators practically begging for government regulation because of the risks AI poses to humanity. Altman has even said that OpenAI voluntarily held off on training GPT-5 out of caution about where the technology is headed. What is going on here? The lack of consensus among the technocrats alone is unsettling. At best, big tech is a cynical reflection of our most depraved curiosities; at worst, it’s an existential threat to the human race. If we’re on an inexorable course toward AI domination, should we get to enjoy our nasty videos in the meantime? The lower courts did not support this jouissance.

Technological progress is always a double-edged sword, and the answer is not to be a Luddite. We must accept that when we invent electricity, we also invent the electric chair; but it’s the law that sentences people to it. The Court ultimately got it right, because who wants a Truman Show Internet if reality is anything but? But as far as Big Tech being “passive” goes, is anyone convinced? Maybe this branch of government missed an opportunity.

“The algorithms appear agnostic,” concludes Justice Thomas. This remains to be seen.
