SCOTUS Balks at Tech Liability in Oral Argument for First Section 230 Case
Conservative and liberal Supreme Court justices appeared skeptical Tuesday that a social media platform's inaction in removing terrorist content amounts to aiding and abetting terror plots. The court heard oral argument in Gonzalez v. Google (docket 21-1333) (see 2301130028).
The 9th U.S. Circuit Court of Appeals dismissed a lawsuit against YouTube in June 2021 for hosting and recommending ISIS proselytizing and recruitment videos. The 9th Circuit affirmed a decision from the U.S. District Court for the Northern District of California shielding YouTube from liability under the Anti-Terrorism Act (ATA). The plaintiff in the litigation and SCOTUS petitioner is the estate of Nohemi Gonzalez, an American student killed in Paris in 2015 during an ISIS attack. The petitioner asked SCOTUS to revisit the 9th Circuit's decision.
Justice Clarence Thomas told Eric Schnapper, attorney for the Gonzalez family, that he doesn’t understand how neutral algorithms -- whether for cooking or terror content -- amount to aiding and abetting under the ATA. YouTube and other tech platforms have been repeatedly told by government officials and the media that there’s terror content on their services, and they have done “almost nothing about it,” said Schnapper.
It’s one thing for a website to collaborate with a terror organization like ISIS and create algorithms that maximize that group’s reach, said Justice Sonia Sotomayor; Communications Decency Act Section 230 probably would not protect the platform from liability in that case, she said. But she questioned whether Schnapper made a persuasive argument showing neutral algorithms amount to aiding and abetting. There has to be “intent” and “knowledge” to prove aiding and abetting under the ATA, she said.
Chief Justice John Roberts, like Thomas, questioned whether a platform can be held liable when the algorithms supposedly function neutrally across subject areas, whether it’s terrorist or cat video content. Justice Neil Gorsuch said it’s possible there’s “residual content” online for which an interactive computer service could be held liable. There are examples in which algorithms might contain a discriminatory point of view, he said.
The thumbnails YouTube creates to get viewers to click on videos are the heart of the issue, said Schnapper. Algorithms are used to create these thumbnails, and the platform is responsible for creating this content, rather than just hosting third-party content, he said. The argument is these thumbnails are the same thing as sending a user an unsolicited email saying, “You might like to look at this new video,” he said. The platform is no longer acting as an interactive computer service but as a publisher, he said.
Everyone is trying to figure out how to apply a pre-algorithm statute like Section 230 to a post-algorithm world, said Justice Elena Kagan. But as Thomas pointed out, algorithms are “endemic” to the internet, she said: Everything online involves some form of organizing and ranking content. “Does your position send us down the road such that 230 really can’t mean anything at all?” she asked Schnapper.
'Economic Dislocation'
It seems the petitioners want to challenge Congress’ broad text that courts of appeals have unanimously read over the years to provide protection in this sort of situation, said Justice Brett Kavanaugh. Some amicus filers argued that to pull back from that interpretation would “create a lot of economic dislocation” or “crash the digital economy,” which are “serious concerns” Congress could address, he said: “We are not equipped to account for that,” said Kavanaugh. “And are we really the right body to draw back from what had been the text and consistent understanding in courts of appeals?”
Justice Amy Coney Barrett asked for DOJ’s position on whether YouTube’s thumbnails are technically Google’s content. The U.S. government doesn't think they are, said Deputy Solicitor General Malcolm Stewart. It’s basically the same content whether Google is creating URLs or thumbnails, he said. However, he said the video rankings are the result of choices made by the platform, not the result of the third party, ISIS in this case, posting the content.
It’s “true that many platforms today are making an enormous number of these choices,” said Stewart. “And if Congress thinks that circumstances have changed in such a way that amendments to the statute are warranted because things that didn't exist or that weren't on people's minds in 1996 have taken on greater prominence, that would be a choice for Congress to make.”
Google attorney Lisa Blatt compared the choices platforms make to the choices book stores make, in terms of product placement and promoting content. Like internet platforms sorting content, a book store might separate children’s books from adult books in a store, she said. Congress intentionally protected platforms publishing other people’s speech, even if it means the publication of harmful speech, she said. That decision was made to stop lawsuits from stifling the internet in its infancy, she said: “Helping users find the proverbial needle in the haystack is an existential necessity on the Internet.”
Section 230 protects platforms' "Good Samaritan" blocking and screening of offensive material, said Justice Ketanji Brown Jackson. She questioned Blatt's argument that the protection extends to platforms "promoting offensive material," and whether the statute shields platforms "pushing" offensive material to the front of the stack.
Numerous attorneys and academics said Tuesday's oral argument led them to believe SCOTUS isn't likely to cause sweeping changes to the internet with its ruling in Gonzalez v. Google or Twitter v. Taamneh. "I do not think it was a good day for the Gonzalez side," said Boston College Law School professor Daniel Lyons in an interview. "It appeared overall there was not a huge appetite to upend the Internet, especially on a case that I believe to them looked rather weak," said attorney Catherine Gellis in a news conference hosted by advocacy group Chamber of Progress.
'Didn't Hear Five Votes'
“I didn’t hear five votes in support of the petitioner's position,” said Eric Goldman, Santa Clara University School of Law associate dean-research, in the same news conference. “There's some reason to be optimistic that Google will likely prevail.” The justices were “very explicit” that the arguments didn’t give them a clear “line” between what content platforms are liable for and what they’re not, said TechFreedom Internet Policy Counsel Corbin Barthold.
The justices “really seemed to be looking for something narrower -- where to draw the line in a very complicated area,” said Cornell digital and information law professor James Grimmelmann in an online live chat about the argument. Several justices' lines of questioning showed an understanding that “there is a world of difference between a lack of immunity and the imposition of liability,” said Mary Anne Franks, Cyber Civil Rights Initiative president, in the same live chat. CCRI filed an amicus brief in the case arguing for the scope of Section 230’s liability protections to be narrowed.
Observers told us they saw indications SCOTUS may decide to simply avoid ruling on the Section 230 aspects of the matter by finding in Twitter v. Taamneh that the platforms can’t be held liable for abetting terrorism. “That's historically been the way a court would handle these kinds of cases,” said Lyons. “Mooting this case and waiting for another case where the facts are a little bit more clean and the legal theory more interesting might be the way that the 230-skeptical wing of the court may decide,” Lyons said. It's possible that arguments in Twitter v. Taamneh could affect the justices' view of the Gonzalez case, Goldman said.
Several attorneys said they were surprised Thomas appeared skeptical of the petitioners' argument and Jackson seemed receptive to aspects of it. Before oral argument, Thomas was widely thought to be eager to rule on Section 230. Thomas’ first question, about the neutrality of algorithms, was “a Google-favorable question,” Goldman said in a blog post. “The justice that sort of left the most leeway for a potential narrowing of Section 230” was Jackson, said Barthold.
A decision for tech platforms in the Twitter and Google cases likely wouldn’t spell the end of threats to Section 230, attorneys told us. A narrowing of 230 would provide more room for states to pass their own laws on internet liability, Barthold said. “Whether it's a direct hit either in court or in Congress, Section 230 has become the target for all of the pro-censorship impulses that are across both parties and across a huge swath of American voters,” said Goldman.