'Gargantuan Liability'

9th Circuit Content Moderation Ruling Will Chill Speech, Amici Tell SCOTUS

Upholding the 9th U.S. Circuit Court of Appeals ruling that Twitter abetted terrorists because the platform was used by ISIS for recruitment (see 2211300073) would have a chilling effect on free speech, open numerous businesses to massive liability, and ignore the difficulties, costs and scale of content moderation, said amicus filings from the U.S. Chamber of Commerce, CTA, CCIA and others in Supreme Court case Twitter v. Taamneh (docket 21-1496). “If that is a sufficient basis for liability, intermediaries will no longer be able to function as fora for others’ speech, and free expression will be the loser,” said a joint filing from the ACLU, the R Street Institute, the Reporters Committee for Freedom of the Press, the Center for Democracy & Technology, and others.

The 9th Circuit’s interpretation of the Antiterrorism Act is too broad, and SCOTUS should affirm that a defendant must possess actual knowledge that a piece of user-generated content provides substantial assistance to a terrorist attack before being held liable for aiding terrorists, said the joint filing from the ACLU and other groups. “Faced with potential ATA liability, all manner of speech intermediaries -- not only online platforms -- will grow more risk-averse and more susceptible to overly cautious moderation,” said the filing. “Online intermediaries will be forced to suppress protected speech.” The 9th Circuit’s ruling “would open the door to future federal or state legislation imposing liability on online intermediaries for inadvertently hosting other kinds of content,” the amicus filing said.

The complaint doesn’t claim that the defendants’ platforms were used in the planning or execution of the specific 2017 Istanbul terrorist attack at the heart of the case, said a joint amicus filing from the U.S. Chamber of Commerce, the National Foreign Trade Council and others. “Eliminating the link between the alleged aid and the act injuring the plaintiff” would allow plaintiffs’ lawyers “to threaten gargantuan liability, and thereby coerce unjustified settlements,” said the filing. Companies would be forced to either “absorb the high costs of settling unjustified lawsuits or to stop doing business in the conflict-ridden, developing parts of the world in which such claims typically arise.”

The 9th Circuit ruling “opens the door to abusive claims alleging only that somewhere in a company’s customer base -- which for many companies includes tens or hundreds of millions of people -- unidentified individuals are using the business’s products or services in a way that supposedly furthers terrorists’ goals,” said the business groups.

Content moderation at the scale of modern, massive social media platforms is a huge job that's impossible to perform perfectly, said a joint filing from CCIA, NetChoice, CTA, ACT and others. “Although the amounts of harmful and objectionable content are small percentages of all content online, they are large in absolute terms,” the amicus filing said. Over a six-month period in 2020, seven online services removed “nearly six billion pieces of harmful content,” the filing said. “When faced with billions of pieces of content -- each with its own specific context -- online services will make mistakes.”

As platforms develop technology such as algorithms to moderate content, there's always “an escalatory response from those seeking to evade online services’ content moderation,” the filing said. The brief cited the example of the livestreamed video of the 2019 terrorist attack on mosques in Christchurch, New Zealand. The video was removed minutes after police alerted the platform, but it still “spawned countless copies” and “remains a constant target of content-moderation enforcement to this day,” the filing said. “Despite the limited reach of the original video, one service alone ‘removed about 1.5 million videos of the attack globally’ within the first 24 hours of the terrorist attack,” the filing said.

“Even these imperfect efforts are an important facet of operating modern online services, which all face constant threats from malicious actors and harmful content,” said the joint filing from CCIA and others. “Without such moderation, dangerous and objectionable content accessible on online services would proliferate exponentially.”