'Significant Amount of Work'

Amid Digital Rights Proposals, Platforms Defend Content Moderation Transparency

Facebook, Google and Twitter executives defended content moderation transparency efforts Monday, the day a coalition of digital rights advocates urged online platforms to release specific information on the scope of and reasoning for content removal. The Electronic Frontier Foundation, Center for Democracy & Technology and New America’s Open Technology Institute were among groups unveiling their Santa Clara Principles. The document said platforms should report the volume of removed content, offer detailed explanations for takedowns and provide an opportunity for appeal.



Twitter Vice President-Trust and Safety Del Harvey said during a Content Moderation at Scale Summit panel in Washington that she would love to see the industry continue to push for content moderation transparency: “I know that Twitter has a significant amount of work planned in that space.” A Twitter spokeswoman said an announcement is coming “soon.” EFF Senior Staff Attorney Nate Cardozo said the document's goal “is to ensure that enforcement of content guidelines is fair, transparent, proportional and respectful of users’ rights.”

At the event, Facebook Policy Manager-Risk Peter Stern discussed the social media platform’s recent decision to publish internal guidelines for content moderation (see 1804240047) and expand the appeals process for takedowns to include individual posts, not just groups, pages and profiles. The decision allows the public to understand “the substance of the lines our reviewers are drawing” on a day-to-day basis, said Stern: “We want people to understand why we’re making the decisions that we are.”

Google Senior Litigation Counsel Nora Puckett said YouTube removed 8.2 million videos for guideline violations in Q4, a quarter in which about 28 million videos were flagged. In addition to automated tools and employee moderators, YouTube relies heavily on users to flag content, she said. Internet Association CEO Michael Beckerman (see 1803140040) told the Senate Judiciary Committee in March that user moderation is “essential” in combating harmful content.

Vimeo Director-Legal Affairs and Trust and Safety Sean McGilvray said it’s important to find a proper balance between automation and human review: “We’re sort of in an automation arms race with spam bots and fake follower accounts and really highly automated adversaries.” TripAdvisor Director-Content Becky Foley said the industry doesn't have “unlimited resources,” so it needs “to figure out where we can make compromises, and where we can reduce risk in automation” and identify where it can deliver the most value for users.

“These companies are becoming the de facto arbiters of what content is allowed and not allowed on the internet, a dangerous power and an awesome responsibility that requires meaningful checks and balances,” said New America Open Technology Institute Director Kevin Bankston. CDT Free Expression Project Director Emma Llansó cited positive developments from several platforms but argued more needs to be done, as content moderation has “major consequences for individuals’ rights to freedom of expression.”