'Horrific Videos'

Content Moderator Sues Cognizant for 'Willfully' Misleading Job Description

Cognizant Technology’s human content moderation system, designed to remove offensive social media content, causes or contributes to numerous mental and physical illnesses, injuries and conditions in individuals subjected to “long-term unmitigated exposure to such content,” alleged a fraud complaint Tuesday (docket 8:23-cv-02607) in U.S. District Court for the Middle District of Florida in Tampa.

Plaintiff Zuania Vazquez-Padilla, a Florida citizen, was a Cognizant “process employee,” or content moderator, from August 2018 to January 2020. Cognizant had a $250 million annual contract with Facebook beginning in 2018 to provide human content moderation services, and it established call centers in “relatively low-paying labor markets,” such as suburban Tampa, to hire “low-paid, unsophisticated workers with little knowledge of the technology industry to perform the grueling job of content moderation,” alleged the complaint.

In her content moderation role, Vazquez-Padilla was required to review offensive content, sometimes “thousands of videos and images every shift,” and remove material that violated Facebook’s terms of use, the complaint said. Her job also included training Cognizant’s AI algorithms to eventually decipher and remove offensive content without human moderators, it said.

Cognizant knew from Facebook guidance and industry studies that long-term exposure to images and livestreams of graphic violence and other “vile content” causes “debilitating injuries,” including post-traumatic stress disorder (PTSD), the complaint said. The studies and guidance were not known or readily available to Vazquez-Padilla, an entry-level worker with little knowledge of the tech industry, it said.

The complaint referenced several studies on the effects of disturbing internet content, including a DOJ Internet Crimes Against Children task force study that found a quarter of 600 cybercrime investigators surveyed displayed symptoms of psychological trauma, including secondary traumatic stress. Cognizant, though not Vazquez-Padilla, knew about the study, it said.

Facebook was aware of the dangers of long-term unmitigated content moderation and helped draft workplace safety standards to protect content moderators, the complaint said. Though Facebook disseminated the safety standards to Cognizant, Vazquez-Padilla “was not privy” to them, it said. Facebook required Cognizant’s operational leadership to implement the workplace safety standards and to notify employees of the dangers of content moderation, but those Cognizant employees “willfully refused to comply with either of these requirements,” the complaint said.

Facebook’s workplace safety standards required Cognizant to implement prehiring psychological screening; provide moderators mandatory counseling and mental health support; alter the resolution, audio, size and color of trauma-inducing images; and train moderators to recognize the physical and psychological symptoms of PTSD, the complaint said. But Cognizant “did not provide content moderators the requisite ‘extensive training’ including ‘on-boarding, hands-on practice, and ongoing support and training,’ thereby deliberately concealing the known dangers of its system of content moderation” from Vazquez-Padilla during the onboarding phase of her employment, the complaint said.

On June 28, 2018, a recruiter contracted by Cognizant to hire content moderators sent Vazquez-Padilla a “cheat sheet” with the purpose of “misleading prospective new hires into believing the job was harmless, despite the fact Facebook had instructed [its contractors] to ‘make sure that folks have a high level of understanding what it is that they would be getting into,’” said the complaint. The cheat sheet said the job would involve deciphering sentences made up of emojis and evaluating social media-based questions such as, “What does lol mean?” it said.

During Vazquez-Padilla’s job interview, Cognizant employees lied to her about the nature of content moderation, saying she would be reviewing only “mild ‘photos’ and ‘text’” and recognizing emojis and millennial slang; she was told the job would be limited to dealing with “safe content,” the complaint said. Before Vazquez-Padilla began orientation for the position in August, the company required her to sign a “sweeping” nondisclosure agreement and instructed her not to discuss the content she moderated or her workplace conditions with anyone outside her team, said the complaint.

During her seven-week onboarding, Vazquez-Padilla saw no graphic images or “harsh material” for the first six weeks of training, and the images shown later in training “were not nearly as bad as the production floor,” the complaint said. She was told that “some of the content would be a little disturbing,” but her superiors “never showed anything graphic nor prepared her for what she was about to see daily,” it said. The onboarding process “presented a false image of what the content moderator job would entail” and didn’t prepare Vazquez-Padilla “for active violence and hate,” it said.

When she began the job, Vazquez-Padilla was assigned to the “child suicide” queue and exposed to videos and livestreams of children being molested and raped, along with other “horrific” videos of beheadings, disfigurement and molestation, the complaint said. She didn’t receive the counseling Facebook required, and Cognizant “deliberately concealed the material fact that exposure to the content may give employees post-traumatic stress disorder and other mental and physical comorbidities,” it said.

Cognizant shut down its content moderation operations in February 2020, “aware that its system of human content moderation was causing its workers extreme psychological and physical harm,” the complaint said. It laid off its workforce, leaving Vazquez-Padilla “with no means of obtaining requisite ongoing medical monitoring, screening, diagnosis, or adequate treatment after suffering psychological trauma during her employment,” the complaint said.

On a February 2020 earnings call, Cognizant CEO Brian Humphries publicly acknowledged for the first time that the content moderation system was dangerous, said the complaint. He said the company would allocate $5 million to fund research aimed at making algorithms and automation sophisticated enough to reduce users’ exposure to objectionable online content, but “it was not clear where Cognizant planned to make that donation or if it ever did,” the complaint said.

Though Cognizant pledged to donate $5 million to fund research to create automated systems that can take the place of human moderators, the company “callously refused to provide any assistance to any of the content moderators it fired or who were forced to resign due to the psychological trauma,” the complaint said. That left human content moderators such as Vazquez-Padilla “with no means of obtaining requisite ongoing medical monitoring, screening, diagnosis, or adequate treatment for the psychological trauma and physical injuries they suffered and continue to suffer,” it said.

Vazquez-Padilla asserts fraudulent misrepresentation and seeks an award of actual, compensatory, exemplary and punitive damages; injunctive relief enjoining Cognizant from conducting business through the unfair practices alleged; and reasonable litigation expenses, the complaint said. Cognizant didn't comment Wednesday.