'Bogus Information'

Pro Se Plaintiff Sues Google for 'Inaccurate' AI Answers From Bard

Google’s conversational AI tool, Bard, introduced in March, is “inherently flawed” because the data used to create the algorithm isn’t “precise,” making the answers given to user prompts “inaccurate,” alleged pro se plaintiff Jeffrey Ito in a complaint Monday (docket 24PS-cv-00061) in Los Angeles County Superior Court.

Ito, a Hacienda Heights, California, resident, brings a claim of deceptive trade practices against Google because its Bard service “provides inaccurate information for legal research language model prompt answers,” said the complaint.

Ito, a Google user, began using Bard to help him with his legal research for a civil case involving medical malpractice, said the complaint. On Nov. 16, he asked Bard to give him “all case law related to medical malpractice related to incorrect diagnostic tests,” then narrowed that request to California, then to “incorrect fMRI diagnostic tests” in California and then to “incorrect brain diagnostic tests” in California, the complaint said.

Answers Ito received had “deceptive, inaccurate, and bogus information,” the complaint said. The answers were followed by words that said, “Bard may display inaccurate info, including about people, so double-check its responses,” it said. The words “Your privacy & Bard” are located underneath the Google Bard prompt search bar, the complaint noted.

On Dec. 14 and Jan. 4, Ito entered prompts into Bard asking for all case law related to medical malpractice involving “wrong diagnostic tests” in California and “case law in the United States involving medical malpractice specifically medical misdiagnosis,” the complaint said. He received answers consisting of “deceptive, inaccurate and bogus information,” it said.

On Dec. 6, Google introduced Gemini Pro, an update said to make Bard more capable of understanding, summarizing, reasoning, coding and planning, the complaint said. The most recent update, on Dec. 18, allows users “to retrieve their personal data from Gmail, Google Docs and Google Drive into the service and could involve sensitive information and potentially impact the prompt responses given to users,” the complaint said, quoting Google.

The complaint called out a disclaimer in the Google Bard Generative AI Additional Terms of Service saying it “may sometimes provide inaccurate or offensive content that doesn’t represent Google’s views.” The disclaimer also said users should use “discretion before relying on, publishing, or otherwise using content provided by the Services.” And it said users should not rely on Bard “for medical, legal, financial, or other professional advice,” the complaint said. Content on those topics is for “informational purposes only” and not a substitute for “advice from a qualified professional,” the complaint noted.

An FAQ section on why Bard can get things wrong says generative AI “and all of its possibilities are exciting, but it’s still new,” said the complaint. “Bard is an experiment, and it will make mistakes,” including providing inaccurate information or even “offensive statements,” it said. The disclaimer encourages users to double-check information they get from Bard by using Google Search. It also says Bard relies on people to give feedback “on answers that don’t seem right," the complaint said.

Large language model (LLM) experiences, including Bard, “can hallucinate and present information as factual," the complaint said, quoting Google as saying Bard “often misrepresents how it works.” Bard may occasionally claim it uses personal information from Gmail or other private apps for machine-learning training, but “that’s not accurate,” said the complaint, quoting from Google FAQs. “Bard does not have the ability to determine these facts,” it said, referring users to Google’s privacy policy.

LLMs such as Bard use deep learning, including algorithms like generative adversarial networks, to understand and generate “human-like text, processing vast datasets to learn language patterns and context,” said the complaint. LLMs are “designed to provide contextually relevant responses," it said. Based on Ito’s use of Bard, “it is apparent that its data source fails to utilize correct, factual information to provide responses, specifically as it pertains to legal research, thus causing significant financial harm to its intended users,” the complaint said.

Google owns Google Scholar, which provides users with articles and case law, so Google has an “appropriate and factually correct case law dataset that should be present within their Google Bard algorithm,” as it provides LLM as a service to its users, the complaint said.

Ito “suffered injury” as a result of using Bard to perform legal research and receiving “inaccurate information from the service,” said the complaint. Since the Google Bard service is used in a consumer context, the product defects “are foreseeable and could have been prevented with proper initial product design,” the complaint said. Google owed a “reasonable duty of care” to Ito and other users to ensure Bard’s product design “meets expectations of a quality product that provides contextually relevant responses to user prompts,” it said. Bard breached the duty of care by presenting inaccurate information to Ito and intended users, it said.

Ito suffered damages including emotional distress, financial loss, reputation damage, wasted time and effort, loss of opportunities, legal consequences, inability to make informed decisions, negative impact on relationships, and “hindered career progression,” it said. He seeks a judgment of $5 million plus interest, legal costs and “attorney’s fees.” Google had no comment Wednesday.