Robby Starbuck Sues Google Over AI-Generated Child Abuse Claims

Conservative activist Robby Starbuck has filed a major defamation lawsuit against Google, accusing the tech company of spreading false and damaging claims through its artificial intelligence tools. The suit, filed in Delaware Superior Court, alleges that Google’s AI systems falsely connected him to accusations of sexual assault, child abuse, and financial exploitation.

Starbuck said the tech giant has had two years to fix the problem but did nothing.

“They’ve had [two years] to fix this,” he said during an interview on The Will Cain Show. “That’s beyond negligence. That’s pure malice at that point and, even if it’s born from negligence, it’s malicious.”

The lawsuit specifically names Google’s AI platforms — Bard, Gemini, and Gemma — accusing them of producing and spreading false claims about Starbuck since 2023. According to the suit, Gemini’s own system reported displaying those false statements to nearly 2.8 million unique users.

For Starbuck, the issue is deeply personal and urgent. He said it took only hours for the lies to spread online and that Google ignored multiple cease-and-desist letters.

“This is something that can’t happen in elections, so I had to put my foot down,” he said. “The line for me was when it started saying that I was accused of crimes against children. It was like, ‘I can’t sit by and hope Google’s going to do the right thing. I have to file a suit to protect my reputation before this goes any further.’”

Starbuck, who is also a visiting fellow at the Heritage Foundation, is seeking at least $15 million in damages. He argues that Google’s inaction has not only harmed his career but endangered his family’s safety and reputation.

The case highlights growing concerns about “hallucinations,” a term used to describe AI systems generating false or misleading statements. Google acknowledged that issue in a statement responding to the lawsuit.

“Most of these claims relate to hallucinations in Bard that we addressed in 2023,” the company said. “Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimize. But, as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

Starbuck disagrees, saying the prompts he used were simple and innocent — hardly “creative.”

“They’re as simple as saying, ‘Hey, give me a bio on Robby Starbuck,’ or ‘Hey, tell me about Robby Starbuck,’” he said.

He said the misinformation spread fast enough that people in his own community started questioning him.

“I had somebody come up to me and ask me if these accusations were true,” Starbuck explained. “So people are reading and believing these things. That’s very dangerous.”

The controversy has reignited debate over Big Tech’s control of information and its political biases. Many conservatives have warned that AI models, trained on massive datasets shaped by Silicon Valley’s worldview, could become new tools of censorship and defamation.

Starbuck’s lawsuit is one of the first to directly challenge a major tech company over AI-generated lies. His legal team argues that if Google allows its products to make and repeat false claims about individuals, it should be held accountable just like any other publisher.

The lawsuit also comes amid broader government scrutiny of Big Tech’s use of AI. Lawmakers have already raised concerns about artificial intelligence being used to influence elections, manipulate search results, and distort reputations without oversight or accountability.

As the case moves forward, Starbuck says he’s determined to set a precedent. “If this can happen to me, it can happen to anyone,” he said. “We can’t live in a world where a machine can destroy someone’s name overnight.”
