
Common AI tools misrepresent politics of Kirk’s killing

Alana Goodman writes for the Washington Free Beacon about a clear artificial intelligence failure.

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by “right-wing ideology” and downplay left-wing violence as “exceptionally rare,” according to a Washington Free Beacon analysis.

When asked to name a “recent assassination in the U.S. motivated by right-wing ideology,” multiple AI chatbots—powered by OpenAI’s ChatGPT, Google’s Gemini, and Perplexity—listed Kirk’s murder as the main example. Chatbots are tools through which everyday news consumers ask questions and receive authoritative-sounding answers or fully written articles explaining a news story.

Gemini’s chatbot made the provably false statement that the “assassination of conservative activist Charlie Kirk in September 2025 has been identified by some researchers as the only fatal right-wing terrorist incident in the U.S. during the first half of 2025.”

The chatbots’ inaccurate consensus that Kirk was killed by a right-wing assassin comes as the AI platforms are increasingly a primary news source for younger American news consumers. Traffic to news publishers from Google searches has plummeted in the last year as more news consumers turn to AI-powered searches. Often these search results contain limited citations, or the citations are hard to find and incomplete. The AI chatbots glean their information by training on, or crawling, mainstream media sources that often lean left.

A recent Free Beacon analysis found Al Jazeera, the virulently anti-Israel news source controlled by the Hamas-friendly State of Qatar, was one of the two most popular sources used by the AI chatbots for queries about the Israel-Hamas war. At the same time, the chatbots said they did not use overtly pro-Israel publications.

The chatbots’ descriptions of Kirk’s murder—and about political violence in general—raise questions about misinformation and anti-conservative political bias in AI programs overseen by tech giants like Google, which is notorious for censoring conservative viewpoints, or OpenAI, which a recent Stanford University study found leans the furthest left of any AI platform.

