Elon Musk Empowers AI to Spot Fakes on X

Elon Musk recently announced plans to deploy artificial intelligence more aggressively on X to verify or flag falsified content posted on the platform.

Photo: Sonny Tumbelaka/AFP

Musk intends to use his AI model Grok, developed by xAI, to automatically analyze posts and flag potential misinformation. This marks a shift toward a more automated, push-button approach to fact-checking, supplementing X's existing Community Notes system.

While Grok is widely available on X and already assists users with inquiries, it is not without flaws. In late June, the Atlantic Council's Digital Forensic Research Lab analyzed 130,000 Grok posts related to the Israel-Iran conflict and found that about a third incorrectly verified fake videos as genuine footage or misattributed them. These incidents exposed Grok's struggle to separate authentic material from AI-generated viral content, such as mislabeling scenes from Iran or falsely claiming attacks on Tel Aviv.

In related developments, Musk publicly chastised Grok on June 18 after the AI asserted that right-wing political violence in the U.S. had been more deadly since 2016. Musk called the claim "objectively false," underscoring the delicate line between AI-driven analysis and political interpretation.

These examples highlight a broader debate: can rapidly evolving AI tools act as effective truth filters, or will they worsen misinformation? After Musk's takeover, X dismantled its legacy moderation teams and leaned heavily on Community Notes, a crowd-sourced fact-checking layer. With its growing reliance on AI, the company must strengthen guardrails, model accuracy, and responsiveness.

For users, the promise is compelling: faster detection of fabricated posts and deepfakes. The risk lies in overreliance: missteps by Grok could spread false assurances rather than correct falsehoods. Musk's pivot toward AI-powered moderation underscores both technological ambition and the need for robust oversight. As X moves forward, its test will be ensuring that Grok evolves from a blunt instrument into a trusted guardian of truth, without sacrificing nuance or accountability.