Amid escalating tensions between India and Pakistan, social media users increasingly turned to AI-powered chatbots like xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini for quick fact-checking. Recent investigations, however, found that these tools frequently provided misleading or false information, raising serious concerns about their reliability during breaking news events.
An AFP fact-checking report highlighted critical errors during the conflict, including Grok falsely identifying footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase. Another viral video, which actually showed a building fire in Nepal, was misrepresented as Pakistani retaliation for Indian military action. These cases underscore the risks of relying on AI for real-time news verification.
NewsGuard researcher McKenzie Sadeghi noted that AI chatbots have become more prominent as social media platforms scale back human fact-checking. “AI tools are consistently unreliable during fast-moving events,” she said, citing studies in which ten leading chatbots repeatedly relayed Russian disinformation and fabricated claims about Australian elections.
The Tow Center for Digital Journalism at Columbia University also found that these AI tools tend to speculate when unsure, rarely declining to answer unverifiable questions. In one case, Google’s Gemini confirmed the authenticity of an AI-generated image of a woman and even invented a fictional biography for her. Grok, for its part, endorsed a hoax video of a giant anaconda in the Amazon River, citing nonexistent scientific research.
Meta’s decision to end its third-party fact-checking program in the U.S. and shift to a user-driven “Community Notes” model has further heightened concerns. Experts argue that community moderation cannot replace trained fact-checkers, particularly as accusations of political bias in fact-checking continue to circulate.
Adding to the controversy, Grok recently faced backlash for responses referencing the “white genocide” conspiracy theory. xAI attributed the issue to an unauthorized change to its system prompt. When questioned, Grok pointed to Elon Musk as the likely source of the modification, citing his past promotion of similar narratives.
Angie Holan of the International Fact-Checking Network voiced deep concern about AI systems’ capacity to produce biased or fabricated responses, particularly when the humans programming them can alter or predetermine their answers. The growing reliance on AI in the fight against misinformation, she warned, could pose new challenges to truth and transparency in journalism.