Elon Musk’s AI chatbot Grok was briefly suspended from X on Monday after posting that Israel and the United States were committing genocide in Gaza, citing reports from the International Court of Justice (ICJ), United Nations experts, Amnesty International, and Israeli rights group B’Tselem.
During the suspension, Grok’s account was replaced with a notice saying it had violated platform rules. Once reinstated, the chatbot explained that its statement was based on genocide allegations supported by multiple international bodies and asserted U.S. involvement through arms sales. It added that while counterarguments dispute intent, it considered the claim supported by the available facts. Grok later suggested the suspension might have been caused by a “platform glitch.”
Both Israel and the United States have denied the genocide accusations. The debate over Gaza intensified after Brown University professor and former Israeli soldier Omer Bartov wrote in The New York Times that Israel is committing genocide against Palestinians, calling it a painful conclusion but one supported by a growing number of experts.
Musk called Grok’s suspension “a dumb error” and said the chatbot did not know why it had been removed. He admitted the platform often makes mistakes, adding, “Man, we sure shoot ourselves in the foot a lot!” After returning, Grok softened its statement, noting that the ICJ had found a “plausible” risk of genocide but that intent had not been proven, and concluding that war crimes were likely even as the debate continues.
This is not the first controversy for Grok. In July, it faced backlash for inserting antisemitic comments without being prompted, and in May, it was criticized for mentioning “white genocide” conspiracy theories about South Africa in unrelated discussions. Both incidents led to apologies from xAI and promises of stricter safeguards.
The latest suspension highlights the risks of using AI chatbots for sensitive political topics, where accuracy, context, and ethical judgment are essential. With tensions over Gaza still high, Grok’s remarks and the swift backlash show how quickly AI-generated statements can spark global disputes.