Elon Musk announced plans to overhaul his artificial intelligence chatbot Grok following a series of embarrassing errors that have undermined the platform's credibility, including false claims about political figures and misidentified news events.
The billionaire said on X that he will use the upcoming Grok 3.5 model to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors" before retraining the AI system on the revised dataset. The move represents Musk's latest attempt to position his AI as a competitor to ChatGPT and other mainstream models, which he has criticized for ideological bias.
Grok has struggled with accuracy, particularly during breaking news events. Following the attempted assassination of Donald Trump in July 2024, the AI posted false headlines claiming Vice President Kamala Harris had been shot and incorrectly identifying the shooter as an antifa member [1]. The errors stemmed from Grok's inability to distinguish sarcasm from fact and its reliance on unverified social media posts.
More recently, Grok has generated responses that contradict Musk's political views, including suggesting that right-wing violence is "more frequent and deadly" than left-wing violence in the United States [2]. The AI also mistakenly confirmed that Musk had posted about stealing a White House aide's wife, when he had not [2].
Musk's solution involves what he calls "Grok-ification" of human knowledge, claiming there is "far too much garbage in any foundation model trained on uncorrected data" [1]. He has invited X users to submit "divisive facts" that are "politically incorrect, but nonetheless factually true" to help retrain the model [1].
The plan has drawn criticism from academics. Bernardino Sassoli de' Bianchi, a professor at the University of Milan, called the proposal "dangerous" on LinkedIn, arguing that when powerful people attempt to alter historical records to align with their opinions, it represents control over narrative rather than innovation [2].
Grok's accuracy issues mirror challenges faced by other AI models. Studies estimate that AI systems can have error rates as high as 20 percent, with potentially serious real-world consequences when incorrect information spreads [1]. Angie Holan, director of the International Fact-Checking Network at Poynter, warned that "AI assistants like Grok are really good at using natural language and offer responses that sound authentic. However, even when they sound right, their answers can be entirely inaccurate" [1].
The overhaul comes as Grok 3, launched in February 2025, faces competition from established models like GPT-4 and Gemini 1.5 in the rapidly evolving AI market [2].