
ChatGPT isn’t the only chatbot pulling answers from Elon Musk’s Grokipedia

Google’s Gemini, AI Mode, and AI Overviews, as well as Perplexity and Microsoft’s Copilot, are starting to cite Musk’s Wikipedia knockoff.

Robert Hart
is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science, and tech for Forbes.

ChatGPT is using Grokipedia as a source, and it’s not the only AI tool to do so. Citations to Elon Musk’s AI-generated encyclopedia are starting to appear in answers from Google’s AI Overviews, AI Mode, and Gemini, too. Data suggests that’s on the rise, heightening concerns about accuracy and misinformation as Musk seeks to reshape reality in his image.

Since the warped Wikipedia clone launched late last October, Grokipedia has remained a minor source of information overall. Glen Allsopp, head of marketing strategy and research at SEO company Ahrefs, told The Verge the firm’s testing found Grokipedia referenced in more than 263,000 ChatGPT responses out of 13.6 million prompts, citing roughly 95,000 individual Grokipedia pages. By comparison, Allsopp said the English-language Wikipedia showed up in 2.9 million responses. “They’re quite a way off, but it’s still impressive for how new they are,” he said.

Based on a dataset tracking billions of citations, marketing platform Profound researcher Sartaj Rajpal said Grokipedia received around 0.01 to 0.02 percent of all ChatGPT citations per day — a small share but one that has steadily increased since mid-November.

Semrush, which tracks how brands show up in Google tools’ AI answers with its AI Visibility Toolkit, found a similar step-up in Grokipedia’s visibility in AI answers starting in December, but noted it’s still very much a secondary source compared to established reference platforms like Wikipedia.

Grokipedia citations appear more often in ChatGPT than in any other platform tracked by the analysts The Verge spoke to. However, Semrush found a similar spike in Google’s AI products — Gemini, AI Overviews, and AI Mode — in December. Ahrefs’ Allsopp said Grokipedia had been referenced in around 8,600 Gemini answers, 567 AI Overviews answers, 7,700 Copilot answers, and 2 Perplexity answers, from around 9.5 million, 120 million, 14 million, and 14 million prompts, respectively, with appearances in Gemini and Perplexity down significantly from similar testing the month before. None of the firms The Verge spoke to track citations for Anthropic’s Claude, though several anecdotal reports on social media suggest that chatbot is also citing Grokipedia as a source.

In many cases, AI tools appear to be citing Grokipedia to answer niche, obscure, or highly specific factual questions, as The Guardian reported late last week, and analysts agree. Jim Yu, CEO of analytics firm BrightEdge, told The Verge that ChatGPT and AI Overviews use Grokipedia for largely “non-sensitive queries” like encyclopedic lookups and definitions, though differences are emerging in how much authority they afford it. In AI Overviews, Grokipedia tends not to stand alone, Yu said, and “typically appears alongside several other sources” as “a supplementary reference rather than a primary source.” When ChatGPT uses Grokipedia as a source, however, it gives it much more authority, Yu said, “often featuring it as one of the first sources cited for a query.”

Even for relatively mundane uses, experts warn that citing Grokipedia risks spreading disinformation and promoting partisan talking points. Unlike Wikipedia, which is edited by humans in a transparent process, Grokipedia is produced by xAI’s chatbot Grok. Grok is perhaps best known for its Nazi meltdown, calling itself MechaHitler, idolizing Musk, and, most recently, digitally stripping people online, including minors. When it launched, the bulk of Grokipedia’s articles were direct clones of Wikipedia, though many others reflected racist and transphobic views. For example, articles about Musk conveniently downplay his family wealth and unsavory elements of their past (like neo-Nazi and pro-Apartheid views), and the entry for “gay pornography” falsely linked the material to the worsening of the HIV/AIDS epidemic in the 1980s. The article on US slavery still contains a lengthy section on “ideological justifications,” including the “Shift from Necessary Evil to Positive Good.” Editing is also overseen by Grok and is similarly flawed, leaving Grokipedia more susceptible to what is known as “LLM grooming,” or data poisoning.

In a comment to The Verge, OpenAI spokesperson Shaokyi Amdo said: “When ChatGPT searches the web, it aims to draw from a broad range of publicly available sources and viewpoints relevant to the user’s question.” Amdo also said that users can see the sources and judge them themselves: “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations, allowing users to explore and assess the reliability of sources directly.”

Perplexity spokesperson Beejoli Shah would not comment about the risks of LLM grooming or citing AI-generated material like Grokipedia, but said the company’s “central advantage in search is accuracy,” which it is “relentlessly focused on.” Anthropic declined to answer on the record. xAI did not return The Verge’s request for comment. Google declined to comment.

The point is that Grokipedia can’t be reliably cited as a source at all, no matter how infrequently, and despite Musk taking an unsubstantiated victory lap about the encyclopedia’s alleged wild success in Google Search results. It is an AI-generated system lacking human oversight, often reliant on opaque, hard-to-verify material like personal websites and blog posts, and built on questionable, potentially circular sourcing. There’s a real risk of an AI tool reinforcing biases, errors, or framing issues if it cites something like Grokipedia, said Taha Yasseri, chair of technology and society at Trinity College Dublin, adding that “fluency can easily be mistaken for reliability.”

“Grokipedia feels like a cosplay of credibility,” said Leigh McKenzie, director of online visibility at Semrush. “It might work inside its own bubble, but the idea that Google or OpenAI would treat something like Grokipedia as a serious, default reference layer at scale is bleak.”



