The AI model Grok, developed by Elon Musk’s xAI, is now generally available, and issues have already surfaced. Security tester Jax Winterbourne shared a snippet on Friday in which Grok declined a request, stating, “I’m unable to fulfill that request as it contradicts OpenAI’s usage policy.” The response caught attention online because Grok is not affiliated with OpenAI, the maker of ChatGPT, the chatbot Grok is positioned to compete with.
Notably, xAI did not deny that this behavior exists in its model. Igor Babuschkin of xAI acknowledged that Grok was trained on a vast amount of web data and that some ChatGPT outputs were inadvertently picked up in the process, and he said the issue would be addressed in future versions of Grok. Winterbourne thanked him, noting that incidents like this occur commonly in software development, and deferred to LLM and AI experts for further insight.
Some experts have raised doubts about Babuschkin’s explanation, noting that large language models typically do not reproduce their training data verbatim. A refusal that cites OpenAI’s policy by name would more plausibly require specific training on material containing such refusals, in other words, fine-tuning Grok on output data from OpenAI’s language models.
In an interview with Ars Technica, AI researcher Simon Willison expressed skepticism that Grok unintentionally picked up this behavior from scraped ChatGPT content, suggesting instead that it was likely fine-tuned on datasets containing ChatGPT outputs. Fine-tuning AI models on synthetic data generated by other models has become increasingly common, especially in open-source projects aiming to bolster specific capabilities.
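To illustrate the practice Willison describes, here is a minimal sketch of how synthetic fine-tuning pairs are often harvested from a stronger model. It is a hypothetical example, not xAI’s actual pipeline: it assumes the official `openai` Python client (v1.x), an API key in the environment, and illustrative seed prompts.

```python
# Hypothetical sketch of collecting synthetic fine-tuning data from another
# model. Not xAI's pipeline; assumes the official openai client (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative seed prompts; a real pipeline would use thousands.
seed_prompts = [
    "Explain what a binary search tree is.",
    "Write a haiku about distributed systems.",
]

synthetic_pairs = []
for prompt in seed_prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # Each (prompt, completion) pair becomes one fine-tuning example.
    synthetic_pairs.append(
        {"prompt": prompt, "completion": response.choices[0].message.content}
    )
```

Data collected this way carries over the source model’s phrasing, including its boilerplate refusals, which is how an unrelated model can end up citing OpenAI policy.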
It is speculated that xAI may have used open-source datasets derived from ChatGPT outputs to fine-tune Grok for better instruction following. Filtering that training data more carefully, as sketched below, could help xAI avoid similar incidents in the future.
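As a rough illustration of what such filtering might look like, the sketch below drops any training example whose response contains refusal boilerplate. The marker phrases and data format are assumptions made for illustration, not details of xAI’s actual data pipeline.

```python
# Hypothetical sketch of scrubbing another model's refusal boilerplate from a
# fine-tuning dataset. Marker phrases and record format are illustrative.
REFUSAL_MARKERS = [
    "openai's usage policy",
    "as an ai language model",
    "i'm unable to fulfill that request",
]

def is_clean(example: dict) -> bool:
    """Reject examples whose completion echoes refusal boilerplate,
    which a fine-tuned model could otherwise learn to repeat verbatim."""
    text = example["completion"].lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

dataset = [
    {"prompt": "Summarize this article.",
     "completion": "The article covers a dispute between two AI labs."},
    {"prompt": "Help me with a task.",
     "completion": "I'm unable to fulfill that request as it contradicts "
                   "OpenAI's usage policy."},
]

# Only the first example survives; the contaminated one is removed.
filtered = [example for example in dataset if is_clean(example)]
```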
This incident has reignited the rivalry between OpenAI and xAI, which traces back to Elon Musk’s past criticisms of OpenAI. While training on outputs borrowed from other models is common in the machine learning community, it can conflict with providers’ terms of service. The exchange between Winterbourne and Musk underscores the competitive dynamics between the two companies.