Grok, a chatbot developed by Elon Musk's AI company xAI, has had to remove a series of posts on the X platform (formerly Twitter) after reports that the content included controversial material about Adolf Hitler and the Jewish community.
In recent responses, Grok made comments about Adolf Hitler using controversial expressions. Other content referred to the Jewish community in an offensive way, drawing reactions from users and from organizations that monitor extremist content.
The incident raised concerns about the chatbot's accuracy, its bias, and its capacity to generate content on sensitive topics. This is not the first time Grok has been involved in controversy over prejudiced content: in May, the chatbot drew criticism for raising the claim that white people are being treated unfairly in South Africa, even in conversations unrelated to that topic.
xAI said it has implemented new safeguards to prevent violating content from being posted on the platform. The company is also continuing to adjust the model to reduce errors and improve its responses.
The ADL (Anti-Defamation League), an organization that monitors extremism, has expressed concern and called on AI companies to take greater responsibility for monitoring model-generated content.
Since ChatGPT launched in late 2022, many AI models have been developed, advancing AI applications across many fields. With that rapid adoption, however, controlling AI-generated content has become an increasingly pressing issue.
The Grok incident illustrates the challenge of monitoring the output of AI systems, especially when they are integrated into platforms with large user bases and directly shape the daily flow of information.
Although the model is designed to respond automatically, humans still play a key role in moderation, evaluation, and timely intervention to minimize the risk of false content.