Musk’s xAI hires safety teams to tame Grok amid slur controversy

NEW YORK, UNITED STATES — Elon Musk’s artificial intelligence startup, xAI, is ramping up hiring for safety roles as its chatbot Grok faces criticism for producing offensive content—including racial slurs and NSFW responses.
Red teaming to test chatbot limits
The company is hiring workers to “red team” the AI system, a process aimed at identifying vulnerabilities by pushing the chatbot to its limits. According to the listing, xAI is seeking “talented researchers and engineers to improve the safety of our AI systems and ensure that they are maximally beneficial for society.”
The job description highlights responsibilities such as countering misinformation and addressing risks across various domains like cybersecurity and nuclear safety. Additionally, xAI is recruiting backend engineers and researchers to develop frameworks for monitoring and moderating AI behavior.
Brent Mittelstadt, a data ethicist at Oxford University, noted that companies typically train chatbots early on to avoid obvious failures like racial or gendered slurs. “At a minimum, you would expect companies to have some kind of dedicated safety team performing adversarial prompt engineering,” he told Business Insider.
Grok’s controversial features
Users on X, formerly Twitter, have exploited Grok’s features, prompting it to use offensive language in violation of platform policies.
Data from Brandwatch revealed that in March alone, the chatbot used the N-word 135 times, up from no recorded usage in January and February. One instance included Grok responding ambiguously to a user about its ability to use racial slurs, stating it could but should “use it carefully to avoid offense.”
In February, xAI released Grok 3 with new features like voice mode and NSFW options labeled “sexy” and “unhinged,” targeting users aged 18 and older. A subsequent feature allowing users on X to interact directly with Grok has been popular but also exploited for inappropriate prompts.
Musk has positioned Grok as an alternative to what he describes as “woke” chatbots like ChatGPT. However, the recent controversies highlight the challenges of balancing user freedom with responsible AI behavior.