Experts are warning that the proliferation of AI-generated racist content on social media is set to escalate in the coming year. At the centre of these concerns is the release of X’s updated AI software, Grok, and its advanced Aurora image-generation feature.
Signify, an organisation tracking online hate in collaboration with sports clubs, has reported a surge in abusive imagery since Grok’s December update.
The organisation described the development as “just the start of a coming problem”, warning that the situation will worsen significantly over the next 12 months.
Grok, launched in 2023 by X owner Elon Musk, gained the Aurora feature in its December update, allowing it to generate photorealistic images from user prompts.
The tool has already been used to create offensive images targeting football players and managers, including depictions of racial stereotypes and historical figures infamous for their roles in atrocities.
Concerns about incentives and safeguards
Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), criticised X for creating a system that financially incentivises the spread of hateful content.
He said, “The thing that X has done, to a degree that no other mainstream platform has done, is to offer cash incentives to accounts to do this, so accounts on X are very deliberately posting the most naked hate and disinformation possible.”
X’s revenue-sharing model reportedly encourages accounts to post inflammatory material, which AI-generated imagery now makes far easier to produce.
The CCDH has also highlighted Grok’s limited safeguards. In a recent study, the AI generated images for 80% of hateful prompts tested: 30% were produced without any resistance, and a further 50% after users bypassed restrictions through “jailbreaking”, which involves providing detailed physical descriptions rather than names to evade content filters.
Impact on sports organisations
Sporting organisations are among the hardest hit by this trend. The English Premier League has reported more than 1,500 cases of online abuse directed at players in the past year.
It has assembled a dedicated team to track and report incidents, with legal action against offenders a key focus. Filters to block abusive content have also been introduced for players’ social media accounts.
A spokesperson for the English Football Association (FA) condemned the rise in abuse, saying, “Discrimination has no place in our game or wider society. We continue to urge social media companies and authorities to tackle online abuse and take action against offenders.”
Broader implications
The misuse of generative AI tools like Grok raises urgent questions about the accountability of technology developers and social media platforms. Experts argue for stronger regulation to curb the spread of harmful content and stricter guidelines for AI software to prevent misuse.
As the boundaries of AI capabilities expand, so too does the potential for harm, creating a pressing need for coordinated efforts between tech companies, regulators, and advocacy groups to address these challenges.