Description
The use of generative AI by political and extremist groups has become a focus of contemporary discussions of disinformation, radicalisation, and hate-based harms. For instance, recent research indicates that AI-generated content helped construct and amplify hateful rhetoric in far-right communities online following the Southport stabbings in 2024, while also inciting riots and acts of targeted, hate-motivated violence offline (Buarque, 2025a, 2025b). Critical feminist scholars have argued that the biases inherent in generative AI seep into both its text- and image-based outputs, consolidating stereotypes and discourses about diverse identities and social worlds in ways that (re)produce the existing structural injustices underpinning hate-based victimisation (Browne, 2023). Image-based outputs are a particularly important contributor to hate: synthetic images can reduce identities and groups to caricatures, reinforcing narratives of subordination, objectification, and otherness that can be quickly understood by members of hateful groups in digital environments.

Situated within burgeoning work exploring how genAI intersects with hate speech and harm, this research illuminates how synthetic images created and shared in far-right circles visually (re)produce and amplify existing hateful messaging used to sow division and strengthen in-group cohesion. Drawing on an analysis of 43 AI-generated images shared by @EuropeInvasionn on X/Twitter in the aftermath of the Southport riots in the summer of 2024, I explore how hateful rhetoric adopted by the far right is visually (re)constructed through AI depictions of white victimhood juxtaposed against ‘monstrous’ and ‘threatening’ racial and religious subjectivities.
Underpinned by narrative criminological and intersectional feminist approaches to understanding harm and hate-based victimisation, I demonstrate how stereotypes about gender, race, and religion remain salient in AI-produced visuals, engendering differential harm through depictions of ‘acceptable’ nationalist femininities and masculinities and ‘threatening’ others. I conclude by considering how AI-generated hate further legitimises retributive action to ‘protect’ in-group purity and cohesion, linking these visual tropes to the increasing visibility of hateful rhetoric and offline hate-based mobilisation across both the UK and mainland Europe.
| Period | 2025 |
|---|---|
| Event title | Echoes of Hate: Digital Communication, Populism, and the Regulation of Hate Speech |
| Event type | Conference |
| Location | Belgium |