More than 1,100 people have been charged in connection with the summer 2024 riots. A small number of them were charged with offences related to their online activity.
Their jail sentences – which ranged from 12 weeks to seven years – became a flashpoint for online criticism. The people behind the posts were variously defended, held up as causes célèbres and cast as “political prisoners”; their posts were minimised and repeated; their prosecution was misrepresented as an attack on free speech (the majority of those prosecuted for online offences faced charges of stirring up racial hatred).
The posts themselves didn’t appear in corners of the internet more readily associated with the far right – such as Telegram, Parler, Gettr, 4chan and 8kun – but on mainstream social media platforms including X and Instagram and the largest of them all, Facebook. And while most were posted on these individuals’ personal pages, others were posted in public groups.
Which got us thinking: what kind of online communities did these individuals – and the people defending them – belong to? What was being shared in these spaces? And were the views so normalised in such environments that people felt emboldened to post material which the UK authorities and judiciary had deemed met the criminal bar?
Our starting point was to trace the Facebook profiles of the charged people whom we had identified for an earlier project, using publicly available information (police and media reports). Of around two dozen people we knew to have been charged with online offences in connection with the 2024 summer riots, we traced five to three public Facebook groups. We also found dozens of visually similar or duplicated posts defending those charged cross-posted in these groups.
From there we mapped the wider network of other Facebook groups, connected not only by these posts but also by common membership and shared moderators and admins.
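To illustrate that mapping step, the sketch below builds such a network in Python with the networkx library; the group names, the group_admins structure and the rule of linking any two groups that share a moderator or admin are illustrative assumptions rather than our exact pipeline.

```python
# A minimal sketch of the group-mapping step, assuming we have already
# collected, for each group, the set of its moderators/admins.
# Group names and the group_admins mapping are hypothetical.
import networkx as nx
from itertools import combinations

group_admins = {
    "Group A": {"mod_1", "mod_2"},
    "Group B": {"mod_2", "mod_3"},
    "Group C": {"mod_4"},
}

G = nx.Graph()
G.add_nodes_from(group_admins)

# Connect any two groups that share at least one moderator or admin
for a, b in combinations(group_admins, 2):
    shared = group_admins[a] & group_admins[b]
    if shared:
        G.add_edge(a, b, shared_admins=sorted(shared))

# Connected components correspond to clusters of linked groups
for cluster in nx.connected_components(G):
    print(sorted(cluster))
```

The same graph could equally be built with edges for shared members or cross-posted content; using shared moderators keeps the example small.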
In doing so we uncovered a thriving ecosystem, growing in membership: an online community bound together by a deep distrust of government and its institutions, trading in anti-immigrant sentiment, nativism, conspiracy theories and misinformation, including content that experts have identified as far right.
But we also found deeply disillusioned people, who hold deep-seated concerns about the society of which they form a part and a genuinely held conviction that their freedom of expression is under threat.
Identifying the groups
Why these groups?
The three groups that form the basis of our main analysis were chosen because they contained one or more current or past members charged in connection with the 2024 summer riots, or because they hosted comments supportive of those involved in the riots, whether in person or online.
We subsequently found links between these and 13 other groups – all but three of which are also public groups – which have at least one shared moderator or administrator, both of which play important roles in these groups. Moderators can manage membership by approving requests or issuing bans and have the authority to remove posts or comments. Admins possess expanded powers, including the ability to adjust group settings, modify the group’s description, and appoint additional administrators or moderators.
What posts did we concentrate on?
To get a sense of the kind of material people who join such groups share and consume, we set about capturing all posts hosted on three of the largest identified groups between the date each group was established and mid-May 2025.
We captured the links and text associated with 123,000 posts in all. However, as our categorisation process (below) was necessarily restricted to just text-based posts, the subsequent analysis concentrated on 51,000 of them.
How many people are in these groups?
We did not capture the names of individual group members (aside from moderators, admins and top posters); therefore when we speak about groups’ combined membership we are almost certainly double-counting individuals who are members of more than one group.
Categorisation
We first established that the posts contained far-right content using recognised academic methods, categorising the posts via keywords and combinations of words that signal radicalisation. We then supplemented this with generative AI tools, which had recently become available to the data team following changes in the Guardian’s editorial policy on the use of such tools for journalism, categorising the posts as anti-establishment, anti-immigration, demonisation of migrants, nativism and far-right identity/denial.
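By way of illustration, keyword-based flagging of this kind can be as simple as the sketch below; the patterns and category names are placeholders, not the research lexicon actually used.

```python
# A simplified, illustrative sketch of keyword-based flagging. The terms
# and category names below are placeholders, not the actual lexicon.
import re

KEYWORD_PATTERNS = {
    "anti_establishment": [r"\bdeep state\b", r"\belites?\b"],
    "anti_immigration": [r"\bstop the boats\b", r"\bmass migration\b"],
}

def flag_categories(post_text: str) -> set[str]:
    """Return the set of categories whose keyword patterns match the post."""
    text = post_text.lower()
    return {
        category
        for category, patterns in KEYWORD_PATTERNS.items()
        if any(re.search(p, text) for p in patterns)
    }

print(flag_categories("They say mass migration is run by the elites"))
# -> {'anti_establishment', 'anti_immigration'} (set order may vary)
```

In practice, academic radicalisation lexicons also weight combinations of terms and context; simple pattern matching like this is only a first-pass filter.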
To classify these 51,000 social media posts we used GPT-4.1 via OpenAI’s API, iteratively testing prompts on a randomised sample of posts over 12 rounds until we consistently reached agreement of more than 90% between the model and three human reviewers, at least two of whom had to be in agreement with the model to progress.
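As a rough illustration, a classification call of this kind might look like the sketch below, which assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment variable; the prompt wording, temperature setting and JSON output format are our assumptions, not the production prompt.

```python
# A minimal sketch of a classification call, assuming the OpenAI Python
# SDK (v1+) with an OPENAI_API_KEY in the environment. The prompt and
# category list are illustrative, not the prompt used in the analysis.
import json
from openai import OpenAI

client = OpenAI()

CATEGORIES = [
    "anti-establishment", "anti-immigration",
    "demonisation of migrants", "nativism", "far-right identity/denial",
]

def classify_post(post_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4.1",
        temperature=0,  # deterministic output helps consistency across runs
        messages=[
            {"role": "system", "content": (
                "Label the post True or False for each category: "
                + ", ".join(CATEGORIES)
                + ". Reply with a JSON object mapping category to boolean."
            )},
            {"role": "user", "content": post_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

Setting the temperature to zero and forcing JSON output are common choices when a model is used as a classifier, since they make repeated runs easier to compare.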
Having satisfied ourselves of the model’s reliability in smaller batches, we based our final evaluation on a wider sample of randomly selected posts, at which point agreement between the human reviewers and the model stood at 93%.
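Concretely, the two-of-three rule described above can be expressed as a small scoring function; this is a sketch under the assumption that labels are per-post booleans for a single category, with toy data for illustration.

```python
# A sketch of the agreement rule described above: a post counts as agreed
# when at least two of the three human reviewers match the model's label.
# Labels here are per-post booleans for a single category (illustrative).

def agreement_rate(model_labels: list[bool],
                   reviewer_labels: list[tuple[bool, bool, bool]]) -> float:
    """Share of posts where >= 2 of 3 reviewers agree with the model."""
    agreed = sum(
        sum(r == m for r in reviewers) >= 2
        for m, reviewers in zip(model_labels, reviewer_labels)
    )
    return agreed / len(model_labels)

# Toy example: model vs three reviewers on four posts
model = [True, False, True, True]
humans = [(True, True, False), (False, False, True),
          (True, False, False), (True, True, True)]
print(f"{agreement_rate(model, humans):.0%}")  # 75% on this toy sample
```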
A check of the final output was then carried out on a statistically determined sample of posts, again reviewed by the same annotators.
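The piece does not specify how that sample size was determined. One standard approach, sketched below purely as an assumption, is Cochran’s formula for estimating a proportion, with a finite-population correction; the 95% confidence level and 5% margin of error are illustrative defaults.

```python
# A hedged illustration of one standard way to pick a check sample size:
# Cochran's formula for estimating a proportion, with a finite-population
# correction. Confidence level and margin of error are assumptions.
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Posts to review for +/- `margin` at ~95% confidence (z = 1.96)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(sample_size(51_000))  # ~382 posts at +/-5% and 95% confidence
```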
Tests concluded that the model performed very well – matching or even improving on the consistency of human reviewers in most categories:
- Accuracy (percentage of correctly classified instances): 94.7%.
- Precision (percentage of the True labels assigned by GPT that are correct): 79.5%.
- Recall (percentage of instances classified as True by the humans that were also classified as True by GPT): 86.1%.
- F1 (a single percentage combining precision and recall; higher when GPT both finds the True cases and avoids false alarms): 82.6%.
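For readers who want to see how such headline figures are produced, here is a minimal sketch using scikit-learn; the label arrays are toy data standing in for the human consensus and the model’s output, not the actual evaluation set.

```python
# A sketch showing how the four headline metrics can be computed with
# scikit-learn, treating the human labels as ground truth. The label
# arrays are toy data, not the actual evaluation set.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

human = [1, 0, 1, 1, 0, 1, 0, 0]   # reviewer consensus (1 = True)
model = [1, 0, 1, 0, 0, 1, 1, 0]   # GPT's labels for the same posts

print(f"Accuracy:  {accuracy_score(human, model):.1%}")
print(f"Precision: {precision_score(human, model):.1%}")
print(f"Recall:    {recall_score(human, model):.1%}")
print(f"F1:        {f1_score(human, model):.1%}")
```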
The model’s performance was evaluated by in-house Guardian statisticians, who concluded that the results were robust when benchmarked against similar academic studies.
Despite the robust performance of the model, it is inevitable that some posts will have been misclassified within our analysis.
We believe that the process of categorisation via the OpenAI API – while imperfect – is rigorous, defensible, transparent, and supports robust journalism.