How a few Twitter posts may have inflamed the violence hitting the UK (2024)


Even before much was known about the killings in Southport, online speculation had begun. Much of it was false: posts about the attacker’s name and supposed Muslim identity were trending long before the actual facts could be known.

But the posts had already done their work. A vast and violent far-right response had been mobilised, and almost a week later towns across the country have been overtaken by riots and disorder.

Experts agree that it is impossible to say how much of that unrest is the result of those false posts and information. But it is clear that they helped inflame a narrative that was untrue but nonetheless helpful to far-right communities that were already organised.



“The riots are a response to the accusation that the killer in the Southport stabbings was a Muslim, and that is categorically false. So it’d be hard to say that misinformation didn’t have a role in the current moment,” says Marc Owen Jones, an associate professor of digital humanities at Northwestern University in Qatar. “Secondly, much disinformation – though not all – is spread on social media, and that is increasingly how people communicate, and that’s how much of the particular disinformation in these cases was spread, so I think again it’s very hard to say that social media doesn’t play a key role in this.

“But it’s not the only thing; it’s not only the immediate moments before and after the killings that are the issue. There has been a diet of disinformation and xenophobic, anti-migration misinformation going on for months, targeting the UK, so I think we have to consider how all of this plays a role in what happened.”


It is not necessarily possible to say how much blame falls on the misinformation compared with more longstanding motives, because misinformation works precisely by spreading among people who are ready to receive it. Sander van der Linden, a professor of social psychology and disinformation expert at the University of Cambridge, likens it to a virus – misinformation researchers often use models from epidemiology to track the spread of false information – in the way that it needs to find a “susceptible host”. Just as some people are more at risk of falling sick, some are more at risk of falling for lies.

“I think that’s where it intersects with the far-right communities that have been spreading anti-immigrant, anti-Muslim misinformation for a long time,” he says. “They have a playbook, they have a long history of doing this, so they can easily leverage their networks and their rhetoric around it.”

Much of the false information about the attack seemed to come from a website called Channel 3 Now, which generates video reports that look like mainstream news coverage. But its video and its false claims about the name of the attacker might have stayed relatively obscure had they not been highlighted by larger accounts.

On X, users with considerable followings quickly shared that video and spread it across the site. And on other platforms such as TikTok – where videos can go viral quickly even if the accounts posting them do not have large followings, because of the app’s algorithm – they racked up hundreds of thousands of views. At one point, the false name of the attacker was a trending search on both TikTok and X, meaning that it was shown to users who might otherwise have taken no interest in it at all.

A large part of the inflammatory misinformation flared up on X, the social media site now owned by Elon Musk. In the time since he bought what was then called Twitter, in late 2022, both the site and his personal account have been repeatedly criticised for allowing false and dangerous content to flourish.

Social networks have long built their systems to encourage engagement, which brings in money, and inflammatory and antagonistic posts have long been a quick way to generate that engagement. But experts agree that it is very unlikely the misinformation would have spread in the same way on the old Twitter, before Mr Musk took it over.

He immediately fired many of the staff on its safety team, for instance, whose work had helped counter misinformation, and that came with a weakening of both the rules on misinformation and their enforcement. (Twitter’s rules do still officially ban “inciting behaviour that targets individuals or groups of people belonging to protected categories”.) He also changed the Twitter API in a way that made it prohibitively expensive for most researchers to gather tweets, making it nearly impossible to track that misinformation and its spread. Experts also say that the relative lack of enforcement on X has made it easier for other social networks to weaken their own enforcement against misinformation and hate speech; there are few real legal requirements on those companies, and much of the pressure to act is social.

There are some relatively easy measures that all companies could take to limit the spread of this information, researchers note – and they point out that X was previously taking at least some of them. It could more fully enforce its rules against hate speech, at least temporarily limiting the accounts posting it, or it could rethink the verification tools that let anyone pay to have their content boosted. It may eventually be forced to do some of those things, given an increased focus on its policies from the European Union that could lead to more scrutiny from other governments.

But X is just one site in a vast and active media ecosystem that helps propel information quickly, with little concern for whether it is real or not. Posts that begin on X will make their way to other social media platforms as well as chat apps such as WhatsApp and Telegram – which are private and therefore difficult to track – and information will also flow the other way. But X is notable because it allows people to build a large platform quickly, and to spread information to mainstream audiences and people who might not otherwise follow such personalities.

X did not respond to a request for comment from The Independent on any of these issues. The company fired much of its press team in the wake of Elon Musk’s takeover.

Elon Musk’s personal account has often been used to draw attention to other controversial accounts that have been linked with the unrest. Mr Musk often replies to posts that interest him with exclamation marks or eyes emoji – and did so last week, in response to posts from Tommy Robinson.

Those replies appear short, but they can help push the posts into the feeds of people who may not have chosen to follow the original accounts. Professor Jones has shown, for instance, that similar replies from Mr Musk have rapidly increased the engagement on posts that might otherwise have stayed relatively obscure.

For the most part, Mr Musk has simply engaged with those controversial posts rather than sharing them himself – but that still means many people on the site will see them, especially given that he is its most-followed user by far. Occasionally, though, he has seemed to boost misinformation of his own.

Over the weekend, Mr Musk posted in response to a video of violent riots in Liverpool. “Civil war is inevitable,” he wrote, one of a number of inflammatory posts he has written in response to the UK’s ongoing unrest. It drew condemnation from the UK government and others.

Mr Musk’s posts on these and similar topics have increased since his takeover of the site. Before he bought it, he had largely resisted endorsing any political view. But in the time since, he has explicitly backed Donald Trump, as well as posting more generally about right-wing talking points.

He has said that his focus on what he called the “woke mind virus” arose in part because of his trans child, whose transition he objected to. But Professor van der Linden also noted that Mr Musk may be “radicalising himself on his own platform”.

“If you look at it over time – he was somebody who was talking about solving climate change and working on big things, and then all of a sudden has drifted into this echo chamber of conspiracy theories and science denial, of extremism and racism. And how do you find yourself in such an environment? What’s changed between then and now is that he’s spending a lot of time on X.

“It’s not a causal experiment, and some people disagree with the echo chamber hypothesis. But I think if there’s one example where it’s pretty clear that somebody’s spending a little too much time on X, maybe it’s Elon Musk.”


