As the riots raged in the U.K., Elon Musk began making incendiary comments about the situation, including the statement: “Civil war is inevitable.” Musk is the owner of X, the social media platform formerly known as Twitter.
Aytug Can Sencar | Anadolu | Getty Images
LONDON — Prime Minister Keir Starmer’s Labour government is considering ways to toughen up internet safety regulations in the U.K. after misinformation sparked a spate of anti-immigration protests and X owner Elon Musk made incendiary comments in posts that were viewed by millions of people.
Two industry sources with knowledge of the matter told CNBC that following the events of the past two weeks, Labour is considering a review of the Online Safety Act — legislation that requires tech giants to prevent the spread of illegal and harmful content on their platforms.
These sources were not authorized to speak publicly about the proposed changes, as the conversations surrounding revamped online safety laws are ongoing.
In recent days, top officials have said that the government may review the Online Safety Act to make it tougher on disinformation, hate speech and incitement to violence.
“There are obviously aspects of the Online Safety Act that haven’t come into effect yet. We stand ready to make changes if necessary,” Nick Thomas-Symonds, minister for the Cabinet Office, told CNBC sister network Sky News.
Media and telecommunications regulator Ofcom has been unable to act against social media platforms for allowing hate speech and other content that would violate the law, because the legislation has not yet fully come into force.
What is the Online Safety Act, exactly? And what could it mean for tech firms like Elon Musk’s X? CNBC runs through all you need to know.
What is the Online Safety Act?
The Online Safety Act is a landmark piece of legislation in the U.K. that seeks to force social networks and video streaming media companies to rid their platforms of illegal content.
The regulation contains new duties requiring tech companies to actively identify, mitigate and manage the risks of harm from such material appearing on their platforms.
There are several examples of content that, if reported, could make a company liable for criminal sanctions. These include child sexual abuse, fraud, racially or religiously aggravated offenses, incitement to violence, and terrorism.
Once the rules take effect, Ofcom will have the power to levy fines of as much as 10% of companies’ global annual revenues for breaches. In cases of repeat breaches, individual senior managers could even face jail time.
Ofcom has said the new duties on tech firms won’t fully come into force until 2025, once it’s finished consulting on codes of conduct for the companies.
Why are there calls for the law to change?
Two weeks ago, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport in Merseyside. Three girls were killed in the attack.
Shortly after the attack, social media users were quick to falsely identify the perpetrator as an asylum seeker who arrived in the U.K. by boat in 2023.
Posts on X sharing the fake name of the perpetrator were widely shared and viewed by millions. That in turn helped spark far-right, anti-immigration protests, which subsequently descended into violence, with shops and mosques being attacked and bricks and petrol bombs being hurled.
Riot police hold back protesters near a burning police vehicle after disorder broke out on July 30, 2024 in Southport, England. Rumours about the identity of the 17-year-old suspect after the deadly stabbing attack in Southport sparked a violent protest, with unrest spreading across England and Northern Ireland.
Getty Images
As the riots raged on, Musk, who owns X, began making comments about the situation in the U.K. He suggested the riots could end up resulting in a civil war, saying in an X post: “Civil war is inevitable.” His comments have been condemned by the U.K. government.
When questioned during a press briefing about Musk’s remarks, the official spokesperson for Prime Minister Keir Starmer said that there was “no justification” for such statements.
Musk also shared an image of a fake headline that was made to look like it had come from “The Telegraph” newspaper’s website, falsely claiming the U.K. was building “detainment camps” on the Falkland Islands for rioters. He has since deleted it.
Riot police officers push back anti-migration protesters on Aug. 4, 2024 in Rotherham, U.K.
Christopher Furlong | Getty Images
These events have sparked calls for the government to revisit the Online Safety Act to ensure it is implemented faster and made more effective at preventing such events in the future.
How could the law change?
So far, it is not yet clear how — or even when — the Online Safety Act will be revisited. One industry source told CNBC that the government is “trying to work out what has happened over the last few days and focused on the response.”
“I don’t think there is much policy thinking has been done yet here,” the source added.
New measures on disinformation are likely to be examined, among other options, but the government has not yet come to any “concrete views” on how the legislation should change.
A second industry source said that the government is likely to review the legislation only once it is in force, likely in spring 2025. “I think this is a way of sounding tough but putting off a difficult decision,” they told CNBC. “It’s by no means an easy fix. It’s incredibly hard to do.”
A spokesperson for the Department for Science, Innovation and Technology — which is responsible for overseeing online safety regulations — told CNBC: “The internet cannot be a haven for those seeking to sow division in our communities.”
“Once fully implemented, the Online Safety Act will require platforms to take action to address illegal content. The Act will also require the biggest platforms to enforce their own terms of service, including where these prohibit the spreading of misinformation,” the spokesperson said.
“Our immediate focus is getting the Online Safety Act implemented quickly and effectively. However, our message to social media companies remains clear: there is no need to wait—you can and should take immediate action to protect your users,” the spokesperson added.
It’s worth noting that Labour had already committed to toughening the Online Safety Act in its election manifesto. Proponents of a review say the act needs to be stricter on social media platforms to ensure they implement a robust response to misinformation, hate speech, and incitement to violence.
“I think what the government should do very quickly is check if it is fit for purpose. I think it’s not fit for purpose,” Mayor of London Sadiq Khan told the Guardian newspaper last week.
Joe Ondrak, research and tech lead for the U.K. at tech company Logically, told CNBC that there are aspects of the Online Safety Act that address disinformation — but they’re far from perfect.
While the law “does have some very specific provisions about certain types of disinformation in it,” including disinformation spread by foreign state actors, it “doesn’t cover really comprehensively domestic disinformation,” Ondrak told CNBC.
– CNBC’s Sophie Kiderlin contributed to this report