Social media companies will have one hour to delete terrorist content or face hefty fines under new rules being drawn up by the European Union.
Platforms such as Facebook, YouTube and Twitter are in the sights of the legislation, which is expected to be unveiled next month, The Financial Times reports.
Previously, Brussels allowed the social media giants to take a voluntary approach to identifying and removing terrorist propaganda and material depicting extreme violence. The new rules, however, will give each company only one hour after material has been reported to carry out the removal. Companies that do not adhere to the time limit could then face fines from Brussels.
According to the EU’s commissioner for security, Julian King, the tech companies had not made enough progress on the speedy removal of terrorist content. King said this meant the EU needed to “take stronger action in order to better protect our citizens”.
“We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” he said.
As we remember those who lost their lives in the Catalonia attacks a year ago, we can’t afford to become complacent in the face of the terror threat – we continue to work on all fronts to tackle terrorism in Europe, including by countering terrorist content online #SecurityUnion
— Julian King (@JKingEU) August 16, 2018
Legislation on the issue passed through the European Parliament in March, complete with the one-hour window for content deletion. However, those guidelines were only voluntary. The updated rules are likely to include the threat of fines for companies that fail to meet the target.
The draft regulation, which will need approval from a majority of EU member states to take effect, is intended to create legal standardization for websites in their dealings with terrorist content.
“The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform,” King said.
The issue of terrorist content on social media has reared its head in recent years following attacks in London, Paris and Berlin.
For its part, Google reported that more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, with half of the videos having fewer than 10 views. Meanwhile, Facebook said it had removed the vast majority of 1.9 million examples of Isis and al-Qaeda content that was detected on the site in the first three months of this year.
It should also be noted that the EU continues to opt for self-regulation by social media platforms on hate speech and fake news. Time will tell whether regulation of these sensitive subjects will also change.