TikTok has written to social media companies asking them to team up to remove content that depicts self-harm or suicide more quickly.
It comes after a clip of a man killing himself was widely circulated on its platform and seen by young children.
Theo Bertram, TikTok’s head of public policy for Europe, said the way the video was shared suggested a co-ordinated attack, possibly from bot accounts.
He declined to discuss the ongoing negotiations over the future of TikTok.
Mr Bertram was being questioned by MPs on the Digital, Culture, Media and Sport Committee, who are investigating how social media platforms deal with online harms.
They were also keen to hear more about the future of the company outside China, in the wake of President Donald Trump’s threat to ban the app in the US unless a deal is struck with American companies.
Owner ByteDance is currently in talks with Oracle and Walmart over its future, but reports suggest that China is unlikely to approve what it sees as an unfair deal.
Mr Bertram said he was not able to comment on the details of the ongoing negotiations.
“I think there are broader concerns around China and China’s role in the world. And I think that these concerns are projected on to TikTok and don’t think they are always fairly projected,” he told MPs.
When pressed on how the platform handled content sensitive to the Chinese government, such as the protests in Hong Kong and the treatment of the Uighur Muslims, he told MPs: “TikTok is a business outside of China and is led by European management that have the same concerns and the same world view that you do and we care about our users.”
Some of those users were recently traumatised by a clip circulating on the platform showing a US man killing himself, and Mr Bertram acknowledged that the firm needed to “do better”.
Mr Bertram explained that the firm had seen a huge spike in the sharing of the clip a week after the original broadcast took place on Facebook Live.
“Following an internal review, we found evidence of a co-ordinated effort by bad actors to spread this video across the internet and platforms, including TikTok.
“And we saw people searching for content in a very specific way. Frequently clicking on a profile of people as if they’re kind of expecting that those people had uploaded a video.”
He said the firm had written to the chief executives of Facebook, Instagram, Google, YouTube, Twitter, Twitch, Snapchat, Pinterest and Reddit.
“What we’re proposing is that, the same way these companies already work together around child sexual imagery and terrorist-related content, we should now establish a partnership around dealing with this type of content.”
And for TikTok itself, he promised “changes to machine learning and emergency systems”, as well as improvements to how the algorithms that detect such content work with the firm’s content moderators.
He was also asked about reports that TikTok had removed content relating to disabilities or LGBTQ issues.
He explained that “sadly” there had once been a policy of not promoting content that might encourage bullying, which had limited content from people with disabilities and LGBTQ content.
“That is not our policy,” he said.
He was less clear on whether the firm restricted the promotion of LGBTQ hashtags in Russia, saying: “Not as far as I’m aware… The only time we will remove that content is when we have a legal requirement to do so.”