Social media companies are removing videos and images that, according to a new report from Human Rights Watch, could be vital in prosecuting serious crimes.

Facebook, YouTube and Twitter use artificial intelligence algorithms to remove videos that are considered inappropriate or illegal. Human Rights Watch says evidence that could be vital is being lost or destroyed.

The report points to the Russian missile system from which, prosecutors say, the missile that brought down Malaysia Airlines Flight MH17 was launched in 2014. Videos of the system were collected from social media sites by the Bellingcat organization, whose evidence was used by the multinational team leading the investigation. Russia denies responsibility.

“What we do know from Bellingcat is that in the later stages of the investigation, they went back to look at the sources, some of the social media posts they would have used to support their investigations, in order to present them to the judicial authorities in the Netherlands. But they said they had been removed,” said Belkis Wille of Human Rights Watch.

Evidence from social media plays a central role in many investigations, such as the videos released by Amnesty International claiming to show abuses committed by the Nigerian military during its offensive against the terrorist group Boko Haram. In its new report, ‘Video Unavailable’, Human Rights Watch warns that vital evidence is being erased.

“What we have started to notice in recent years, especially since 2017, is that we can see a video where, let’s say, soldiers execute someone, or an ISIS propaganda video, and if fifteen minutes or an hour later we came back to see the video again, it had suddenly disappeared,” says Belkis Wille.

Social media companies told Human Rights Watch that they are required by law to remove material that could be offensive or incite terror, violence or hatred. In addition to human moderators, many also use artificial intelligence algorithms to delete material.

“What we are looking for is the creation of a kind of global mechanism, a kind of archive or library,” says Belkis Wille.

Human Rights Watch says it is in dialogue with social media companies to create such an archive.

Twitter said it is unable to provide civil society organizations with access to users’ content without a proper legal order. Facebook and YouTube had not responded to VOA’s requests for comment as of this writing.
