Social Media Platforms Are Deleting Potential Evidence of Human Rights Abuses
Tech companies are deleting content that could serve as evidence of human rights abuses. Using artificial intelligence (AI), platforms such as Meta (formerly Facebook) and YouTube remove graphic videos, often without archiving them, a practice that could hinder future prosecutions.
While Meta and YouTube claim to strike a balance between bearing witness and protecting users from harmful content, Alan Rusbridger of Meta’s Oversight Board criticizes the industry for being excessively cautious in its moderation efforts.
Although the platforms have exemptions for graphic material in the public interest, those exemptions offer little protection when moderation is automated. AI algorithms can efficiently remove harmful and illegal content at scale, but when it comes to violent images from war zones, machines lack the nuanced understanding needed to recognize footage that may document human rights violations.
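To make that limitation concrete, here is a minimal, hypothetical sketch of a score-threshold takedown rule. The classifier score, threshold value, and field names are assumptions for illustration only, not a description of Meta’s or YouTube’s actual systems; the point is that a model which sees only a “graphic content” score has no input representing evidentiary value.

```python
# Hypothetical sketch of automated moderation, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Video:
    id: str
    graphic_score: float   # assumed output of a violence classifier, 0.0-1.0
    public_interest: bool  # evidentiary context the classifier never sees

REMOVE_THRESHOLD = 0.8     # assumed cutoff for automatic takedown

def moderate(video: Video) -> str:
    # The rule keys on the score alone: footage documenting a potential war
    # crime and gratuitous violence can score identically, so both are
    # removed within seconds of upload unless a human overrides the decision.
    if video.graphic_score >= REMOVE_THRESHOLD:
        return "removed"
    return "kept"

witness_footage = Video("attack-on-civilians", graphic_score=0.93, public_interest=True)
print(moderate(witness_footage))  # "removed" -- the public_interest flag is ignored
```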
In an interview, Ihor Zakharenko, a former travel journalist in Ukraine, said he has been documenting attacks on civilians since the Russian invasion. When he tried to upload videos showing the aftermath of Russian troops shooting civilians as they attempted to escape occupation, Facebook and Instagram promptly took them down.
Zakharenko’s footage was then uploaded to Instagram and YouTube using dummy accounts. Instagram removed three of the four videos within a minute, while YouTube initially applied age restrictions to three of the videos but removed all of them 10 minutes later.
Videos Containing Evidence of War Crimes Removed, Appeal Rejected
Every attempt to publish the videos ultimately failed, and an appeal to reinstate them on the grounds of their importance in documenting war crimes was turned down.
Prominent figures in the industry stress the pressing need for social media companies to prevent such critical information from disappearing.
“It is understandable why they have trained their machines to swiftly take down anything that appears distressing or challenging,” stated Mr. Rusbridger in an interview. He is a member of the Meta Oversight Board, established by Mark Zuckerberg, which functions as an independent “supreme court” for Meta, the parent company of Facebook and Instagram.
Mr. Rusbridger, a former editor-in-chief of The Guardian, adds, “The next crucial step for them is to develop mechanisms, whether through human intervention or AI, that can enable more balanced decision-making.”

Beth Van Schaack, the US Ambassador-at-Large for Global Criminal Justice, acknowledges the right of tech firms to moderate content but raises concerns when vital information suddenly disappears.