The Ethics of Consuming and Sharing False Information During Times of Conflict

TIS Staff
October 31, 2023

In the three weeks since war began between Israel and Hamas, social media has been flooded with images and stories of attacks, many of which have proved false. Within hours of Hamas' surprise attack on October 7, 2023, for example, screen grabs from a popular video game were shared by thousands of social media users as though they depicted real scenes of violence against Israeli troops in Gaza. Ten days later, a real explosion at a hospital in Gaza spurred further sharing of such spurious images to buttress various claims and counterclaims about responsibility for the casualties.

It's not just this war. Social media platforms' policies against violent and misleading content are unevenly enforced during armed conflicts, and in the context of some ongoing wars they have even been relaxed: Facebook, for example, temporarily allowed posts calling for violence against Russian troops and paramilitary groups occupying parts of Ukraine. Taken together, these processes and policies have opened the door to substantial misinformation and disinformation about armed conflict.

Users are not powerless, however. Platform algorithms are built to maximize engagement, so liking, sharing or commenting on violent content signals that more of it should be delivered. Hiding, reporting or simply disengaging from violent content, by contrast, tends to lead to fewer such messages coming in. It may also reduce the odds that such content will reach others. If one knows that a Facebook friend or TikTok content creator has shared false information before, one can block that friend or unfollow that creator. Because users have these means of influencing the images they receive, it is reasonable to assign them some responsibility for algorithmically amplified misinformation and disinformation.

Altering patterns of engagement with digital content can decrease users' exposure to misinformation in wartime. But how can users verify the images they do receive before directing others to them? One simple protocol, promoted by educators and public health groups, is known by the acronym SIFT. Its four stages ask users to stop, investigate the source of a message, find better coverage, and trace quotes and claims back to their original contexts.
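SIFT is a habit of mind rather than software, but the steps can be framed as a concrete checklist. The following minimal Python sketch is our own illustration, not anything published by the protocol's authors, and the prompt wording is an assumption:

```python
# A toy sketch of the SIFT checklist as a pre-sharing prompt.
# SIFT itself is a human practice, not an algorithm; this script
# only serves as a reminder before hitting "share".

SIFT_STEPS = [
    ("Stop", "Pause. Do you know and trust the source of this post?"),
    ("Investigate the source", "Who originally published it, and what is their track record?"),
    ("Find better coverage", "Do reputable outlets report the same claim?"),
    ("Trace to the original", "Can you locate the quote, image or clip in its original context?"),
]

def sift_prompt() -> bool:
    """Walk through the four SIFT steps; return True only if every step checks out."""
    for name, question in SIFT_STEPS:
        answer = input(f"[{name}] {question} (y/n) ").strip().lower()
        if answer != "y":
            print(f"Stopped at '{name}'. Consider not sharing yet.")
            return False
    print("All SIFT steps passed. Share thoughtfully.")
    return True

if __name__ == "__main__":
    sift_prompt()
```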
Images, like quotes, can often be traced to their original contexts. Google makes available its reverse image search tool, which allows users to select an image, or parts of it, and find where else it appears online.
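For an image that is already hosted at a public URL, a reverse search can even be scripted. The sketch below is a minimal Python example; the Google Lens and TinEye query patterns it uses are assumptions based on their publicly visible URL formats and may change without notice:

```python
# A minimal sketch: build reverse-image-search URLs for an image that
# is already hosted online, then open them in the default browser.
# The endpoint patterns below are assumptions and may change over time.

import webbrowser
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict[str, str]:
    """Return reverse-image-search URLs keyed by provider for a hosted image."""
    encoded = quote(image_url, safe="")  # percent-encode the full URL
    return {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
    }

if __name__ == "__main__":
    # Hypothetical image URL, for illustration only.
    image = "https://example.com/suspicious-war-photo.jpg"
    for provider, url in reverse_search_urls(image).items():
        print(f"{provider}: {url}")
        webbrowser.open(url)
```

Running the script simply opens each provider's results page in a browser; judging whether the earliest match really is the image's original context remains a human task.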
No technique or protocol will give users absolute control over the images they see in wartime or provide complete assurance against sharing false information. But by understanding their own power to influence content, users may be able to mitigate these risks and promote a more truthful future.