In recent weeks, the conflict between Israel and Hamas has gripped the world’s attention, but the struggle is not confined to physical battlegrounds. A parallel war is unfolding online, where Hamas and its sympathizers continue to disseminate content across social media platforms despite bans and moderation efforts by tech giants. This persistent digital presence raises questions about the efficacy of content moderation and the limits of free expression on the internet.
Hamas, a Palestinian political and military organization, has faced restrictions on major social media platforms such as Facebook, Instagram, TikTok, YouTube, and X (formerly Twitter). These bans aim to prevent the spread of extremist content and accounts affiliated with the group. However, reports suggest that this has not deterred Hamas and its supporters from building a significant online following.
According to a review by The New York Times, several social media accounts sympathetic to Hamas have seen a surge in followers since the outbreak of hostilities on October 7. On Telegram, a messaging app known for its limited content moderation, one Hamas-associated account, “Gaza Now,” grew to more than 1.3 million followers this week, up from around 340,000 before the recent attacks.
What’s even more alarming is the content being shared on these platforms. Jonathan A. Greenblatt, CEO of the Anti-Defamation League, notes, “We’ve seen Hamas content on Telegram, like bodycam footage of terrorists shooting at Israeli soldiers. We’ve seen images not just on Telegram but on the other platforms of bloodied and dead soldiers.”
This persistent flow of pro-Hamas content poses a challenge to technology companies trying to balance their responsibility to curb false or extremist material against respect for freedom of expression. In previous crises, such as the Rohingya genocide in Myanmar, social media platforms similarly struggled to moderate content without suppressing legitimate discourse.
Experts argue that Hamas and its affiliated accounts are capitalizing on these challenges to evade moderation and disseminate their messages to a broader audience. Most social media platforms have long-standing policies against hosting content from terrorist organizations and extremist groups, and they have enforced these policies by banning Hamas-related accounts and content.
For example, Gaza Now, a Facebook account with over 4.9 million followers, was banned shortly after The Times contacted Meta, Facebook’s parent company. Gaza Now shared accusations against Israel and encouraged its followers to subscribe to its Telegram channel, where much of the gruesome content was posted. Similar Hamas-affiliated accounts on other platforms also faced removal.
Telegram stands out as a significant hub for pro-Hamas messaging. It hosts an official account for the Al-Qassam Brigades, the military wing of Hamas, whose follower count has tripled since the conflict began.
Pavel Durov, CEO of Telegram, acknowledged the presence of harmful content but chose not to outright ban Hamas-related accounts, citing their utility for researchers and journalists. He stated, “While it would be easy for us to destroy this source of information, doing so risks exacerbating an already dire situation.”
In contrast, X, the platform owned by Elon Musk, faced an influx of falsehoods and extremist content during the conflict. Researchers noted that posts supporting terrorist activities on X received over 16 million views in a single day. The European Union expressed concern over whether X had violated European laws governing the spread of harmful content on social networks, but it did not receive a response from the platform.
It is not only Hamas-affiliated accounts that have faced bans. Some pro-Palestinian users have reported that Facebook and Instagram suppressed or removed their posts even when those posts did not violate platform rules. Meta acknowledged that some content was inadvertently removed because of a bug in Instagram’s systems.
These episodes underline the difficulties of content moderation, a task that relies on a mix of human moderators and algorithms, often without consistent coordination between platforms. Kathleen Carley, a researcher and professor at Carnegie Mellon University, emphasized the need for consistent moderation across all major platforms to prevent a “Whac-a-Mole” scenario in which banned accounts simply resurface elsewhere.
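Carley’s “Whac-a-Mole” point is, at bottom, an architectural one: each platform maintains its own independent blocklist, so a ban in one place simply redirects an audience elsewhere. The sketch below is a minimal, hypothetical illustration of that coordination gap; the platform objects, blocklists, and `moderate` function are invented for illustration and do not represent any company’s actual moderation system.

```python
# A minimal, hypothetical sketch of why uncoordinated moderation produces a
# "Whac-a-Mole" effect. Platform names, blocklists, and logic are simplified
# illustrations, not any company's real system.

from dataclasses import dataclass, field


@dataclass
class Platform:
    name: str
    banned_accounts: set[str] = field(default_factory=set)

    def moderate(self, account: str) -> str:
        """Return the action this platform takes for a given account."""
        return "removed" if account in self.banned_accounts else "allowed"


# Each platform enforces only its own, independently maintained blocklist.
facebook = Platform("Facebook", banned_accounts={"example_banned_channel"})
telegram = Platform("Telegram")  # minimal moderation: empty blocklist

account = "example_banned_channel"
for platform in (facebook, telegram):
    print(f"{platform.name}: {account} -> {platform.moderate(account)}")

# Output:
#   Facebook: example_banned_channel -> removed
#   Telegram: example_banned_channel -> allowed
#
# Because the blocklists are not shared, removal on one platform pushes the
# audience to another, which is the dynamic the article describes.
```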
As the conflict between Israel and Hamas continues, the online battleground remains as contentious as ever. Tech giants find themselves walking a fine line between preventing extremist content and preserving free expression on their platforms, all while facing criticism from various quarters. The evolving dynamics of the online world raise pressing questions about the responsibilities and limitations of social media companies in an era where information dissemination knows no borders.