Twitter won fame in the Arab uprisings more than a decade ago as a pivotal source for real-time crisis information, but that reputation has withered after the platform's transformation into a magnet for hate speech and disinformation under Elon Musk.
Historically, Twitter's greatest strength was as a tool for gathering and disseminating life-saving information and coordinating emergency relief during times of crisis. Its old-school verification system meant sources and news were widely trusted.
Now the platform, renamed X by new owner Musk, has gutted content moderation, restored accounts of previously banned extremists, and allowed users simply to purchase account verification, helping them profit from viral -- but often inaccurate -- posts.
The fast-evolving Israel-Gaza conflict has been widely seen as the first real test of Musk's version of the platform during a major crisis. For many experts, the results confirm their worst fears: that changes have made it a challenge to discern truth from fiction.
"It is sobering, though not surprising, to see Musk's reckless decisions exacerbate the information crisis on Twitter surrounding the already tragic Israel-Hamas conflict," Nora Benavidez, senior counsel at the watchdog Free Press, told AFP.
The platform is flooded with violent videos and images -- some real but many fake and mislabeled from entirely different years and places.
Nearly three-fourths of the most viral posts promoting falsehoods about the conflict are being pushed by accounts with verified checkmarks, according to a new study by the watchdog NewsGuard.
In the absence of guardrails, that has made it "very difficult for the public to separate fact from fiction," while escalating "tension and division," Benavidez added.
- 'Fire hose of information' -
That was evident on Tuesday after a deadly strike on a hospital in war-ravaged Gaza, as ordinary users scrambling for real-time information vented frustration that the site had become unusable.
Confusion reigned as fake accounts with verified checkmarks shared images from past conflicts and peddled hasty conclusions drawn from unverified videos, illustrating how the platform had handed the megaphone to paying subscribers, irrespective of accuracy.
Accounts masquerading as official sources or news media stoked passions with inflammatory content.
Misinformation researchers warned that many users were treating an account run by an activist group called "Israel war room" -- stamped with a gold checkmark, which according to X indicates "an official organization account" -- as though it were an official Israeli source.
India-based bot accounts known for anti-Muslim rhetoric further muddied the waters by pushing false anti-Palestinian narratives, researchers said.
Meanwhile, Al Jazeera warned that it had "no ties" to a Qatar-based account that falsely claimed affiliation with the Middle East broadcaster, and urged its followers to "exercise caution."
"It has become incredibly challenging to navigate the fire hose of information -- there is a relentless news cycle, push for clicks, and amplification of noise," Michelle Ciulla Lipkin, head of the National Association for Media Literacy Education, told AFP.
"Now it's clear Musk sees X not as a reliable information source but just another of his business ventures."
The chaos stands in sharp contrast to the 2011 Arab uprisings that prompted a surge of optimism in the Middle East about the potential of the platform to spread authentic information, mobilize communities and elevate democratic ideals.
- 'Break the glass' -
The breakdown of the site's basic functionality threatens to impede or disrupt the humanitarian response, experts warn.
Humanitarian organizations have typically relied on such platforms to assess needs, prepare logistical plans and determine whether an area is safe enough to dispatch first responders. Human rights researchers also use social media data to investigate possible war crimes, said Alessandro Accorsi, a senior analyst at the Crisis Group.
"The flood of misinformation and the limitations that X put in place for access to their API," which allow third-party developers to gather the social platform's data, had complicated those efforts, Accorsi told AFP.
X did not respond to AFP's request for comment.
The company's chief executive Linda Yaccarino has signaled that the platform was still serious about trust and safety, insisting that users were free to adjust their account settings to enable real-time sharing of information.
But researchers voiced pessimism, saying the site has abandoned efforts to elevate top news sources. Instead, a new ad revenue sharing program with content creators incentivizes extreme content designed to boost engagement, critics say.
Pat de Brun, head of Big Tech Accountability at Amnesty International, said X should use every tool available, including deploying so-called "break the glass" measures aimed at dampening the spread of falsehoods and hate speech.
"Platforms have clear responsibilities under international human rights standards," he told AFP.
"These responsibilities are heightened in times of crisis and conflict."