Cyabra’s AI tackles fake news, disinformation on social media channels

Social media is a hotbed of false information, fake accounts and bots. Unique artificial intelligence software from Israeli startup Cyabra quickly identifies fake news and its purveyors, Sara Miller reports in NoCamels.
Cyabra is a social threat intelligence firm that works to expose online risk to individuals, institutions, corporations and even governments by monitoring and analyzing billions of interactions in real time.
The debate over bots on Twitter escalated last year with Elon Musk’s $44 billion purchase of the social media platform. Musk turned to Cyabra for an assessment of how many of the platform’s users were actually bots, and the startup found that fully 11% of Twitter accounts were bots, which lend credibility to fake posts by sharing them multiple times.
In one notorious case of fake posts that Miller recounts, a Twitter account claiming to be owned by pharmaceutical giant Eli Lilly posted that it was giving away a life-saving diabetes treatment for free. But the tweet, which went viral, did not come from Eli Lilly at all, and the drugmaker wasn’t handing out free insulin. The fake news, designed to draw attention to high US health care costs, was so convincing that it wiped 4.5% off the company’s share value.
Cyabra’s algorithm uses as many as 600 different behavioral parameters to weed out problematic social accounts and others that follow or engage with them. Unlike other companies that look at social media activity, it identifies not only the account where the fake news originates, but also the fake accounts that promote the disinformation.
"We’re looking for the threats, and then seeing which accounts are associated"
By understanding these online narratives, trends, and communities, the company provides users with the information required to stop the spread of disinformation.
“We’re looking for the threats, and then seeing which accounts are associated,” says Rafi Mendelsohn, Cyabra’s vice president for marketing.
One way to detect whether an account is authentic is to tally the number of times it posts in a single day.
“If you have posted 23 of the 24 hours of the day, that’s an indication that there’s non-human behavior,” Mendelsohn says. “We can say, okay for that parameter, that’s a bit of a red flag.”
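As a rough illustration of how a behavioral parameter like this might be scored (the function name, threshold, and scoring logic below are hypothetical examples, not Cyabra’s actual code), a check on how many distinct hours of the day an account posted in could look something like this:

```python
from datetime import datetime

def active_hours_flag(post_times: list[datetime], threshold: int = 20) -> bool:
    """Flag an account if it posted in `threshold` or more distinct hours of a day.

    Humans rarely post around the clock; activity spread across nearly every
    hour of the day is one possible indicator of non-human behavior.
    """
    hours_active = {t.hour for t in post_times}
    return len(hours_active) >= threshold

# Example: an account that posted in every hour of the day except 3am
posts = [datetime(2023, 5, 1, hour, 15) for hour in range(24) if hour != 3]
print(active_hours_flag(posts))  # True -> "a bit of a red flag" for this parameter
```

In a system like the one Mendelsohn describes, a single flag like this would presumably be only one of hundreds of parameters weighed together before an account is judged inauthentic.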
In examining malicious social activity surrounding Russia’s February 2022 invasion of Ukraine, Cyabra identified as many as 60 fake accounts on Twitter that it believes originated in Russia. The accounts posed as Polish users and posted and amplified negative content in Polish about Ukraine, trying to create divisions between the two nations at a time when Poles were welcoming Ukrainians fleeing the fighting.
“I suppose it’s useful to think of it as a social media search engine,” Mendelsohn says. “It’s very difficult to do that manually, because it’s like a fire hose of information coming your way.”
Cyabra’s clients have included the US State Department, government agencies in Israel, Singapore, South Africa, and the UAE, as well as commercial clients like the global advertising agency TBWA, WarnerMedia, HBO Max, CNN’s Newsroom, and several large financial institutions.
The US State Department worked with Cyabra to track foreign interference in elections, while the Taiwanese government used the company’s technology to battle vaccine disinformation during the Covid pandemic.
Two of Cyabra’s founders served in information warfare units in the Israel Defense Forces. The company is currently raising funds on the OurCrowd platform.
