In February of this year, reports surfaced on Twitter and Facebook that the Ukrainian government was carrying out a genocide of civilians. Around the same time, conspiracy theorists began claiming that Ukrainian President Volodymyr Zelenskyy was an agent of the “New World Order”.
These claims have been thoroughly debunked, but not before attracting millions of views and offering a supposed justification for Russia’s invasion of Ukraine. More recently, Russian and Chinese officials have claimed the United States funded biological weapons research in Ukraine.
Social media has played a crucial role in spreading these and other false claims. We have identified a network of dozens of Russian government Twitter accounts using a loophole in the platform’s rules to run a coordinated program of disinformation.
The dangers of misinformation
By “misinformation” we mean factually incorrect material distributed with the intent to disrupt or harm something or someone: a politician, a political party or system, or a way of life.
Since the 2016 US elections, misinformation has been recognized as a growing threat to democracy.
Democracy relies on the ability of citizens to make informed decisions about policies, politics and world affairs. This ability is severely compromised when false and (deliberately) misleading claims are presented as fact.
As we saw during the Covid-19 pandemic, misinformation can also pose a serious threat to public health and safety.
Misinformation itself is not new, but over the past decade it has found a perfect place to grow on social media platforms.
Why Misinformation Loves Social Media
Facebook, Twitter, YouTube and many other platforms are designed as amplification systems. They are built to be open to everyone and to maximize the volume and reach of any type of content.
Anyone with an internet connection can access social media, where all kinds of content can be shared with a speed and reach not possible with traditional media.
The speed at which disinformation is disseminated – particularly via “bot accounts” – makes it difficult for content moderators to keep up. The sensational and partisan nature of much online misinformation also means that internet users and journalists are more likely to spread it without checking it too closely.
Russian accounts on Twitter
Russian government Twitter accounts have played a key role in spreading pro-Russian disinformation. While Twitter has fewer users than Facebook or Instagram, it is a hub site for the production and dissemination of news.
We tracked the Twitter activity of 75 official Russian government accounts and found them to be a major source and amplifier of misinformation. At the time of writing, these accounts have a total of 7,366,622 followers. They were retweeted 35.9 million times, received 29.8 million likes and 4 million replies.
Between Feb. 25 and March 3, 2022, these accounts posted 1,157 tweets, about three-quarters of which were about Ukraine. The accounts attempted to spread false narratives to justify the invasion.
The tweets below show Russian government accounts spreading disinformation stories: delegitimizing Ukraine as a sovereign state, sowing doubt and falsehoods about the Ukrainian government and alleged neo-Nazi infiltration, spreading “whataboutisms” that downplay the invasion of Ukraine by drawing attention to alleged war crimes committed by other countries, and promoting conspiracy theories about US-funded biological weapons research in Ukraine.
A loophole for governments
Twitter has acknowledged the potential for state-affiliated media to spread misinformation, placing warning labels on their content and declining to recommend or amplify their tweets.
However, these rules do not apply to government-controlled accounts that are not labeled as media, such as foreign embassies.
As a result, these accounts can flood the platform with propaganda. This is a critical flaw in Twitter’s moderation practices that has received little attention.
A coordinated network
The 75 Russian government accounts we studied also work together to amplify disinformation. We analyzed their tweets and found that they often retweeted the same content around the same time.
This is a well-known coordinated disinformation tactic known as “astroturfing”, in which a network of accounts repeatedly retweets the same content in concert to amplify it and maximize its reach.
The image above shows a network visualization of coordinated retweeting behavior among the 75 Russian government accounts. Larger nodes coordinate more often, links indicate retweeting within 60 seconds of each other, and colors represent “communities” of accounts that tend to co-retweet particularly frequently.
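The co-retweet analysis described above can be sketched in a few lines of code. The sketch below is illustrative only, assuming a simple list of retweet records (account handle, retweeted tweet ID, timestamp); it is not the authors’ actual pipeline, and the sample handles and timestamps are invented for demonstration. It links two accounts whenever they retweet the same tweet within 60 seconds of each other, which is the edge criterion the visualization uses.

```python
from collections import defaultdict
from itertools import combinations

def coretweet_edges(records, window=60):
    """Build weighted edges between accounts that retweeted the same
    tweet within `window` seconds of each other.

    `records` is a list of (account, tweet_id, unix_timestamp) tuples;
    this schema is an illustrative assumption, not a real API format.
    """
    # Group retweets by the tweet being retweeted.
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in records:
        by_tweet[tweet_id].append((account, ts))

    # Count, per account pair, how often they co-retweeted in-window.
    edges = defaultdict(int)
    for retweets in by_tweet.values():
        for (a1, t1), (a2, t2) in combinations(retweets, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                edges[tuple(sorted((a1, a2)))] += 1
    return dict(edges)

# Hypothetical sample: two accounts retweet tweet 101 thirty seconds
# apart (coordinated); a third retweets it much later (not linked).
sample = [
    ("@mfa_russia", 101, 1000),
    ("@rusembusa", 101, 1030),
    ("@mid_rf", 101, 2000),
]
print(coretweet_edges(sample))
# → {('@mfa_russia', '@rusembusa'): 1}
```

In a full analysis, the resulting weighted edge list would be fed into a network library to compute node sizes (coordination frequency) and community structure, as in the visualization.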
The most central accounts are the two accounts of the Russian Ministry of Foreign Affairs (@mfa_russia and @mid_rf), the Russian mission in Geneva (@mission_russian) and the Russian embassy in the United States (@rusembusa).
What can be done?
Twitter must do more to protect the platform from harmful content produced by state actors. As it stands, government accounts remain free to flood the space with false information.
Twitter’s policies and rules must be modified to adapt to special circumstances such as war. They also need to adapt to non-Western contexts, where misinformation is easily missed by automated moderation tailored to the English language and to US and Western European norms.
Platforms have traditionally drawn inspiration from the techno-libertarian adage that “information wants to be free”. This has turned out to be a disaster for liberal democracy and public health.
Some positive changes have been made, especially after the January 6 Capitol riot in the United States, but the platforms are still built on the principle that the other side should always be heard.
This conception is not simply the result of an impoverished understanding of political theory by young white Silicon Valley entrepreneurs. It is good for business: blocking government disinformation could lead governments to block the platforms in retaliation, cutting off valuable users.
Do your homework
Individual Twitter users can also help stem the spread of state-issued disinformation by doing exactly what conspiracy theorists and disinformation actors have long encouraged: their own research.
Users can and should ask themselves: how accurate is this claim? How can the claim be verified? Who is publishing this information about Russia? What interest does this person or group have in the affairs of the Russian state? Could amplifying this content, even to criticize it, unintentionally spread it further?
If any information cannot be verified or appears to be driven by bias or prejudice, it is in everyone’s interest not to tweet or retweet.
Article by Timothy Graham, Lecturer, Queensland University of Technology, and Jay Daniel Thompson, Lecturer (Early Career Development Fellow) and Program Manager, Professional Communication Program, RMIT University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.