'Bots used to bias online political chats'

Image caption Bots are pervading many conversations about politics on social media, researchers say

If you've been chatting about politics on social media recently, there's a good chance you've been part of a conversation that was manipulated by bots, researchers say.

The Oxford Internet Institute (OII) has studied such discussions related to nine places - the US, Russia, Ukraine, Germany, Canada, China, Taiwan, Brazil and Poland - on platforms including Twitter and Facebook.

It claims that in every election, political crisis and national security-related discussion it looked at, social media opinion had been manipulated in some way.

Bots in propaganda

Bots - programs that perform simple, repetitive tasks - are integral to what the OII calls "computational propaganda": the deliberate distribution of misleading information on social media by various means.

Bots can communicate with people - retweeting fake news, for example - but they can also exploit social network algorithms to get a topic to trend.

They can be fully or only partly automated. A single individual can use them to create the illusion of large-scale consensus. They can also be used to stifle critics by mobbing individuals or swamping hashtags.

The methods the OII used to identify bots varied from one country study to the next.

The institute has, however, been criticised in the past for labelling as "bots" social media accounts whose owners insisted they were nothing of the kind.
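
Frequency-based heuristics - for example, flagging accounts that post political content at rates implausible for a human - are a common way such studies identify "high automation". Purely as an illustration (the 50-posts-a-day threshold and the input format below are assumptions, not the OII's published method), a minimal version might look like this:

```python
# Minimal sketch of a frequency-based "high automation" heuristic.
# The 50-posts-a-day threshold and the (account, timestamp) input
# format are illustrative assumptions, not the OII's actual method.
from collections import Counter
from datetime import datetime

def flag_highly_automated(tweets, threshold=50):
    """Return accounts exceeding `threshold` posts on any single day."""
    per_day = Counter()
    for account, ts in tweets:
        per_day[(account, ts.date())] += 1
    return {account for (account, day), n in per_day.items() if n >= threshold}

# Example: 60 posts in one morning is flagged; three posts is not.
sample = [("@suspect", datetime(2017, 6, 1, 9, m)) for m in range(60)]
sample += [("@human", datetime(2017, 6, 1, 12, 0))] * 3
print(flag_highly_automated(sample))  # {'@suspect'}
```

Real studies layer further signals on top, such as retweet ratios and account age, which is partly why different methodologies can disagree about which accounts are bots.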

'Anyone can launch a bot on Twitter'

Bots are built by authoritarian governments, by corporate consultants who hire out their expertise, or by individuals who have the know-how, says the OII.

"Because the Twitter API [application programming interface - the means by which one bit of software can talk to another] is open, anyone can launch a bot on Twitter," explained director of research for the project, Samuel Woolley.


While bot activity and other propagandistic behaviour were specific to each country's political context, the study also identified several common trends.

In every country, it said, civil society groups struggled to protect themselves against misinformation campaigns.

And in authoritarian countries, it added, social media was one of the key ways the authorities had tried to retain control during political crises.

The frontline of disinformation

Computational propaganda has been particularly prevalent in Ukraine, the research suggests.

There had been "significant Russian activity... to manipulate public opinion", the report said, adding that Ukraine had become "the frontline of numerous disinformation campaigns" since 2014.

The typical way this worked, it explained, was that a message would be planted in an article on an online news outlet or blog.

This was possible, it said, "because a large number of Ukrainian online media... publish stories for money".

These would then be spread on social media via automated accounts and potentially picked up in turn by "opinion leaders", with large followings of their own.

With enough attention, the message would ultimately be picked up by mainstream media, including TV channels.

The study provides an example related to the shooting down of Malaysia Airlines flight MH17 in 2014 to illustrate how such campaigns work.

Image caption Russia has heatedly disputed official investigations into the downing of flight MH17

A conspiracy theory claiming that the plane was shot down by a Ukrainian fighter jet originated with a tweet from a non-existent Spanish air traffic controller called Carlos (@spainbuca).

The post was then retweeted by others and was picked up by Russia's RT television network as well as other Russian news outlets.

Ukraine's information ministry later revealed the account had been used to retweet pro-Russian messages earlier in the year.

In Russia itself, the OII suggested that about 45% of politics-focused Twitter accounts were highly automated, "essentially reproducing government propaganda".

'Tools against democracy'

It remains difficult to quantify the impact such bots have had.

But the OII's researchers believe that "computational propaganda is now one of the most powerful tools against democracy".

They have called on social media firms to do more to tackle the issue.

Lead researcher Prof Philip Howard proposed several steps that could be taken by the tech firms, including:

  • making the posts they select for news feeds more "random", so as not to place users in bubbles where they see only like-minded opinions (a sketch of this idea follows the list)
  • giving news organisations a trustworthiness score
  • allowing independent audits of the algorithms they use to decide which posts to promote
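
As a purely hypothetical sketch of the first suggestion, a platform could blend its ranked, personalised feed with posts drawn at random from a wider pool, so that some out-of-bubble content always gets through. The 20% mixing rate, function names and data shapes below are illustrative assumptions, not anything the platforms have said they do:

```python
# Hypothetical sketch of a "more random" news feed: dilute the
# personalised ranking with randomly drawn posts. The 20% share and
# all names here are illustrative assumptions.
import random

def mix_feed(ranked_posts, wider_pool, random_share=0.2, size=20):
    n_random = int(size * random_share)
    feed = ranked_posts[: size - n_random]          # top-ranked posts
    feed += random.sample(wider_pool, min(n_random, len(wider_pool)))
    random.shuffle(feed)  # avoid signalling which items were injected
    return feed

print(mix_feed([f"ranked-{i}" for i in range(30)],
               [f"out-of-network-{i}" for i in range(50)]))
```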

Prof Howard cautioned, however, against over-regulation, warning that heavy-handed rules could suppress political conversation on social media altogether.

Image caption Facebook founder Mark Zuckerberg has pledged to reduce the sharing of fake news on the platform

In response, Twitter reissued a statement saying that third-party research into bots on its platform was "often inaccurate and methodologically flawed".

It added that it strictly prohibited bots and would "make improvements on a rolling basis to ensure our tech is effective in the face of new challenges".

A spokeswoman from Facebook was unable to provide comment.
