This Week in Info War

What to expect when you're expecting bots

Late last week, the NATO Strategic Communications Centre of Excellence in Latvia released an illuminating report on “robotrolling,” the coordinated use of fake, automated accounts on social media. The findings are worth considering. Its authors report that two of every three Twitter users writing in Russian about NATO’s presence in Eastern Europe are “bot,” or robotic, accounts. The high figure is partially explained by the fact that Russian-language bots mostly repost content from traditional pro-Kremlin media outlets controlled by the state. “By implication, even automatically generated Russian news-spam echoes state-sanctioned content,” the report states. Its authors surveyed 32,000 tweets mentioning NATO and at least one of four countries (Estonia, Latvia, Lithuania and Poland) posted between 1 March and 30 August 2017.
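For readers curious how such a sample might be assembled, here is a minimal sketch, in Python, of the selection rule the report describes: keep tweets that mention NATO and at least one of the four countries within the study window. The field names and the plain keyword matching are our illustrative assumptions, not the report’s actual pipeline, which would also have to match Russian-language spellings such as “НАТО.”

```python
# Hypothetical sketch of the report's stated sampling rule. Field names
# ("text", "created_at") are illustrative assumptions, not the report's
# actual data schema.
from datetime import datetime

COUNTRIES = {"estonia", "latvia", "lithuania", "poland"}
START = datetime(2017, 3, 1)
END = datetime(2017, 8, 30)

def in_sample(tweet: dict) -> bool:
    """Return True if a tweet matches the report's stated selection criteria."""
    text = tweet["text"].lower()
    when = datetime.fromisoformat(tweet["created_at"])
    return (
        "nato" in text
        and any(country in text for country in COUNTRIES)
        and START <= when <= END
    )

# Example with a toy record:
sample = {"text": "NATO exercises begin in Estonia", "created_at": "2017-06-12"}
print(in_sample(sample))  # True
```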

The report’s other key findings:

  • Only 16 percent of the Russian-language tweets surveyed were posted by humans rather than automated accounts, and 70 percent of accounts active in Russian were automated, compared with only 28 percent of English-language accounts.
  • Estonia, followed by Latvia, was the country most heavily targeted by bots; penetration into Lithuania and Poland was significantly lower.
  • Russian-language tweets focused primarily on “news coverage of military exercises, troop deployments and minor incidents involving army personnel.”
  • English-language tweets focused primarily on “U.S. domestic and foreign policy issues.”

The report’s findings contrast with previous analyses, which took a broader geographic focus and suggested that the primary function of pro-Russian bots is “to mask or dilute inconvenient trending topics.” Instead, the NATO report’s authors find that “this has not happened for our area of interest. The ‘Twitter conversation’ about NATO-related news is mainly bots talking to other bots, bots promoting third-party content and bots incrementally building more believable profiles.”

The report underscores a crucial point that CEPA’s StratCom program has previously identified: the Russian disinformation machine deploys different resources and tactics in countries with a sizeable Russian-speaking minority, such as Estonia and Latvia, than in countries with smaller such communities. As the NATO StratCom report shows, automated Russian-language content is primarily focused on building bridges between bot networks to capture market share and eyeballs, further strengthening and sealing off the information bubble that Russian minorities inhabit in Estonia and Latvia. Conversely, automated English-language content in countries without a notable Russian minority focuses on polluting the information marketplace, often by injecting tailored narratives that dilute negative reports about the Kremlin, amplify positive narratives about Russia and promote stories critical of the West.

This problem is compounded on the alliance’s digital frontline by Twitter’s inconsistent application of its terms of service to English versus non-English bot content. According to the report, Russian-language bots are less likely to be banned and therefore more likely to build large narrative-dissemination networks. It is also cheaper to cultivate convincing bot networks in Russian, aimed at Russian minorities in places like Estonia and Latvia, than in English with a Western focus.

According to this report, the Kremlin is doing two things well: muddying the Western information well to break down worldviews, and further isolating Russian minority groups in the East. The findings also suggest that the Western think tank/NGO/government response may be misallocating resources in the fight against Russian information operations. More resources should go to producing pro-Western narratives rather than simply fact-checking Russian ones, especially for the demographics most vulnerable to Kremlin messaging, such as the ethnic Russian populations of Eastern Europe. The first step, therefore, is to understand the cultural, historical and environmental differences between these audiences, and then to build a response on that basis.

As this and other research shows, the Kremlin has already mobilized its digital troops in the West, where bots are actively driving new trending hashtags, modifying existing trending hashtags and injecting narratives in a targeted fashion into the info ocean.

This is not the case in the East. There, the Kremlin may be strengthening its position on the frontline, but the Russians may not yet be using their assets in the field to full effect. Several scenarios exist in which they might, especially as a catalyst for hybrid action (a combination of conventional warfare, irregular warfare and cyberwarfare). These scenarios must be explored.

Finally, Twitter, and most likely other social media platforms, has a Russian-language content problem. As the report points out, “the democratizing possibilities of social media appear—at least in the case of Twitter in Russia—to have been greatly undermined.” The Kremlin’s infowar machine has crowded out Western penetration of these markets, thanks in part to Twitter’s inaction.

Twitter can do a few things to help:

  • Produce and publish analytical tools to identify bot networks based on account creation date, activity, location and language (a toy sketch of one such heuristic follows this list).
  • Transparently and equally enforce its terms of service to target all bots, not just English-language ones.
  • Publish a transparency report on its entire user database, shedding light on the proportion of bots versus humans. At the moment, Twitter has a perverse incentive to keep that data private; the remedy is to leverage the voices of the public and government in the name of transparency and democracy.
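To make the first recommendation concrete, below is a minimal sketch, in Python, of how the signals named above (account age, posting rate, location, language) might combine into a crude bot-likelihood score. The thresholds and weights are invented for illustration; this is neither Twitter’s detection logic nor the report’s methodology.

```python
# Toy bot-likelihood heuristic. All thresholds and weights are invented
# for illustration; this is not Twitter's or the report's actual method.
from datetime import datetime

STUDY_END = datetime(2017, 8, 30)  # end of the report's survey window

def bot_score(account: dict) -> float:
    """Combine crude signals into a 0.0 (human-like) to 1.0 (bot-like) score."""
    score = 0.0
    created = datetime.fromisoformat(account["created_at"])
    if (STUDY_END - created).days < 30:    # very new account
        score += 0.3
    if account["tweets_per_day"] > 50:     # inhumanly high posting rate
        score += 0.4
    if not account.get("location"):        # no profile location given
        score += 0.1
    if account.get("language") == "ru":    # the report found Russian-language
        score += 0.2                       # accounts skew heavily automated
    return min(score, 1.0)

# Example: a week-old, hyperactive, location-less Russian-language account.
suspect = {"created_at": "2017-08-23", "tweets_per_day": 120,
           "location": "", "language": "ru"}
print(bot_score(suspect))  # 1.0
```

A real classifier would of course use many more signals (network structure, retweet timing, content similarity) and learned rather than hand-set weights; the point is only to illustrate the kinds of signals such a published tool could expose.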