Decade of Russian Bots

This blogpost is the product of over a week of full-time research using publicly available sources of information. I’m not associated with the CIA and my sources are limited to Google, news media and leaks. Even with a limited number of sources, there is too much information on the topic to include all of it without making the blogpost impossibly long. I have tried to condense the material as much as I could and to provide as many references as possible, so that you can examine the information I used to draw my conclusions, if you would like to. Many of the sources I list are in Russian, but you should still be able to examine most of them with Google’s page translation. Please report any broken links and I’ll do my best to fix them. Overall, the blogpost describes how Russia’s disinformation and bot infrastructure came to be. Due to the nature of the topic, we may lack crucial information that will only come out in later years; if it does, I’ll correct the article. The blogpost isn’t intended to instill any kind of hatred toward Russians. In fact, if you read through it, you’ll realize that Russian citizens are oppressed and put down by the Kremlin’s propaganda infrastructure in a way that people in the West will never be. If you have any feedback, I’ll be happy to hear it.

Table of Contents:

  1. Before 2010
  2. 2011
  3. 2012
  4. 2013
  5. 2014
  6. 2015
  7. 2016
  8. 2017
  9. 2018
  10. 2019
  11. 2020
  12. Final Words

Before 2010:

Russian leadership has always been wary of the internet, and Vladimir Putin went as far as to say that the internet has been and always will be “a project of the CIA”. By 2008 Vladimir Putin’s administration had largely taken control of public television, with federal news channels promoting messages favorable to the Kremlin, but it mostly left the internet to its own devices. A shift in policy occurred when President Dmitry Medvedev was elected. Medvedev was seen by many as a technologically progressive politician who recognized the importance of the internet and social media. He toured Twitter headquarters in Silicon Valley, got himself an iPhone and signed an order creating the Federal Service for Supervision of Communications, Information Technology and Mass Media (shortened to “Roskomnadzor”). Its creation in 2008 signified a change in how the Russian government viewed the internet, which had been largely unregulated before that. Roskomnadzor was formally tasked with carrying out censorship in mass media and policing the processing of personal information on the internet. It would later become famous for its failed attempts at blocking the Telegram messenger between 2018 and 2020, but at the time of its creation it was not particularly involved in controlling the Russian internet. One important organization that came into existence around the same time was “Nashi” (translated: “Ours”), derisively nicknamed the “nascists” by the Russian opposition. Nashi ran a summer camp for Russian youth that fostered a network of pro-government political activists and received generous grants from the Kremlin for it. Starting in 2008, their summer camp became open to the public and was funded directly by the Russian government. Nashi recognized early that the internet would be the next big battleground for hearts and minds, and they began to deploy their activists online.

Key Takeaways:

  • Russian government takes interest in the internet’s utility
  • Pro-government activists look to increase their online presence


2011:

Bots had often been involved in various apolitical activities, mainly spam and mass account compromise, but 2011 saw their wide employment in domestic political matters. Novaya Gazeta, a Russian opposition news outlet, wrote in 2011 that “Nascists have occupied the top spots in LiveJournal and Yandex” (both popular blogging platforms) and proceeded to explain that nearly half of all top blogs were in that position thanks solely to botting. “There is a huge number of previously registered accounts, all of which are empty. They only have 1 post so that search engines index them. Whenever there is a post that needs to be promoted, a certain number of these robots is activated and they automatically repost that in their own journal,” explains one blogger. In December of 2011, many Russians went to protest the disputed parliamentary elections, and Russian police ended up arresting many of them, including the head of the Russian opposition, Alexei Navalny. His supporters took to Twitter to express their displeasure and propelled several election-related hashtags into popularity. Brian Krebs reported the observations of threat researchers at Trend Micro: “if you currently check [Navalny’s] hashtag on Twitter you’ll see a flood of 5-7 identical tweets from accounts that have been inactive for months and that only had 10-20 tweets before this day. To this point those hacked accounts have already posted 10-20 more tweets in just one hour”. Trend Micro identified several thousand such accounts, and according to Krebs, “Some of the bot messages include completely unrelated hashtags or keywords, seemingly to pollute the news stream for the protester hashtags”. The coordinated spam campaign was accompanied by DDoS attacks on several liberal and opposition newspapers.
The Guardian reports that the DDoS attacks were coupled with phone bombing of the same newspapers: employees of Novaya Gazeta were getting non-stop calls that would all deliver similar messages on repeat, such as “Putin is very good”, “Putin loves you” and “Putin makes your life happy”. One website being DDoSed that year was Kommersant. Kommersant had run an article harshly criticizing the Nashi movement earlier that year, and since the attacks on the paper came after it, they suspected that Nashi were responsible for the denial of service. They would be proven right the next year, when Nashi’s emails were hacked and posted online by the Russian “Anonymous”.
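The hijacked-account pattern Trend Micro described (a long-dormant, low-history account suddenly producing a burst of tweets within an hour) can be expressed as a simple detection heuristic. The sketch below is illustrative only; the input format and thresholds are my own assumptions, not any real vendor’s detection logic:

```python
from datetime import datetime, timedelta

def looks_like_hijacked_bot(tweet_times, now):
    """Flag the pattern Trend Micro observed: an account that was
    inactive for months, had only 10-20 lifetime tweets, and then
    posted 10-20 more within a single hour. `tweet_times` is a list
    of datetimes; all thresholds here are illustrative guesses."""
    recent = [t for t in tweet_times if now - t <= timedelta(hours=1)]
    older = [t for t in tweet_times if now - t > timedelta(hours=1)]
    if not older:
        return False  # brand-new account: a different pattern entirely
    dormant = (now - max(older)) > timedelta(days=90)  # months of silence
    low_history = len(older) <= 20                     # thin posting history
    burst = len(recent) >= 10                          # sudden one-hour spike
    return dormant and low_history and burst
```

On its own, a heuristic like this produces false positives (real people do return from long breaks), which is why the researchers also pointed to the content of the tweets themselves: floods of 5-7 identical messages.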

Key Takeaways:

  • Pro-government activists begin to employ a botnet to silence dissenting opposition voices with DDoS and social media spam


2012:

The year 2012 was important in the evolution of Russian bots because their methods and operations were exposed by hackers, shining a light on the scope of their involvement. In 2007 the Nashi movement saw an ambitious young woman named Kristina Potupchik become its press secretary after she graduated from their ideological training camp, and in 2008 she also became the press secretary for the Russian Ministry of Youth Affairs. Her activities were laid bare in 2012, when the Russian “Anonymous” leaked the emails of the heads of the Ministry of Youth Affairs. In her emails Kristina writes: “We have to create unbearable conditions for the Kommersant. Torture them psychologically and physically. Revenge is absolutely necessary,” and suggests a 5-hour DDoS attack on their website, as well as general harassment. Further emails describe how their Twitter bots work by identifying keywords and then unleashing pre-written responses on the posts identified, and boast a bot population of 16,000 on Twitter. The leaked emails reveal that the organization had great success in its propaganda efforts on social media, especially on YouTube, where it was able to consistently put out videos and bot them into trending. They would also bot dislikes on videos criticizing Vladimir Putin to make sure those didn’t end up trending and to create a false perception of widespread outrage at the opposition. Despite the wide adoption of automation and botting by Russian propaganda operators, most of the work was being done by real people who were employed to run tens or even hundreds of sock-puppet accounts on different social media platforms. Hundreds of people were paid to express pro-government sentiments, and the following year journalists from liberal Russian news media managed to infiltrate an office where such people were employed.
Seeing potential in social media botting, the Russian SVR (Foreign Intelligence Service) put out a government tender for software that would allow for intelligence collection and analysis, as well as the automated spread of information to selected audiences on social media, aimed at influencing public opinion.

Key Takeaways:

  • Hacked emails expose the methods and scale of pro-government botting operations, including a 16,000-strong Twitter bot population
  • Russian foreign intelligence service orders botting software


2013:

As companies became more security conscious, botting became harder for some Russian propaganda outlets, and exponentially growing government demands prompted them to increase the number of employees working on-premises, writing comments and polluting the internet on a full-time basis. A Russian advertisement on social media read: “Internet Operators Wanted! Work in a luxurious office in Olgino. Responsibilities: posting comments, writing thematic posts and blogs, social media”. A reporter for Novaya Gazeta went to investigate, pretending to be an interested applicant. The organization was reportedly founded in July of 2013 and was named the “Internet Research Agency”. The operations manager doing the onboarding for the reporter was an individual closely tied to the summer camp run by Nashi and was himself a very prominent “nascist”. “[When] we need to improve [a] website’s traffic, we can do it with robots, but robots are very mechanical and sometimes systems like Yandex ban them. That is why we have decided to use real people for that. You will write comments with the vectors that we set . . . 100 comments a day will be required,” explained the manager. Earlier that year Forbes had published an article about a prominent Russian businessman, and Russian propaganda operators were put into action. The article resulted in a denial-of-service attack on Forbes’ website and in endless angry comments, many of which the Novaya Gazeta reporter spotted on the office computers of the Internet Research Agency. This year marked a slow expansion of the target audience of the Russian social media propaganda infrastructure. Foreign audiences would end up in its sights increasingly often, likely in preparation for what the Russian government had planned for the following year.

Key Takeaways:

  • Russian government establishes several organizations for the sole purpose of social media operations, such as the Internet Research Agency
  • Propaganda infrastructure expands its operations to reach foreign audiences


2014:

The Russian invasion of Ukraine wouldn’t go unnoticed by Western powers, and the propaganda infrastructure was tasked with sowing confusion and doubt in what would otherwise be a unified Western response. The Internet Research Agency’s proprietary documents were leaked to the press and give an insight into its operations. Its strategy document, as quoted by Buzzfeed, says that “Foreign media are currently actively forming a negative image of the Russian Federation in the eyes of the global community” and that “The main problem is that in the foreign internet community, the ratio of supporters and opponents of Russia is about 20/80 respectively”. Immediately following the invasion of Ukraine, Russian bots were unleashed on foreign media outlets and social media platforms to shift that balance of supporters and opponents. The Guardian’s Ukraine coverage was so plagued by bots and trolls that the paper had to address its own readers’ complaints: “Guardian moderators, who deal with 40,000 comments a day, believe there is an orchestrated campaign. Zealous pro-separatist comments in broken English claiming to be from western countries are very common”. Vasily Gatov, the founder of a Russian communications research and development lab, gives further clarification on the purpose of the bot campaign: “Western media, [that] which specifically [has] to align [its] interests with [its] audience, won’t be able to ignore saturated pro-Russian campaigns and will have to change the tone of [its] Russia coverage to placate [its] angry readers”. In July, when MH17 was shot down by what was later proven to be pro-Russian separatists using a Russian BUK missile system, bots went to work on Twitter and produced 111,486 tweets in just 3 days following the tragedy, according to the Dutch journalists who investigated the matter. The campaign was of a scale never before seen on the internet, but it was just the tip of the iceberg.
Internet Research Agency employees visited the United States to perform reconnaissance on social and political issues. Leaked files from the IRA show folders named “Migration Policy”, “Don’t Shoot”, “Air Strike costs”, etc. According to Special Counsel Robert Mueller’s report, the Internet Research Agency sought to spread division and animosity among Americans based on the issues identified in this reconnaissance. One contentious issue the recon operation identified was the potential removal of Confederate monuments. The IRA’s recon team visited Atlanta in particular to gather information on the topic, in preparation for what they would pull off in 2016.

Key Takeaways:

  • Kremlin attempts to shift the balance of supporters and opponents of Russia among foreign audiences
  • Bots are employed to confuse, divide and distort the creation of uniform foreign responses to Russia’s aggression
  • Internet Research Agency does reconnaissance in the United States in preparation for the presidential election of 2016


2015:

As Kremlin bots continued to pollute and distort news from all over the world, independent journalists achieved significant success in identifying them with social network analysis. Following a year that demonstrated to the West just how numerous and well-funded Russian propaganda systems are, many journalists and researchers sought to investigate the bot network in an attempt to learn more about the patterns that could be used to identify and dismantle it. One such journalist was Lawrence Alexander, a writer for Global Voices. In 2015, the prominent opposition leader Boris Nemtsov was assassinated a mere 100 meters away from the Kremlin. Nemtsov was a prominent critic of Vladimir Putin and had once had a run-in with Nashi operatives who threw chemicals in his face in an attempt to intimidate him for his political position. It is highly likely that he was assassinated by the Kremlin’s associates, because Vladimir Putin’s administration is notorious for making journalists and activists critical of him disappear. Mere hours after his death was announced, bots took to Twitter once again, spamming an accusation that it was Ukraine that had Nemtsov killed. Lawrence Alexander managed to identify a network of 2,900 accounts and made a discovery that could help distinguish them from real users: “Out of the 2,900-strong network, 87% of profiles had no timezone information and 92% no Twitter favorites. But in a randomized sample of 11,282 average Twitter users only 51% had no timezone and tellingly, only 15% had no favorites (both traits of what could be classified as “human” behavior)”. Merging the network with others identified by independent journalists, Lawrence ended up analyzing nearly 20,500 Russian bots on Twitter, a population that had grown since the days of Nashi in 2012, when Nashi had only 6,000 accounts populated with content to mimic real humans, with the remaining 10,000 left blank.
Lawrence’s research showed that the bot population had nearly quadrupled in 3 years. This aggressive and rapid expansion of the Kremlin’s bot army was in preparation for the 2016 presidential election in the United States, which had largely ignored the massive disinformation efforts, a lapse in foresight that would come back to haunt it the next year. The European Union, however, recognized the threat posed by the Kremlin’s disinformation and formed the East StratCom Task Force to combat it, formally tasked with “Reporting on and analyzing disinformation trends, explaining and exposing disinformation narratives, and raising awareness of disinformation coming from Russian State, Russian sources and spread in the Eastern neighbourhood media space.”
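The profile statistics Alexander published lend themselves to a simple likelihood-ratio check. The sketch below plugs his reported rates into a naive-Bayes style score; it naively assumes the two features are independent, and while the percentages come from the quote above, everything else is illustrative:

```python
def bot_likelihood_ratio(no_timezone, no_favorites):
    """Ratio of P(profile features | bot network) to P(profile features |
    average user), using Alexander's reported rates: 87% / 92% of bots
    lacked a timezone / favorites, versus 51% / 15% of a random sample.
    Values well above 1 mean the profile resembles the bot network."""
    p_bot = (0.87 if no_timezone else 0.13) * (0.92 if no_favorites else 0.08)
    p_user = (0.51 if no_timezone else 0.49) * (0.15 if no_favorites else 0.85)
    return p_bot / p_user
```

A profile missing both fields scores roughly 10 times more likely under the bot model, while a profile with both scores far below 1. A two-feature check like this is only a starting filter for network analysis, not proof on its own.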

Key Takeaways:

  • Kremlin’s bot population quadruples in preparation for US presidential elections
  • European Union recognizes the threat and forms a task force to combat Kremlin’s disinformation


2016:

The Kremlin’s social media bot network was expanded to accommodate the propaganda needs of the US presidential election, and its application was seen as a success in Russia. The groundwork that the Internet Research Agency laid in the United States in 2014 and 2015 was put to great use during the presidential election. IRA operatives ran thousands of accounts by hand and tens of thousands of bots, spanning all social media platforms. When Twitter appeared before Congress, it revealed that in addition to over 3,000 accounts belonging to IRA operatives, it had identified over 50,000 bots linked to Russia. Between September and November of 2016, these accounts produced over 2 million tweets regarding the election. CNN reports that “The 50,000 accounts retweeted Wikileaks almost 200,000 times during the ten-week period” and that Russian-linked automated Twitter accounts retweeted Donald Trump almost half a million times in the final weeks before the election. The Kremlin’s bot army had more than doubled since 2015, demonstrating Russia’s interest in influencing the American public. In addition to polluting the internet, IRA operatives organized rallies for and against both political parties. In 2014 they had done reconnaissance in Atlanta, and in 2016 they acted upon the intelligence collected and managed to stage a clash between white supremacists and counter-protesters over the issue of Confederate monuments. It was not an isolated occurrence either, as these efforts spanned all contentious issues in the United States, especially in battleground states. Although the Kremlin sought to help elect Donald Trump, the overall operation aimed at something bigger: sowing division, confusion, distrust and animosity in Western democracies, of which America is the leader. According to the Guardian, Facebook estimated that as many as 126 million Americans had been exposed to Russian-backed material on its platform during the 2016 election campaign.
It was a rude awakening for the American public to the existence of the Kremlin’s bot army, and it prompted Congress to step in and investigate, allowing the above-mentioned numbers to become public.

Key Takeaways:

  • Russian government orchestrates a massive influence campaign in the United States to help elect Donald Trump and divide Americans on controversial issues
  • Twitter informs Congress that it identified more than 50,000 bots, more than double the number from 2015


2017:

Informed by the United States’ hard-won understanding of the Kremlin’s operations, Europe managed to ensure the relative security of its own elections. US warnings from senators like Richard Burr, who said that “it’s safe to say by everybody’s judgment that the Russians are actively involved in the French elections”, according to NBC News, did not fall upon deaf ears. French intelligence recognized the impending threat early and announced, months before the French presidential election, that it believed “Russia will help Le Pen by way of bots that will flood the internet with millions of positive posts about Le Pen — and by publishing her opponents’ confidential emails”. They were proven right by the Digital Forensic Research Lab, which analyzed the audience of Kremlin-sponsored media in France. While the Twitter accounts of established French news media had over 30 times the followers of Sputnik and Russia Today and barely differed from them in their daily number of tweets, RT’s and Sputnik’s audiences were discovered to be suspiciously active. The number of retweets and mentions of RT France was nearly triple that of BBC World over the same period of time, while BBC World had more unique users. A month’s worth of tweets was analyzed by DFRLab to calculate the average number of tweets per person for the audiences of established French news media and of the Kremlin’s outlets. The French regional newspaper Midi Libre returned a rate of 1.8 tweets per user and BBC World a rate of 1.5 tweets per user, while RT and Sputnik showed a whopping 4.8 and 4.7 average tweets per person. These Kremlin-backed outlets, which sit on the very fringe of French media and have nowhere near the audiences of established outlets, have an unusually dedicated and active following, the top 50% of which is suspected to consist of automated bots due to their lack of profile information and their hyperactivity with posts and retweets, with some producing nearly a thousand tweets per day.
Many accounts that follow these outlets also have an unrealistically high percentage of retweets, with many retweeting over 90% of RT’s tweets over a month’s time. One was even spotted by DFRLab promoting the German account of Sputnik, leading them to conclude that the account “is a multilingual amplifier of the Kremlin’s outlets”. DFRLab also identified the Kremlin’s influence on the German elections in 2017, namely a Russian-language botnet that “combines commercial and pornographic material with support for the [Alternative For Germany party] and attacks on Russian anti-corruption campaigner Alexey Navalny”. Despite the Kremlin’s efforts, according to German government officials and political experts, “The sharing of false or misleading headlines and mass posting by automated social media “bots” have had little influence in Germany’s quiet campaign”.
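DFRLab’s two amplification signals, the tweets-per-unique-user rate and the per-account retweet share, are both straightforward to compute. A minimal sketch, assuming tweets arrive as (username, is_retweet) pairs (a data format of my own choosing, not DFRLab’s):

```python
from collections import Counter

def tweets_per_user(tweets):
    """Average tweets per unique account, the rate DFRLab compared
    across outlets (e.g. ~1.5 for BBC World versus ~4.8 for RT)."""
    users = Counter(user for user, _ in tweets)
    return len(tweets) / len(users)

def heavy_retweeters(tweets, threshold=0.9):
    """Accounts whose output is mostly retweets, the pattern DFRLab
    flagged (accounts retweeting over 90% of an outlet's tweets)."""
    totals, retweets = Counter(), Counter()
    for user, is_retweet in tweets:
        totals[user] += 1
        if is_retweet:
            retweets[user] += 1
    return {u for u in totals if retweets[u] / totals[u] > threshold}
```

Neither metric proves automation by itself; DFRLab combined them with follower counts, posting volume and profile details before calling an audience suspicious.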

Key Takeaways:

  • As Russian bots attempt to influence the European elections, members of the EU recognize the threat early and manage to thwart it
  • Kremlin bots are mostly unsuccessful in their 2017 influence campaign


2018:

Having realized the utility of social media and bots, Russian intelligence, police and military all sought new software, while continuing to use bots as a means of breaking up a unified Western response to Russia’s enabling of chemical attacks in Syria and the attempted assassination in Britain. The Guardian reports that the British government “uncovered an increase of up to 4,000% in the spread of propaganda from Russia-based accounts since the [Salisbury] attack, many of which were identifiable as automated bots”. One account in particular “reached 23 million users, before the account was suspended. It focused on claims that the chemical weapons attack on Douma had been falsified, using the hashtag #falseflag”. The Salisbury poisoning was attributed to Russian intelligence, and the Douma chemical attack was enabled by the Russian military, which supposedly oversees and supports the Syrian army that carried it out. No longer are bots used only to suppress domestic opposition or pollute the civil society of Western countries during their elections; they are now also clearly being used to support military and intelligence operations abroad. One organization that was sanctioned by the United States for its support of the FSB (Russian Federal Security Service) was the “Kvant” Scientific Research Institute. In 2018 “Kvant” was hacked by a Russian group named “Digital Revolution” that opposes the Kremlin, the FSB and anyone they accuse of “turning the internet into a prison”. Documents reveal that since 2008 “Kvant” has been under the direct ownership of the FSB, with its director appointed by the agency. One of its main responsibilities is to develop new electronic capabilities for the FSB, namely surveillance and exploitation. The Russian SVR has been pursuing social media “automatization” software since at least 2012, when it put out a government contract for it.
It is safe to assume that this software was shared with the FSB, because no bill or order is needed to do so, just the agreement of the agencies’ heads. Possibly due to the software’s diminishing effectiveness, the FSB was contracting with several private firms to develop new technologies, as well as working on them in-house. Documents about the project “Avenir” describe a self-learning neural network for automatic analysis of the most popular social media platforms such as Facebook, VK, Instagram, etc. According to BBC Russia, its main goal is to detect public unrest; the documents provide opposition leader Alexey Navalny’s last name as an example of a keyword to be monitored. Digital Revolution’s next victim was a private FSB contractor named Sytech. Sytech had worked on software tasked with social media data collection as far back as 2010, though it is unclear if they ever found a buyer for it. The Russian presidential election of 2018 saw widespread use of bots in Vladimir Putin’s reelection campaign, with social media managers finally earning senior roles; many of them would go on to help elect mayors and governors in the following year’s elections.

Key Takeaways:

  • Russian government uses social media to support its military and intelligence operations
  • Russian intelligence seeks new software after an unsuccessful year of botting
  • Botting becomes more popular among domestic electoral campaigns in Russia


2019:

A flourishing market arose in Russia to accommodate the government’s social media demands, and more of its new technology was revealed to the public, as efforts to mislead and divide foreign audiences continued. Several people who came out of the Nashi movement returned to support the electoral campaigns of several governors during the elections of 2019, most notably Kristina Potupchik. Vladimir Putin’s 2018 campaign had earned social media managers a permanent spot on electoral campaigns all over the country. In one election in particular, according to Project Media, the Kremlin delegated a social media manager to the campaign it wanted to support, and the campaign itself hired 3 different firms to do its social media promotion. All 3 firms hired their own bots, and the Internet Research Agency’s bots also joined the fray, confusing the campaign leaders. According to a person familiar with the campaign, they encountered bots supporting their candidate that were not the ones they had paid for and struggled to identify whom they belonged to, in the end concluding that it was the IRA, which they promptly banned due to its “crude methods”. Meanwhile, Potupchik was responsible for promoting several candidates in different regions, and independent journalists identified bots in each and every one of those campaigns. In addition to a flourishing SMM market, the Kremlin’s demands created a steady supply of surveillance and exploitation software from private vendors. One such vendor is “0-Day Technologies”, composed entirely of professors and researchers at Moscow State University’s information security lab. Hackers from Digital Revolution managed to compromise their systems and steal confidential documents regarding their work with the government, shedding light on some of the software offered to the Kremlin. One such piece of software is named “Fronton”. According to the hackers, Fronton is an Internet of Things botnet with a diverse set of features.
Digital Revolution posted a YouTube video showcasing some of its functionality. The video shows a web application that allows its operator to define topics, groups, behavior models and reaction models. The topics menu allows the operator to set the keywords and the scope of a project by selecting the social media platforms and websites with which to interact. Behavior models allow operators to define patterns of bot activity that mimic real users, such as time of activity, number of friends and number of likes, as well as preferred websites and operating system. Reaction models are assigned to bot groups based on the topics defined by the operator, allowing for approval, disapproval and custom responses. The video then shows a topic being defined, namely Cyberpunk 2077, an upcoming game from CD Projekt Red. Once the bot creation process is complete, approving comments can be seen on the social media platform of the operator’s choosing. On the foreign front, the Kremlin’s bot army didn’t cease its activity. According to the East StratCom Task Force, “Between May and July 2019, bots accounted for a staggering 55% of all Russian-language Twitter messages about the NATO presence in the Baltic States and Poland”. In their segment titled “What’s new in the bot-world?”, the task force states that “In the past two years, there has been an increase of Russian “news-bots”. This type of automatic posting is actually used by many legitimate news outlets whose accounts cross-post content from their websites. But in the disinformation ecosystem, “news-bots” are used to spread content from fringe or fake-news websites. The increase of this type of bot is likely connected to the fact that accounts pretending to be news outlets are less likely to be removed from the platform”. Also, according to Ukrainian news media, the Transatlantic Commission on Election Integrity reported a quarter of all interactions regarding the 2019 Ukrainian presidential election to be from Russian bots.
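The control panel shown in Digital Revolution’s video can be summarized as a small configuration schema. The sketch below is purely an illustrative reconstruction from the description above; none of these class or field names come from actual Fronton code:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """Scope of a project: what to search for and where to interact."""
    keywords: list       # e.g. ["Cyberpunk 2077"]
    platforms: list      # social media platforms and websites to target

@dataclass
class BehaviorModel:
    """Activity patterns meant to make a bot resemble a real user."""
    active_hours: tuple  # e.g. (9, 23)
    friend_count: int
    likes_per_day: int
    preferred_sites: list
    operating_system: str

@dataclass
class ReactionModel:
    """How a bot group responds to posts matching its assigned topic."""
    stance: str          # "approve", "disapprove" or "custom"
    custom_responses: list = field(default_factory=list)
```

Framing the leaked interface this way makes the division of labor clear: topics decide where bots go, behavior models decide how convincingly they blend in, and reaction models decide what they say.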

Key Takeaways:

  • A highly saturated domestic private bot market arises in Russia
  • Private vendor developing new botting software is breached, revealing new tech


2020:

This blogpost was written at the end of April 2020, and thanks to CNN’s investigation, there is already enough evidence to suggest that the Kremlin’s 2020 interference campaign is underway. An organization named “Eliminating Barriers for the Liberation of Africa” (EBLA), based out of Ghana and Nigeria, was recently raided by Ghana’s security services based on their Cyber Security Unit’s suspicions of EBLA “receiving funding from an anonymous source in a European country”. The organization’s founder is Seth Wiredu, a man who speaks fluent Russian and has significant ties to Russia. EBLA operated in the same way as the Internet Research Agency, spreading misinformation in the United States while pretending to be Americans. According to CNN’s investigation, Facebook said that although the people behind the campaign had attempted to conceal their purpose and coordination, its investigation had found links to both EBLA and “individuals associated with past activity by the Russian Internet Research Agency”. The Kremlin is attempting to disperse its propaganda machine to make attribution even harder for Western countries. It is a game of cat and mouse, and only time will tell if the West can defend itself from this new campaign.

Key Takeaways:

  • Russia attempts to disperse its propaganda infrastructure to make it harder to detect and dismantle in preparation for the 2020 US presidential elections

Final Words:

In conclusion, Russia’s bot army and social media propaganda infrastructure didn’t arise overnight. It was deliberately expanded every year and was only recognized as a significant threat after the 2016 US presidential election. Western powers missed its growth over the years and must be cautious going forward, so as not to forget about its existence, because it isn’t going anywhere; in fact, it is expanding every year, improving its technology and refining its tactics.