The Rise of AI-Enabled Disinformation

What’s this about: Cyber attacks, information operations, deepfakes, political and social subversion, financial influence, and the exploitation of social tensions have become the new ways to target nations. With the rise of artificial intelligence (AI), these threats grow even more potent. AI-enabled disinformation will become one of the top tools available to any individual or nation with the right capabilities. While AI can be weaponized and used to attack, it also plays a key role in combating such tactics. 

Before we move on, it is important to define both “misinformation” and “disinformation,” two terms that are often confused with one another. 

  1. Misinformation: “False information that is spread, regardless of intent to mislead.”

  2. Disinformation: “Deliberately misleading or biased information; manipulated narrative or facts; propaganda.” 

Misinformation is the more general term for false information, while disinformation implies deliberate intent to mislead, often with tools like deepfakes. AI can help create disinformation when bad actors use the technology to generate realistic photographs, control fake social media profiles, and target specific individuals. 

Foreign actors have deployed disinformation campaigns to manipulate public opinion, degrade trust in institutions, damage political leaders, and deepen the rifts between groups within society in various countries. All of this can be used to influence citizens at the voting booth, and it will only become more effective as time goes on. 

Most people don’t realize the impact disinformation has had globally. According to a 2019 report by researchers at Oxford University, there have been organized disinformation campaigns in 70 countries. Some notable examples include:

  • Russia: During the 2020 U.S. election, Russia embarked on a disinformation campaign to undermine the election process. 

  • China: In August 2019, Facebook, Twitter, and YouTube suspended Beijing-linked accounts that were spreading disinformation about the Hong Kong protests. 

  • Vietnam: Citizens were enlisted to post pro-government messages on personal Facebook pages. 

  • Guatemala: The Guatemalan government silenced dissenting opinions by hacking and stealing social media accounts.

  • Ethiopia: The ruling party hired people to influence social media conversations. 

Image: Ways countries use computational propaganda 

Credit: Computational Propaganda Research Project, University of Oxford

Much of this can be attributed to the growth of the digital economy in recent years, which took place just as new technologies, such as the Internet of Things (IoT), robotics, augmented and virtual reality (AR/VR), 5G, mobile capabilities, social media, and AI, became increasingly integrated into governments and the private sector. Another driving factor is the rise of open-source tools and content, especially those used as a base for creating deepfakes. 

Disinformation campaigns, in one form or another, have existed for all of recorded history. However, these new technologies will advance them to a degree we have never seen.

AI-Enabled Campaigns

State and non-state adversaries could gain a technological edge as breakthroughs in AI continue to occur across the globe. A wide variety of tools is available to almost anyone, and these can be used to construct complex and effective campaigns. 

Here are some of the main threats posed by AI-enabled campaigns: 

  • User profiling and micro-targeting: Advances in AI let bad actors identify the unique characteristics of an individual’s beliefs, needs, and vulnerabilities, which are then used to develop and deliver highly personalized content. 

  • Deepfakes: AI can produce digitally manipulated audio or visual material, referred to as deepfakes. These are highly realistic and constantly improving, to the point where they will become almost indistinguishable from reality. 

  • Fully AI-controlled systems: As new tools emerge that better understand human language, context, and reasoning, AI-enabled bots will eventually become highly capable of generating content, persuading individuals, and targeting victims without human oversight. 

  • Natural Language Generation (NLG): NLG enables “news” to be manufactured at scale through machine-generated texts and articles. Before making it publicly available in 2019, the authors of OpenAI’s GPT-2 language model initially deemed it “too dangerous to be released” (a sketch of how accessible this capability has become follows this list). 
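
To make the NLG threat concrete, here is a minimal sketch of machine text generation using the openly released GPT-2 model through the Hugging Face transformers library. The prompt and generation settings are illustrative assumptions, not taken from any actual campaign; the point is simply how little code is required.

```python
# Minimal text-generation sketch with the openly released GPT-2 model,
# via the Hugging Face transformers library (pip install transformers).
from transformers import pipeline

# Load GPT-2 as a ready-made text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# A headline-style prompt (illustrative only); the model continues the text.
prompt = "BREAKING: Government officials confirmed today that"
outputs = generator(
    prompt,
    max_new_tokens=40,        # length of each continuation
    num_return_sequences=3,   # generate several plausible variants
    do_sample=True,           # sample so each variant reads differently
)

for i, out in enumerate(outputs, 1):
    print(f"--- Variant {i} ---\n{out['generated_text']}\n")
```

Each run yields different, fluent-sounding continuations, which is exactly what makes generating “news” at scale so cheap.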

The Risk of AI-Enabled Disinformation

The risk of AI-enabled disinformation continues to increase dramatically as these technologies evolve, and a successful AI-enabled campaign can do serious damage across society. 

There are a few main areas where such a campaign can have a drastic impact, such as: 

  • Elections: The previous two U.S. presidential elections witnessed many forms of AI-enabled disinformation, and these tactics will only become more effective in future elections. 

  • Financial markets: AI-enabled disinformation can subject financial markets to short-term manipulation. 

  • Foreign affairs: On digital platforms, disinformation can spread quickly and cause tensions between nations’ leaders. 

  • Social movements: The dissemination of false information can deepen social tensions as it is used against both supporters and opponents of a cause.

  • Fake news: Fake news is often the vehicle for AI-enabled disinformation, a very real phenomenon that now reaches every corner of the globe.

AI to Counter Misinformation

AI cuts both ways. While it can be deployed as a disinformation tool, it is also effective at combating misinformation in general. Automated fact-checking was first proposed over 10 years ago, but recent elections have spurred an increase in both interest and funding. 

According to Duke Reporters’ Lab, there are over 300 active fact-checking projects in over 50 countries. Fact-checking has traditionally relied on manual human review, but as the non-stop creation of content makes that increasingly impractical, the focus is turning to automation. 

In 2016, the London-based fact-checking charity Full Fact began developing automated fact-checking (AFC) tools with a €50,000 grant from Google. Other organizations, such as the Argentinian nonprofit Chequeado and Duke Reporters’ Lab, rely on similar tools for scanning media transcripts. AFC has been led mostly by independent, nonprofit fact-checking organizations. 
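
One common building block in AFC systems is claim matching: comparing a new statement against a database of claims that human fact-checkers have already rated. The sketch below illustrates the idea with the open-source sentence-transformers library; the model name, claims, and routing logic are illustrative assumptions, not a reconstruction of Full Fact’s or Chequeado’s actual pipelines.

```python
# Claim-matching sketch: route a new statement to an existing fact-check
# when it closely matches an already-checked claim.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A toy database of claims that human fact-checkers have already rated.
checked_claims = [
    "The new vaccine alters human DNA.",
    "Unemployment fell to a 50-year low last quarter.",
    "The mayor was arrested for fraud in 2019.",
]
checked_embeddings = model.encode(checked_claims, convert_to_tensor=True)

# An incoming statement pulled from a media transcript or social post.
statement = "Scientists say the vaccine changes your DNA."
statement_embedding = model.encode(statement, convert_to_tensor=True)

# Cosine similarity against the database; a high score means the statement
# can be routed to the existing fact-check instead of a human reviewer.
scores = util.cos_sim(statement_embedding, checked_embeddings)[0]
best = scores.argmax().item()
print(f"Closest checked claim: {checked_claims[best]} "
      f"(score={scores[best].item():.2f})")
```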

Image: The increase in fact-checking projects globally between 2016-2020

Credit: Duke Reporters’ Lab

Detection by Algorithms

AI has demonstrated an effective ability to detect and remove illegal or otherwise undesirable content online. It can also screen for and identify fake bot accounts, a practice called bot-spotting or bot-labeling. Companies like Google, Twitter, and Facebook all use machine-learning algorithms to identify and remove these types of accounts.
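
At its simplest, bot-spotting can be framed as supervised classification over account-level features. The toy sketch below uses scikit-learn with invented features and data purely for illustration; the production systems at these companies rely on far richer, proprietary signals.

```python
# Toy bot-spotting sketch: classify accounts as bot or human from simple
# per-account features. Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per account (invented for illustration):
# [posts_per_day, followers/following ratio, account_age_days, pct_reshares]
X = np.array([
    [120.0, 0.01,   14, 0.98],   # bot-like: new, hyperactive, mostly reshares
    [  3.5, 1.20, 2100, 0.20],   # human-like: old account, original posts
    [ 95.0, 0.05,   30, 0.95],
    [  1.2, 0.80, 3650, 0.10],
    [200.0, 0.02,    7, 0.99],
    [  5.0, 2.50, 1500, 0.30],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = bot, 0 = human

# Fit a simple ensemble classifier on the labeled accounts.
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# Score a new, unseen account.
new_account = [[150.0, 0.03, 10, 0.97]]
print("P(bot) =", clf.predict_proba(new_account)[0][1])
```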

In the specific case of Facebook, the company has used AI to detect and remove 99.5% of terrorist-related content, 98.5% of fake accounts, and 86% of graphic violence-related content. 

AI still has a long way to go before it can fully automate fact-checking, and social media platforms still rely on human review in many cases. In 2018, Facebook relied on 7,500 human moderators to review content. 

In 2020, a team of researchers from Microsoft and Arizona State University developed new techniques for detecting fake news. According to the team, traditional approaches “rely on large amounts of labeled instances to train supervised models. Such large labeled training data is difficult to obtain in the early phase of fake news detection.”

However, the newly developed method requires only “a small amount of manually-annotated clean data,” which can then be used to automatically label a larger set of data based on social media users’ comments on a news article. 
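
The general pattern, often called weak supervision, can be sketched as follows. The comment-based heuristic here is invented for illustration and is far cruder than the published technique; only the overall shape, a small clean set combined with automatically labeled data, reflects the description above.

```python
# Weak-supervision sketch: combine a small hand-labeled "clean" set with
# noisy labels derived automatically from user comments.
DEBUNK_WORDS = {"fake", "false", "hoax", "debunked", "misleading"}

def weak_label(comments):
    """Weakly label an article as fake (1) if most comments use debunking language."""
    flagged = sum(any(w in c.lower() for w in DEBUNK_WORDS) for c in comments)
    return 1 if flagged / max(len(comments), 1) > 0.5 else 0

# Small, manually annotated clean data: (article_text, label).
clean_data = [
    ("Senate passes infrastructure bill after lengthy debate.", 0),
    ("Miracle cure discovered, doctors hate this one trick!", 1),
]

# Larger unlabeled pool, auto-labeled from its social media comments.
unlabeled_pool = [
    ("Celebrity secretly replaced by clone, insiders say.",
     ["this is fake", "total hoax", "lol debunked already"]),
    ("Local council approves new bike lanes downtown.",
     ["great news", "about time"]),
]
weakly_labeled = [(text, weak_label(comments)) for text, comments in unlabeled_pool]

# The combined set can then train an ordinary text classifier.
training_set = clean_data + weakly_labeled
for text, label in training_set:
    print(label, "-", text[:50])
```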

Along with these new methods, there are various existing data sets available to help train AI. Frameworks such as these will be crucial as misinformation continues to expand and reach more people. 

Many of these issues have been around for a while. For example, bad actors rapidly spread false information about publicly traded companies during the early days of the internet. However, today’s tools are far more sophisticated, and social media platforms have an unprecedented reach across the globe. AI-enabled disinformation is one of the greatest threats we face today, as it further worsens a digital environment that is already full of misinformation. And whether or not there is an intent to spread false information, the outcome is often the same. 

To learn more about artificial intelligence and its global impact, make sure to follow our blog, Twitter, and LinkedIn accounts.  You can also sign up to receive our monthly AI newsletter below.

Giancarlo Mori