Deepfakes and Disinformation in Cyberspace

Cyberspace greatly enhances an actor's ability to pursue geopolitical and economic gains without resorting to the use of force. Both state and non-state actors have taken advantage of its vast interconnectedness to engage in a wide range of activities, including malicious behavior.

Cybercriminals and state-sponsored adversaries are always looking for new ways to exploit vulnerabilities and for new tools to target individuals and organizations. In recent years, for example, there has been an increase in the use of deepfake technology to power advanced disinformation campaigns.

The manipulation of visual media is enabled by the availability of sophisticated image and video editing applications and automated manipulation algorithms that produce content that is difficult to distinguish from real footage. While deepfake technology can be used for entertainment or artistic purposes, some actors could use it maliciously, for example to spread propaganda or disinformation.

Advanced disinformation campaigns are one of the most sophisticated and persistent cybersecurity threats. Disinformation is defined as the dissemination of false, erroneous, or misleading information designed to deliberately cause public harm. The integration of deepfake technology into these campaigns allows adversaries to target the very fabric of society.

Future trends and challenges

(Geo)political objectives are the primary driving force behind disinformation campaigns that utilize deepfakes, which are frequently carried out by state actors and other politically motivated groups. While governments and intelligence agencies have historically used deception and media manipulation, the ability to create realistic fake images, videos, or audio is now accessible to almost any actor.

Deepfakes require a large dataset of images, videos, or voice recordings of the targeted person. The data is fed into a program that learns the target's key features and uses that learned information to modify an existing photo or video. Essentially, deepfake technology can create a synthetic video or image that realistically represents anyone in the world.
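To make the idea concrete, the sketch below shows the classic face-swap training setup in rough form: a shared encoder learns facial features from both identities, while each identity gets its own decoder, so encoding a frame of person A and decoding it with person B's decoder produces the swap. This is a minimal, hypothetical sketch assuming PyTorch and pre-cropped, aligned 64x64 face images; real deepfake tools add face detection, alignment, blending, and far larger models.

```python
# Minimal sketch of the classic face-swap ("deepfake") training idea.
# Assumes PyTorch and batches of pre-cropped 64x64 face images (hypothetical data).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def training_step(faces_a, faces_b):
    """faces_a / faces_b: tensors of shape (N, 3, 64, 64) for persons A and B."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, "swapping" person A's face into person B's likeness is simply:
#   fake_b = decoder_b(encoder(faces_a))
```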

Deepfakes can be used to create fabricated footage of politicians, candidates, or other public figures engaging in controversial or damaging behavior. This can lead to the spread of false information, conspiracy theories, or sensationalized stories that undermine trust, sow division, and erode accountability.

In addition, advanced disinformation campaigns can use deepfakes to rewrite history or manipulate cultural narratives. For example, tampering with historical footage or altering famous speeches can distort the public's understanding of significant events, leading to confusion, denial, or the promotion of alternative narratives.

Advanced disinformation campaigns can also use deepfakes to create sophisticated financial scams or fraud schemes. Fraudsters and cybercriminals could use voice manipulation techniques to impersonate someone in a phone call, tricking individuals or organizations into providing sensitive information or authorizing financial transactions.

How to move forward?

Detecting and countering deepfakes, along with promoting media literacy and critical thinking, is essential to mitigating the impact of advanced disinformation tactics. A range of countermeasures has been developed to help identify manipulated content and limit the damage it causes. Here are some common approaches:

  • Advanced detection technology: Implementing sophisticated detection algorithms and AI-driven tools can help identify altered content (see the sketch after this list).
  • Source verification: Encouraging users to verify the sources of information they share and promoting credible news outlets can help reduce the spread of false information.
  • Digital watermarking: Embedding digital watermarks in original content can enhance traceability and accountability.
  • Public awareness and education: Promoting media literacy and critical thinking skills among the public can empower individuals to question the authenticity of information they encounter.
  • Collaboration with social media platforms: Establishing trust between tech companies and the public sector is essential to combat deepfakes and disinformation campaigns.
  • Government regulations: Creating new regulations and legal frameworks can act as a deterrent and provide consequences for malicious actors.
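As a rough illustration of the first item above, the hypothetical sketch below fine-tunes an off-the-shelf image classifier (ResNet-18 via torchvision) to label face crops as real or manipulated. The folder layout, class names, and hyperparameters are placeholders for illustration only; production-grade detectors combine many more signals, such as temporal inconsistencies across video frames and frequency-domain artifacts.

```python
# Hedged sketch of a simple frame-level deepfake detector: fine-tune a pretrained
# ResNet-18 as a binary classifier. Assumes PyTorch/torchvision and a hypothetical
# folder layout like faces/real/*.jpg and faces/fake/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("faces", transform=preprocess)  # illustrative path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the final layer with a 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        logits = model(images)          # shape: (batch, 2)
        loss = loss_fn(logits, labels)  # labels derived from the folder names
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Frame-level classification of this kind is only one building block; in practice it would be paired with face detection, aggregation of scores across a whole video, and regular retraining as generation techniques evolve.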

As rogue nations continue to foster an environment for cybercriminals to thrive, individuals and organizations need to be prepared and build a strong cybersecurity foundation. It's important to note that the field of deepfakes is evolving rapidly, and countermeasures need to continually adapt to new techniques and advancements. Therefore, a multi-faceted approach that combines technical solutions, user education, and policy development is necessary to mitigate the risks associated with deepfakes.

Deepfakes and disinformation campaigns will be a focus of KuppingerCole’s cyberevolution event in Frankfurt from 14-16 November, which will look at tackling future cybersecurity threats. Presentations will cover a range of topics, including how AI enables both cyber attackers and defenders, securing the autonomous world and the cloud, advancements in quantum computing, detecting deepfakes, securing IoT devices, securing business communication channels and supply chains, protecting digital identities, and the security challenges of Web3 and the metaverse.