Recently, deepfakes made a splash in the headlines: Vitali Klitschko, the mayor of Kiev, held one-on-one virtual interviews with several European leaders – or so they thought. In the days following the interviews, it came to light that the Ukrainian official had not actually conducted them. An unknown party had presented a false representation of Klitschko – possibly a real-time deepfake, possibly a re-edited version of existing video footage known as a “shallow fake” – in calls that lasted approximately 15 minutes. One of the interviewees became suspicious of Klitschko’s authenticity and broke off the call early, but others had no suspicions and continued the interview until its conclusion.

Deepfakes pose risks for businesses

More detailed information about the falsified Klitschko is not available, but the incident raises many questions about the implications of shallow fakes and deepfakes in politics and across all industries. It adds to an ongoing discussion: how do we know who we are interacting with digitally, be it our customers, employees, partners, or board members?

Granting access to an impostor poses a severe risk to an organization, whether the impostor is gaining an audience with the mayors of major European cities, as in this case, or impersonating a CEO to authorize a funds transfer in countless other scenarios. Risks like these are clear calls for Zero Trust architectures, described at their simplest as “trust nothing, verify everything”. What options do organizations have to verify the identity of those they interact with digitally?

Remote identity verification is popular in industries like financial services, retail, and healthcare for consumer-facing interactions, and it is gaining traction in many others for use cases such as external partner/organization onboarding, employee onboarding, and authentication bound to a verified identity. These solutions typically onboard the person in question by scanning a government-issued identity document (via OCR, NFC, or other means) and comparing it against government registries and a real-time selfie of the individual, which is used for both biometric measurement and liveness detection. Although this market segment is still maturing, the technologies are well established and continuously improving. But are they robust enough to detect and reject deepfakes?
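To make that flow concrete, here is a minimal sketch of how such an onboarding decision might be composed. The data structure, field names, and threshold are illustrative assumptions, not any particular vendor’s API:

    from dataclasses import dataclass

    MATCH_THRESHOLD = 0.80  # assumed similarity cutoff for a face match

    @dataclass
    class OnboardingChecks:
        registry_valid: bool    # document validated against a government registry
        face_similarity: float  # document photo vs. live selfie, 0.0 to 1.0
        liveness_passed: bool   # outcome of the liveness detection step

    def verify_identity(checks: OnboardingChecks) -> bool:
        """Accept the onboarding only if every check succeeds."""
        return (
            checks.registry_valid
            and checks.face_similarity >= MATCH_THRESHOLD
            and checks.liveness_passed
        )

    # A selfie that matches the document but fails liveness detection
    # (e.g., an injected deepfake stream) is rejected.
    print(verify_identity(OnboardingChecks(True, 0.93, False)))  # False

The point of the composition is that a strong biometric match alone is never sufficient; liveness is a separate, mandatory gate.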

Deepfakes vs. liveness detection

The biometric onboarding and liveness detection portions of remote identity verification solutions are designed to address presentation attacks, but are not yet always equipped to fend off deepfake attacks.

Liveness detection aims to confirm that the correct individual is physically present at the time of the transaction and is not being misrepresented by a photo, video, mask, or synthetic representation (like a deepfake). Usually, the individual is asked to take a live selfie with their mobile device, providing real-time evidence that the correct individual is conducting the transaction. Checks such as audio and facial synchronization, consistent blinking, and biometric inconsistencies help indicate whether a synthetic video or deepfake has been presented.

A deepfake could also be injected rather than presented to the camera – via a compromised API, by bypassing the device camera, or by infecting the device with malware. In such cases, the attacker-injected selfie may not be interpreted as fraudulent. For additional security against such deepfakes, active or passive liveness checks should be performed. In an active check, the individual is asked to follow directions – blinking, turning their head, saying a series of random numbers, and so on – to confirm that a real person is present and can respond to spontaneous instructions, something a deepfake may not be able to do dynamically. Passive versions exist as well, such as flashing a random sequence of colors on the individual’s face as they take a selfie. If a deepfake video were injected into the communication stream, the expected reflections of this color sequence would be missing, indicating an attack.
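The passive color-challenge idea can be illustrated with a short sketch. The color extraction from video frames is simulated here – a real system would measure the hue reflected in the detected face region frame by frame – and the tolerance value is an assumption:

    import random

    COLORS = ["red", "green", "blue", "white"]

    def issue_challenge(length: int = 8) -> list:
        """Random color sequence to flash on the screen during the selfie."""
        return [random.choice(COLORS) for _ in range(length)]

    def reflections_match(challenge: list, observed: list,
                          min_ratio: float = 0.9) -> bool:
        """Pass if enough observed facial reflections line up with the
        challenge; min_ratio is an assumed tolerance for measurement noise."""
        hits = sum(c == o for c, o in zip(challenge, observed))
        return hits / len(challenge) >= min_ratio

    challenge = issue_challenge()
    # A pre-recorded or injected deepfake stream cannot know the random
    # sequence in advance, so its reflections will not correlate with it.
    injected = issue_challenge()
    print(reflections_match(challenge, challenge))  # genuine capture: True
    print(reflections_match(challenge, injected))   # injected video: almost certainly False

Because the challenge is generated at transaction time, even a perfect face swap fails unless the attacker can re-render the reflections in real time.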

Both deepfakes and liveness detection are improving, making it a game of chess to stay ahead of attacks.

Do deepfakes require adoption of identity verification?

The simple answer is yes: as deepfake attacks increase in volume and sophistication, businesses can counteract them with identity verification. In practice, however, the answer depends on several factors:

  • The high cost of producing deepfakes
  • The scenarios deepfakes will be used in
  • The willingness of normal, non-fraudulent users to verify their identity
  • The circumstances under which non-fraudulent users are willing to do so

Currently, believable deepfakes are labor-intensive to create and do not scale well compared to other presentation attacks such as static images, videos, 2D or 3D masks, injection attacks, or replay attacks. Deepfakes will likely play an increasingly threatening role in social engineering and spear-phishing attacks, since the resources required to develop them can be focused on a particular target. But large-scale attempts to onboard or authenticate fraudulent users may not be the most common use case for deepfakes. Additional protection against deepfakes may be warranted for high-profile people such as politicians, celebrities, and C-level members of organizations, but may not be necessary when the general public onboards or authenticates.

It may be prudent to ask whether a political leader would subject themselves to remote identity verification conducted by another sovereign nation. Would it be viewed as a security risk for the country they represent, or as a gesture of goodwill? The same could be said for C-level members being verified by an external organization that they wish to do business with. And if these individuals refuse an identity verification step, will the other party rely on increasingly passive methods, which are arguably more invasive as collecting appropriate consent becomes more difficult?

While certainly not a complete answer to these questions, interoperability and standard credential formats can help ease the reluctance of one party to be verified by the system of another, mainly by allowing the identity verification step to be conducted internally and its result to be shared with the other party in a standardized format. Initiatives like GAIN and Verifiable Credentials are taking steps toward establishing globally interoperable, reusable verified identities.
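As a rough illustration, a credential issued under the W3C Verifiable Credentials data model might look like the following. The top-level field names follow the public specification, but the credential type, issuer, subject, and proof values here are placeholders:

    import json

    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "IdentityVerificationCredential"],
        "issuer": "did:example:issuing-organization",
        "issuanceDate": "2022-07-01T00:00:00Z",
        "credentialSubject": {
            "id": "did:example:verified-person",
            "identityVerified": True,
            "verificationMethod": "document-scan-with-liveness-check",
        },
        # In practice this holds a cryptographic signature over the credential,
        # letting the relying party verify it without re-running the check.
        "proof": {"type": "Ed25519Signature2020", "proofValue": "..."},
    }

    print(json.dumps(credential, indent=2))

The receiving party checks the signature rather than repeating the verification, which is what makes the credential reusable across organizational boundaries.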

What changes do we need?

As the quality and scalability of deepfake attacks increase, we must strengthen our defensive detection measures. While some current identity verification solutions claim to detect deepfake attacks, most are geared towards deterring more scalable attacks like spoofing, fraudulent documents, or masks. Especially notable is synchronous video identification, where the individual joins a video call with a live agent. These solutions rely on manual verification alone, and while agents are trained to verify documents such as passports, they are not trained to detect deepfakes. Since these attacks look more and more believable to the human eye, video identification in particular would benefit from additional algorithmic checks for anomalies. We should expect remote identity verification solutions to invest in better deepfake detection capabilities.

The best weapon against attacks that use AI is a defense that uses AI. Adding algorithmic anomaly detection for deepfake tell-tales – inconsistent or nonexistent blinking, inconsistent backgrounds, inconsistent video quality – will be a meaningful step towards deterring more deepfake attacks. Deepfake techniques have already begun to address these inconsistencies and will undoubtedly continue to do so, making it paramount to develop the capability to detect anomalies that have never been seen (or trained for) before.
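As one example of such a tell-tale check, the sketch below flags a clip in which the subject never blinks, using a per-frame eye aspect ratio (EAR) series as input – the EAR drops sharply when an eye closes. The threshold and expected blink count are illustrative assumptions; production systems combine many signals like this:

    EAR_BLINK_THRESHOLD = 0.21  # assumed: below this, the eye counts as closed
    MIN_EXPECTED_BLINKS = 1     # assumed: minimum blinks per analysis window

    def count_blinks(ear_series: list) -> int:
        """Count closed-then-open transitions in a per-frame EAR series."""
        blinks, closed = 0, False
        for ear in ear_series:
            if ear < EAR_BLINK_THRESHOLD:
                closed = True
            elif closed:  # eye reopened after being closed: one blink
                blinks += 1
                closed = False
        return blinks

    def looks_synthetic(ear_series: list) -> bool:
        """Flag clips with fewer blinks than a live person would produce."""
        return count_blinks(ear_series) < MIN_EXPECTED_BLINKS

    # A genuine clip of a few hundred frames usually contains several blinks;
    # early deepfakes were notorious for containing none.
    natural = [0.30] * 100 + [0.15] * 3 + [0.30] * 100  # one blink
    synthetic = [0.30] * 203                            # no blinks
    print(looks_synthetic(natural))    # False
    print(looks_synthetic(synthetic))  # True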

KuppingerCole continues to research the identity verification space and will publish a series of reports in the latter half of 2022. Keep an eye out for this research to stay up to date on topics like liveness detection.
