Much is written about the growth of AI in the enterprise and how, as part of digital transformation, it will enable companies to create value and innovate faster. At the same time, cybersecurity researchers are increasingly looking to AI to enhance security solutions that better protect organizations against attackers and malware. What is often overlooked is that criminals are equally determined to use AI in their efforts to undermine organizations through persistent malware attacks.
The success of most malware directed at organizations depends on an opportunistic model: it is sent out by bots in the hope of infecting as many organizations as possible before executing its payload. In business terms, while relatively cheap, it represents a poor return on investment and is easier for conventional anti-malware solutions to block. On the other hand, malware that is targeted and guided by human controllers at a command and control (C2) server may well result in a bigger payoff if it manages to penetrate privileged accounts, but it is expensive and time consuming for criminal gangs to operate.
Imagine if automated malware attacks were to benefit from embedded algorithms that have learned how to navigate to where they can do the most damage; this would deliver scale and greater profitability to the criminal gangs. Organizations would then face malware that learns how to hide and perform non-suspicious actions while silently exfiltrating critical data, all without human control.
AI-powered malware will change tactics once inside an organization. It could, for example, automatically switch to lateral movement if it finds its path blocked. The malware could also sit undetected, learn from regular data flows what is normal, and emulate this pattern accordingly. It could learn which devices the infected machines communicate with, which ports and protocols they use, and which user accounts access them. All of this could be done without the current need for communication back to C2 servers, further protecting the malware from discovery.
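The "learn what is normal and blend in" idea above can be reduced to very simple statistics. The sketch below, purely illustrative with invented numbers, learns a baseline of hourly outbound traffic volumes and checks whether a given transfer would fall inside the "normal" band; the same arithmetic serves a defender flagging anomalies or malware sizing its exfiltration to avoid them. The two-sigma band and the sample data are assumptions, not measurements.

```python
import statistics

# Hypothetical hourly outbound byte counts observed on one host (illustrative data).
baseline = [1200, 1350, 1100, 1420, 1280, 1330, 1190, 1400]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def blends_in(volume: float, sigma: float = 2.0) -> bool:
    """Return True if a transfer volume falls inside the learned 'normal' band."""
    return abs(volume - mean) <= sigma * stdev

print(blends_in(1300))    # a volume close to the baseline mean
print(blends_in(250_000)) # a bulk exfiltration far outside it
```

A real system would profile many more dimensions (destinations, ports, time of day), but the principle, deviation from a learned baseline, is the same.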
It is access to user accounts that should worry organizations – particularly privileged accounts. Digital transformation has led to an increase in the number of privileged accounts in companies and attackers are targeting those directly. The use of intelligent agents will make it easier for them to discover privileged accounts such as those accessed via a corporate endpoint. At the same time, malware will learn the best times and situations in which to upload stolen data to C2 servers by blending into legitimate high-bandwidth operations such as videoconferencing or legitimate file uploads. This may not be happening yet but all of this is feasible given the technical resources that state-sponsored cyber attackers and cash-rich criminal gangs have access to.
To prove what’s possible, IBM research scientists created a proof-of-concept AI-powered malware called Deep Locker. The malware contained hidden code to generate keys that could unlock malicious payloads if certain conditions were met. It was demonstrated at a Las Vegas technology conference last year, using a genuine webcam application with embedded code to deploy ransomware when the right person looked at the laptop webcam. The code was encrypted to conceal its payload and to prevent reverse engineering by traditional anti-malware applications.
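The key-generation trick described above can be sketched in a few lines. In IBM's demonstration the decryption key was derived from a neural network's output on the target's attributes; the minimal sketch below substitutes a plain SHA-256 hash for the model, so every name and value in it is an illustrative assumption, not IBM's actual code. The point it shows is that without the triggering input there is no key in the binary to find, which is what frustrates reverse engineering.

```python
import hashlib
from typing import Optional

# Illustrative trigger value. In the real concept no such string is stored:
# the key only emerges from a model's output on the intended target,
# which is what makes static analysis of the binary fruitless.
_TRIGGER = "intended-target-identifier"  # hypothetical
_EXPECTED_DIGEST = hashlib.sha256(_TRIGGER.encode()).hexdigest()

def derive_key(observed_attribute: str) -> Optional[bytes]:
    """Key material only materializes when the observed environment matches."""
    digest = hashlib.sha256(observed_attribute.encode())
    if digest.hexdigest() == _EXPECTED_DIGEST:
        return digest.digest()  # 32 bytes, usable as an AES-256 key
    return None  # wrong target: payload stays encrypted

print(derive_key("some-other-user"))  # no key, nothing to unlock
```

On any non-matching input the function yields nothing, so an analyst running the sample in a sandbox never sees the payload decrypted.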
IBM also said in its presentation that current defences are obsolete and new defences are needed. This may not be true. AI is not yet magic. As in the corporate world, much AI-assisted software benefits from the learning capabilities of its algorithms, which automate tasks that humans previously performed. In the criminal ecosystem this includes directing malware towards privileged accounts. Therefore, it makes sense that if Privileged Access Management (PAM) does a good job of detecting human-led attempts to hijack accounts, then it should do the same when confronted with the same techniques orchestrated by algorithms. Already the best PAM solutions are smart enough to monitor M2M communications and DevOps processes that need access to resources on the fly.
But we must not stop there. Future IAM and PAM solutions must be able to detect hijacked accounts or erroneous data flows in real time and shut them down so that even AI cannot do its work. Despite the sophistication that AI will bring to malware, its target will remain the same in many attacks: business-critical data that is accessed by privileged account users, which will include third parties and machines. It is one more way in which Identity, of people, data and machines, is taking centre stage in securing the digital organizations of the future. For more on KuppingerCole's research into Identity and the digital enterprise please see our most recent reports.
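The real-time control described above amounts to evaluating every privileged flow against a per-account policy and failing closed. The sketch below is a minimal illustration of that idea; the account names, thresholds and policy shape are all invented for the example, and a production PAM tool would of course combine many more signals.

```python
# Per-account policy: maximum bytes a single event may move, and the
# hours during which the account is expected to be active (assumptions).
POLICY = {
    "svc-backup": {"max_bytes_per_event": 50_000_000, "allowed_hours": range(1, 5)},
    "admin-jane": {"max_bytes_per_event": 5_000_000, "allowed_hours": range(8, 19)},
}

def evaluate(account: str, hour: int, bytes_moved: int) -> str:
    """Decide, per event, whether a privileged session may continue."""
    rule = POLICY.get(account)
    if rule is None:
        return "terminate"  # unknown privileged account: fail closed
    if hour not in rule["allowed_hours"] or bytes_moved > rule["max_bytes_per_event"]:
        return "terminate"  # off-hours activity or anomalous volume
    return "allow"

print(evaluate("admin-jane", 14, 1_000_000))   # normal working-hours transfer
print(evaluate("admin-jane", 3, 1_000_000))    # same volume, but at 3 a.m.
print(evaluate("svc-backup", 2, 900_000_000))  # right time window, wrong volume
```

Because the decision is made per event, a hijacked account is cut off on the first violating flow rather than after a batch review.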