This presentation, first to mention it, has been put together by John Tolbert, my colleague from Seattle, and by me, so it is a joint effort. There is lots of content, I will speak fast, and I hope there is a lot for you to take away.
The focus, as I explained to Christopher, is the recent cyber attacks we have seen: what are the reasons, what are the root causes, what can be done better, where is there room for improvement, and how can Zero Trust contribute to that? We will have more on that later in this event. First of all, when we talk about cybersecurity, everybody who learns this for the first time learns CIA: confidentiality, integrity, and availability. These are the aspects that are usually under attack when cybersecurity is under attack.
We want to focus on IT supply chain risks, and that is why I want to look at the top cybersecurity risks we have seen as we start 2021. There were threats to confidentiality and integrity, the C and I of CIA: we have seen the SolarWinds incident, we have seen the Ticketmaster events, and just quite recently the Oldsmar water treatment plant event, which is also about confidentiality and integrity. And, to mention it briefly as well, there have been quite some threats to availability.
In recent months, we have seen the global Gmail outage in December, the outage of the North American region of AWS in November, and the Microsoft Azure outage in September, most probably related to an Azure AD outage back then, although that has improved dramatically since. If we look at what was going on there: we have seen malware in IT tools, which hints at the SolarWinds incident; we have seen insider threats that come from people like former employees, contractors, and partners; and, in general, too much access.
On the availability side, we have seen distributed denial of service attacks and the usual, regular cloud service provider outages. If you look at that list, there is really nothing new: these are incidents that have been around before, and they are expected to be around in the future as well, so we have to take care of them. That is the main thing I want to focus on here. I want to dive a bit deeper into the SolarWinds incident, just to learn from it. It is not about finger pointing; they have improved very much, and things have changed dramatically in the meantime.
It is just to understand where we could get better from our side. Everything we have here is an overview of published research; there is no secret knowledge from KuppingerCole in it. It is just looking at the news, at blogs, at information that is readily available. First, and this is readily available, the attack timeline as published by SolarWinds themselves in their official blog. I don't want to go into all the details here, because that alone would take ten minutes of my presentation; I want to focus on a few aspects.
First of all, the overall time in which this took place: it started in September 2019, which is quite astonishing. We really see confirmed again the pattern that it usually takes about six months to detect an actual attack and another two months to remove it. When we consider that the deployment happened somewhere around June 2020, that is really something very interesting. The other aspect I want to focus on is the highly professional way in which this attack was executed.
If you look at the beginning: on the 12th of September 2019, test code was injected. So they really ran a proper software development program; they had a testing period, which ended by early November 2019, so they had time to test the platform. Later on, in late February 2020, the actual malware payload was compiled and deployed, and then wreaked havoc in the systems. That is the overall timeline to look at.
Of course the investigation is still going on, and we are still learning, just right now as we are doing this event, what has changed, what needs to change, and what the reasons behind all of that were. We are talking about the cybersecurity supply chain, and there are some published and known supply chain impacts. One step up the supply chain, we can see the impact on the SolarWinds platform itself, but also on solutions by companies and vendors like FireEye, Microsoft, or Cisco Duo.
In recent days we have added the latest information as published by Trustwave and Nozomi, hinting at malware at Mimecast, but also at the Microsoft Message Queue (MSMQ) server, which has also been impacted by this event. And if we go one more step up the supply chain, to the end users, we see quite some heavy names on this list.
We see government agencies around the world, from the US to the UK to Europe and the United Arab Emirates, and companies; up to 18,000 organizations are potential targets, but we know of at least 250-plus, including the names given here. Now to the three stages of the attack, just to give a short introduction and also to use this later on for understanding what needs to be done if you are under attack, or if you suspect you are. We have three phases.
Stage one was the implantation of the malware: installing it and bringing it under control of the command and control center. The C2, the command and control center, started signaling to the initial domain, so the system was mainly brought under control. That was the first stage. The second stage was the C2 communicating, aiming at identifying and already leveraging some assets; directing the reconnaissance of the assets was stage two, and more than one C2 domain was used. At that stage, multiple IoCs had already been made available.
So there were indicators of compromise as provided by threat intelligence at that early stage, but they were not necessarily taken into account. Stage three was the attack in full swing: there were signs of threat actor activities, including data exfiltration and, very important and very noteworthy, the manipulation of AD accounts and SAML tokens; we will talk about that later. And, a very bold move, the unauthorized addition of federated trusts.
What we usually tell our customers, to use trusted IdPs for adding trusted identities to your overall identity fabric, is something that was used here in a malicious way: they added federated trusts to give a group of people unauthorized access to the systems, which is quite astonishing. A first check for unexpected federation trusts is sketched below.
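Purely as an illustration of what such a check could look like, here is a minimal sketch. It assumes you already have a Microsoft Graph access token with permission to read the tenant's domains; getting that token is out of scope here.

```python
# Minimal sketch: list which Azure AD domains are federated, so that
# unexpected federation trusts can be reviewed by a human.
# Assumes an OAuth2 access token with Domain.Read.All has been obtained.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/domains"

def list_federated_domains(access_token: str) -> list[str]:
    resp = requests.get(
        GRAPH_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    domains = resp.json().get("value", [])
    # "Federated" means authentication is delegated to an external IdP,
    # exactly the kind of trust the attackers added.
    return [d["id"] for d in domains if d.get("authenticationType") == "Federated"]

if __name__ == "__main__":
    for name in list_federated_domains("<token>"):
        print("Federated domain, verify it is expected:", name)
```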
A few words about the SolarWinds posture. I won't go into all the details, but we had everything that proper cyber hygiene usually should avoid: weak passwords, and attacks based on password spraying, using existing and leaked passwords to more or less apply a massive attack against the systems. More interesting, there had been internal accounts for sale on the dark web, and customers were advised to exclude SolarWinds files from scans because there were false positives. Interesting also to see that there was no real strong focus on security at that time, so no CISO in place. And just quite recently there have been some findings around Microsoft Message Queue and SQL Express with unauthenticated messaging.
Although these TTPs are not necessarily associated with the actual attack, have not been used by malicious actors yet, and have been removed in the meantime, there was quite an attack vector available. So that was the part about where things could have been better on SolarWinds' side. But everybody is talking about how professionally this attack was executed, and we want to look at a few of these aspects.
First of all, the concealment techniques that were applied when this attack was executed, just to go through them quickly. The attack used encrypted tunnels for data exfiltration, so you could not identify the actual data streams by their content just by scanning them, because they were encrypted to avoid suspicious communication. The IP addresses for the C2 components were in the same country as the victims,
which decreased suspicion by itself. Very interesting from a technological point of view, they were using steganography, hiding data in nontraditional file types like video and audio by embedding the data in there. The malware was also sophisticated enough to abort its execution if it detected that it was under supervision; there was sandbox detection, so you could not watch it run within a Petri dish approach. The basic steganography idea is illustrated in the sketch below.
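Just to make the steganography principle tangible, here is a toy least-significant-bit example in Python. Real tradecraft is far more sophisticated; this only shows how payload bits can disappear into carrier data.

```python
# Toy illustration of the steganography idea: hide one message bit in the
# least significant bit of each carrier byte (think audio samples or pixels).

def embed(carrier: bytearray, secret: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

if __name__ == "__main__":
    cover = bytearray(range(256))          # stand-in for media data
    stego = embed(cover, b"exfil")         # visually almost identical to cover
    print(extract(stego, 5))               # b'exfil'
```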
They also did such simple things as changing the communication delay, to make sure the communication with the C2 does not stand out from the list of all the other communication: it behaved like everything else by slowing down and randomizing its traffic, which is quite interesting to see. And they were hiding by using spoofed SAML tokens, SAML tokens issued without a corresponding local logon event; a sketch of how to spot that mismatch follows below.
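As an illustration of that last check, here is a hedged sketch that correlates token issuances with logons. The event formats are invented for the example; you would map them onto your real ADFS and domain controller logs.

```python
# Hedged sketch: flag SAML token issuances with no interactive logon for the
# same user within a preceding time window. Log shapes here are made up.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)

token_events = [  # (user, issue time), e.g. from federation service logs
    ("alice", datetime(2021, 2, 1, 9, 5)),
    ("svc-backup", datetime(2021, 2, 1, 3, 12)),
]
logon_events = [  # (user, logon time), e.g. from DC security logs
    ("alice", datetime(2021, 2, 1, 9, 1)),
]

def suspicious_issuances(tokens, logons, window=WINDOW):
    flagged = []
    for user, issued in tokens:
        ok = any(u == user and timedelta(0) <= issued - t <= window
                 for u, t in logons)
        if not ok:
            flagged.append((user, issued))  # token without a matching logon
    return flagged

print(suspicious_issuances(token_events, logon_events))
# -> [('svc-backup', datetime.datetime(2021, 2, 1, 3, 12))]
```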
Another aspect to look at is persistence: making sure the software keeps running even across a reboot of the affected system, for example. That was achieved by installing as a native Windows task, so it could be loaded at every boot, which is convenient for software that should run every time, even when it is malware. It was monitoring its own processes, so there was only one instance at a time, because one process showing up many times is usually a sign of an infection; this also made sure the proper infected version was running. So: version monitoring and control of the number of instances running. For persistence, additional accounts were added to the victims' Active Directory, giving logins to the threat actors, and that was additionally enhanced by adding tokens and certificates to the target services, so the threat actors could reconnect to the systems. And we have already mentioned the creation of ADFS trusts to outside domains. That is the way this was actually executed. A simple first pass at hunting such task-based persistence is sketched below.
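As a hedged illustration of hunting for this kind of scheduled-task persistence, here is a minimal Windows-only sketch built around the standard schtasks utility. The column names assume English-language output, and the path allowlist is just an example to tune.

```python
# Hedged sketch (Windows only): list scheduled tasks whose executable lives
# outside the usual system paths, as a cheap first pass when hunting for the
# kind of boot-time persistence described above.
import csv
import io
import subprocess

EXPECTED_PREFIXES = (r"c:\windows", r"c:\program files")

def unusual_tasks() -> list[tuple[str, str]]:
    # Query all scheduled tasks in verbose CSV form (built-in Windows tool).
    raw = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = []
    for row in csv.DictReader(io.StringIO(raw)):
        if row.get("TaskName") == "TaskName":
            continue  # schtasks repeats the header line between sections
        cmd = (row.get("Task To Run") or "").strip().lower()
        if cmd and not cmd.startswith(EXPECTED_PREFIXES):
            findings.append((row.get("TaskName", "?"), cmd))
    return findings

if __name__ == "__main__":
    for name, cmd in unusual_tasks():
        print(f"review: {name} -> {cmd}")
```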
If organizations suspect that they might be a victim of this attack, there are quite some detection techniques around, and I want to mention them quickly; the links are also in the PDF version of this presentation, if you download it. First of all, there is the so-called Sparrow script, a PowerShell script that has been open-sourced by the US CISA; it can be downloaded and leveraged to identify whether or not you have been affected by this attack. Another aspect to look at is what we call geo-velocity, the impossible travel in login events.
If authentication occurs from different source IPs around the world, where it is simply impossible to travel that fast, that is something to look at; a minimal version of this check is sketched below. And if you take a deeper look at your SAML token attributes, there are changes in the systems, as we have mentioned before: long validity durations, missing authentication levels, and again SAML assertions without a local logon event. These are typical detection hints. Indicators of compromise, and many others, have been provided in the meantime by threat intelligence providers, so use these services.
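To make the geo-velocity idea concrete, here is a minimal sketch. In practice the coordinates would come from IP geolocation, and the speed threshold is an assumption you would tune.

```python
# Minimal geo-velocity check: flag consecutive logins for one account whose
# implied travel speed is physically impossible.
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_KMH = 1000  # faster than airline travel, hence suspicious

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points on Earth, in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(logins):
    """logins: [(timestamp, lat, lon)] for one account, sorted by time."""
    alerts = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(logins, logins[1:]):
        hours = (t2 - t1).total_seconds() / 3600
        km = haversine_km(la1, lo1, la2, lo2)
        if hours > 0 and km / hours > MAX_KMH:
            alerts.append((t1, t2, round(km)))
    return alerts

logins = [
    (datetime(2021, 2, 1, 9, 0), 47.6, -122.3),   # Seattle
    (datetime(2021, 2, 1, 10, 0), 52.5, 13.4),    # Berlin, one hour later
]
print(impossible_travel(logins))  # flags the Seattle-to-Berlin hop
```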
More are being discovered over time; as I said, we are still in the phase of understanding more and more about this event. That brings us to the recommendations, going back to the stages we mentioned before. If you identify yourself as being in stage one or stage two, there are four key recommendations. First, rebuild the whole environment; there is probably no way around that. Second, create new accounts for SolarWinds usage, because all the others are of course compromised. Third, use MFA; really nothing new, but use MFA for SolarWinds user authentication. And, something we preach every day: use privileged access management for SolarWinds users and services. In stage three, things have gone worse. So rebuild the SolarWinds environment you are using, and execute a full IAM audit exercise.
That covers AD, Azure AD, ADFS, everything around that; really make sure you look at the full IAM, including a full access reconciliation. There might be other response actions required as needed, which goes far beyond what a 20-minute keynote can cover, but there is lots of work to do, and it should be done, and done diligently. That is it for the SolarWinds incident; there is lots to take away from it to improve your own cybersecurity as well, and when we talk about Zero Trust, we will see where that would have helped too.
We will see that soon. One slide about the Ticketmaster and Oldsmar incidents, just to see what went wrong there and how we can prevent it; that is the main aim we have in mind here. There were different actors and different motives, so just a quick look at each. First of all, the Ticketmaster incident, where a massive data breach occurred, executed by a former employee: confidential information was copied, and copied for a competitor.
And what is the reason behind that? Too much access. A former employee means the user should have been deprovisioned: the access should have been removed, the identity should have been removed, and with it every way of using this identity and its access to get at the confidential information. So this is really about user lifecycle management and the least privilege principle, not just the provisioning of access. A toy version of the obvious cross-check is sketched below.
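The underlying check is almost trivial, which makes the failure instructive. Here is a toy sketch with mock data standing in for the HR system and the directory.

```python
# Illustrative sketch of the lifecycle gap behind the Ticketmaster case:
# cross-check HR leavers against accounts that are still active.
# Both sets are mock data; in practice they come from the HR system
# and the directory respectively.
hr_leavers = {"jsmith", "mmueller"}            # employment has ended
active_accounts = {"jsmith", "adoe", "kpatel"} # can still log in

orphaned = hr_leavers & active_accounts        # should have been deprovisioned
for account in sorted(orphaned):
    print(f"deprovision immediately: {account}")
```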
The Oldsmar incident, quite recently, was directed at a water treatment facility in the States, where an unknown attacker gained access to the actual configuration of the water treatment facility, which is really frightening. The way that happened was through a dormant, still installed, and weakly protected TeamViewer account.
When we look into the root cause, or into what could have been done to prevent that, it is mainly about removing unneeded software and disallowing inappropriate remote access. This TeamViewer account was maybe there for some good reason before, but it was not removed, and it was not properly protected, for example by MFA, which would of course have discouraged password guessing. A simple first pass at spotting such forgotten remote access is sketched below.
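As a hedged first pass, you could probe your own hosts for listening remote-access ports. The port list covers common defaults, and the address below is a placeholder; only scan systems you own.

```python
# Hedged sketch: probe hosts for listening remote-access ports as a first
# pass at finding forgotten tools like the dormant TeamViewer at Oldsmar.
# Port numbers are common defaults (RDP 3389, VNC 5900, TeamViewer 5938).
import socket

REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC", 5938: "TeamViewer"}

def audit(host: str, timeout: float = 1.0) -> list[str]:
    findings = []
    for port, name in REMOTE_ACCESS_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                findings.append(f"{host}:{port} ({name}) is reachable")
    return findings

if __name__ == "__main__":
    for line in audit("192.0.2.10"):  # example address, scan only your own hosts
        print(line)
```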
In the end, we talk about Zero Trust today, so we talk about Zero Trust architectures. And just to mention it very quickly: a Zero Trust architecture would have prevented many of the aspects we have just seen. We want to protect users, we want to protect devices, and we want to protect communication between users, devices, and resources. We will hear much more about that later in this event today, and I think many of the speakers will drill deeper into the aspects where Zero Trust really could have prevented the events I have presented from happening. A final slide from me: how can we reduce the supply chain risk?
How can we improve CIA in the supply chain? Just these as takeaways, going back to my first slide: how can we mitigate these risks to confidentiality, integrity, and availability? Seven takeaways. Implement a Zero Trust architecture; that is why we are here today. Use right-sized access, especially when it comes to remote access and deprovisioning users. Look at your vendors and apply rigorous scrutiny.
And when you look at availability: distribute and orchestrate your services across edge services and multi-cloud infrastructure as a service. If there are software-as-a-service providers around that offer multi-cloud hosting, choose them. And if there is a chance that you look at Identity as a Service, make sure that it is multi-cloud hosted as well; this is on the horizon, this is a new development, and it would also have helped with the availability issues we have seen. That is it from my side.
And if we have the time, I will be happy to answer some questions.