Blog posts by Alexei Balaganski

Free Tools That Can Save Millions? We Need More of These

When IT visionaries give presentations about the Digital Transformation, they usually talk about large enterprises with teams of experts working on exciting stuff like heterogeneous multi-cloud application architectures with blockchain-based identity assurance and real-time behavior analytics powered by deep learning (and many other marketing buzzwords). Of course, these companies can also afford to invest substantial money in building in-depth security infrastructures to protect their sensitive data.

Unfortunately, for every such company there are probably thousands of smaller ones that have neither the budgets nor the expertise of their larger counterparts. Not only can these companies not afford “enterprise-grade” security products, they are often not even aware that such products exist or, for that matter, what problems they are facing without them. And yet, from a compliance perspective, these companies are just as responsible for protecting their customers’ personal information (or other kinds of regulated digital data) as the big ones, and they face the same harsh penalties for GDPR violations.

One area where this is especially evident is database security. Databases are still the most widespread technology for storing business information across companies of all sizes. Modern enterprise relational databases are extremely sophisticated and complex products, requiring trained specialists for their setup and daily maintenance. The number of security risks a business-critical database is exposed to is surprisingly large, spanning everything from the sensitive data stored in it all the way down through the application stack, storage, network and hardware. This is especially true for popular database vendors like Oracle, whose products can be found in every market vertical.

Of course, Oracle itself readily provides a full range of security solutions for its databases, but needless to say, not every customer can afford to spend that much, not to mention having the necessary expertise to deploy and operate these tools. The recently announced Autonomous Database can solve many of these problems by taking management tasks away from DBAs entirely, but it should be obvious that, at least in the short term, this service isn’t a solution for every possible use case, so on-premises Oracle databases are not going anywhere anytime soon.

It is exactly for these customers that the company has recently (and without much publicity) released its Database Security Assessment Tool (DBSAT) – a freeware tool for assessing the security configuration of Oracle databases and for identifying sensitive data in them. The tool is a completely standalone command-line program without external dependencies; it can be installed and run on any DB server in minutes to generate two types of reports.

The Database Security Assessment report provides a comprehensive overview of configuration parameters, identifying weaknesses, missing updates, improperly configured security technologies, excessive privileges and so on. For each discovered problem, the tool provides a short summary and a risk score, as well as remediation suggestions and links to the appropriate documentation. I had a chance to see a sample report, and even with my rather limited DBA skills I was able to quickly identify the biggest risks and understand which concrete actions I’d need to take to mitigate them.

The Sensitive Data Assessment report provides a different view on the database instance, showing the schemas, tables and columns that contain various types of sensitive information. The tool supports over 50 types of such data out of the box (including PII, financial and healthcare data for several languages), but users can define their own search patterns using regular expressions. Personally, I find this report somewhat less informative, although it does its job as expected. If only for executive reporting, it would be useful not just to show how many occurrences of sensitive data were found, but also to summarize the company’s overall security posture and give the CEO a few meaningful numbers as KPIs.
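
To make pattern-based discovery of this kind more tangible, here is a minimal sketch in Python. The patterns below are common textbook examples made up for illustration – they are not DBSAT’s actual definitions, and a real scanner would of course query the database rather than iterate over dictionaries:

```python
import re

# Hypothetical patterns illustrating regex-based sensitive data discovery;
# these are NOT DBSAT's actual definitions, just common textbook examples.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_rows(rows):
    """Count pattern matches per (column, data type) across a list of row dicts."""
    hits = {}
    for row in rows:
        for column, value in row.items():
            for label, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(str(value)):
                    hits[(column, label)] = hits.get((column, label), 0) + 1
    return hits

rows = [
    {"name": "Alice", "contact": "alice@example.com"},
    {"name": "Bob", "contact": "bob@example.org"},
]
print(scan_rows(rows))  # {('contact', 'EMAIL'): 2}
```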

Of course, being a standalone tool, DBSAT does not integrate with other security assessment tools from Oracle, nor does it provide any means for mass deployment across hundreds of databases. What it does provide is the option to export the reports into formats like CSV or JSON, which can then be imported into third-party tools for further processing. Still, even in this rather simple form, the program helps a DBA to quickly identify and mitigate the biggest security risks in their databases, potentially saving the company from a breach or a major compliance violation. And as we all know, these are going to become very expensive soon.
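
Assuming, purely for illustration, a simple JSON structure for the exported findings (DBSAT’s actual schema may differ), downstream processing in a third-party tool could be as simple as:

```python
import json

# Hypothetical report structure -- DBSAT's actual JSON schema may differ;
# this only sketches how an exported report could be aggregated downstream.
report_json = """
{
  "findings": [
    {"rule": "Default passwords in use", "severity": "high"},
    {"rule": "Auditing disabled", "severity": "medium"},
    {"rule": "Patch level current", "severity": "info"},
    {"rule": "Excessive DBA grants", "severity": "high"}
  ]
}
"""

def summarize(raw):
    """Count findings per severity level in a JSON report export."""
    findings = json.loads(raw)["findings"]
    summary = {}
    for finding in findings:
        severity = finding["severity"]
        summary[severity] = summary.get(severity, 0) + 1
    return summary

print(summarize(report_json))  # {'high': 2, 'medium': 1, 'info': 1}
```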

Perhaps my biggest disappointment with the tool, however, has nothing to do with its functionality. Just like other companies before it, Oracle does not seem very keen on letting the world know about tools like this. And what use is even the best security tool or feature if people do not know of its existence? Have a look at AWS, for example, where misconfigured permissions for S3 buckets have been the reason behind a large number of embarrassing data leaks. And even though AWS now offers a number of measures to prevent them, we still keep reading about new personal data leaks every week.

Spreading the word and raising awareness about the security risks and free tools to mitigate them is, in my opinion, just as important as releasing those tools. So, I’m doing my part!

Spectre and Meltdown: A Great Start Into the New Year!

It looks like we IT people have received more New Year presents than expected for 2018! The year has barely started, but we already have two massive security problems on our hands, vulnerabilities that dwarf anything discovered previously, even the notorious Heartbleed bug or the KRACK weakness in Wi-Fi protocols. Discovered back in early 2017 by several independent groups of researchers, these vulnerabilities were understandably kept from the general public to give hardware and operating system vendors time to analyze their effects and develop countermeasures, and to prevent hackers from creating zero-day exploits.

Unfortunately, the number of patches recently made to the Linux kernel alone was enough to raise the suspicions of many security experts. This led to a wave of speculation about the possible reasons behind them: does it have something to do with the NSA? Will it make all computers in the world run 30% slower? Why is Intel’s CEO selling his stock? In the end, the researchers were forced to release their findings a week early just to put an end to the wild rumors. So, what is this all about?

Technically speaking, neither Meltdown nor Spectre is caused by a bug in the usual sense. Rather, both exploit unforeseen side effects of speculative execution, a core feature of most modern processors used to significantly improve performance. The idea behind speculative execution is actually quite simple: every time a processor must check a condition to decide which part of the code to run, instead of waiting until the required data is loaded from memory (which may take hundreds of CPU cycles), it makes an educated guess and starts executing the next instructions immediately. If the guess later proves wrong, the processor simply discards those instructions and reverts its state to a previously saved checkpoint; if it was correct, the resulting performance gain can be significant. Processors have been designed this way for over 20 years, and the potential security implications of incorrect speculative execution were never considered important.
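
To make the mechanism more tangible, here is a deliberately simplified toy model in Python – none of this maps to actual CPU internals, but it captures the crucial asymmetry: after a mispredicted branch, the architectural state (registers) is rolled back, while the microarchitectural state (the cache) is not:

```python
# A toy model of why speculative execution leaks: the "CPU" rolls back its
# registers after a mispredicted branch, but the cache it touched stays warm.
# Conceptual sketch only -- real attacks measure cache timing in native code.

SECRET = 42  # a value the "program" is never allowed to read architecturally

class ToyCPU:
    def __init__(self):
        self.cache = set()      # microarchitectural state: cached line indices
        self.registers = {}     # architectural state

    def load(self, index):
        self.cache.add(index)   # every load warms a cache line
        return index

    def speculative_read(self, index, bound):
        checkpoint = dict(self.registers)
        # Speculation: execute past the bounds check before it resolves
        self.registers["r0"] = self.load(SECRET)  # out-of-bounds load
        if index >= bound:      # the check finally resolves: mispredicted
            self.registers = checkpoint   # roll back architectural state
            # ...but self.cache is NOT rolled back

cpu = ToyCPU()
cpu.speculative_read(index=1000, bound=16)
print("r0" in cpu.registers)   # False: the register write was rolled back
print(SECRET in cpu.cache)     # True: the cache side effect survived
```

An attacker who can measure which cache lines are warm (via access timing) can recover the secret even though the program, architecturally, never read it.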

Well, not anymore. Researchers have discovered multiple ways of exploiting the side effects of speculative execution that allow malicious programs to steal sensitive data they normally should not have access to. And since the root cause of the problem lies in the fundamental design of a wide range of modern Intel, AMD and ARM processors, nearly every system using those chips is affected, including desktops, laptops, servers, virtual machines and cloud services. There is also no way to detect or block attacks using these exploits with an antivirus or any other software.

In fact, the very name “Spectre” was chosen to indicate that the problem is going to haunt us for a long time, since there is no common fix for all possible exploit methods. Any program designed according to the best security practices is still vulnerable to a carefully crafted piece of malicious code that extracts sensitive data from it by manipulating the processor state and measuring the side effects of speculative execution. There is even a proof-of-concept implementation in JavaScript, meaning that merely visiting a website in a browser may trigger an attack, although popular browsers like Chrome and Firefox have already been patched to prevent it.

The only way to fully mitigate all variants of the Spectre exploit is to explicitly modify every program to disable speculative execution in sensitive places. There is some consolation in the fact that exploiting this vulnerability is quite complicated and that there is no way to attack the operating system kernel this way. The same cannot be said about the Meltdown vulnerability, however.
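
One such program-level technique is branchless index masking: instead of trusting a bounds check that the CPU may speculate past, the index itself is clamped before it is ever used, so even a speculatively executed load stays in bounds. A purely conceptual Python sketch (real mitigations are applied in native code, using masks or serializing instructions):

```python
# Toy illustration of a Spectre-style software mitigation: the index is
# clamped with a branchless mask before any use, so no bounds check needs
# to be trusted. Purely conceptual -- Python itself has no speculation.

def masked_element(data, index):
    """Return data[index & mask]; requires len(data) to be a power of two."""
    mask = len(data) - 1
    index &= mask            # clamp before any (speculative) use
    return data[index]

data = [10, 20, 30, 40]
print(masked_element(data, 1000))   # 10: 1000 & 3 == 0, always in bounds
print(masked_element(data, 5))      # 20: 5 & 3 == 1
```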

Apparently, Intel processors take so many liberties when applying performance optimizations to executed code that the same root cause gives hackers access to arbitrary system memory locations, rendering (“melting”) all the memory isolation features of modern operating systems completely useless. When running on an Intel processor, malicious code can leak sensitive data from any process or the OS kernel. In a virtualized environment, a guest process can leak data from the host operating system. Needless to say, this scenario is especially catastrophic for cloud service providers, where data sovereignty is not just a technical requirement, but a key legal and compliance foundation of their business model.

Luckily, there is a way to mitigate the Meltdown vulnerability completely at the operating system level, and that is exactly what Microsoft, Apple and the Linux kernel community have been working on in recent months. Unfortunately, enforcing separation between kernel and user space memory also means undoing the performance optimizations that processors and OS kernels rely on to make switching between different execution modes quicker. According to independent tests, depending on the application, the losses may be anywhere between 5% and 30%. Again, this may be unnoticeable to average office users, but it can be dramatic for cloud environments, where computing resources are billed by execution time. How would you like to have your monthly bill suddenly increased by 30% for… nothing, really?

Unfortunately, there is no way around it. The first and most important recommendation is, as usual: keep your systems up to date with the latest patches. Update your browsers. Update your development tools. Check the advisories published by your cloud service provider. Plan your mitigation measures strategically.

And keep a cool head – conspiracy theories are fun, but not productive in any way. And by the way: Intel officially states that its CEO’s stock sale in October had nothing to do with this vulnerability.

For Oracle, the Future Is Autonomous

Recently, I attended Oracle OpenWorld in San Francisco. For five days, the company spared no expense to inform, educate and (last but not least) entertain its customers and partners, as well as developers, journalists, industry analysts and other visitors – in total, a crowd of over 50,000. As a person somewhat involved in organizing IT conferences (on a much smaller scale, of course), I could not but stand in awe thinking about all the challenges the organizers of such an event had to overcome to make it successful and safe.

More importantly, however, there was an almost unexpected thematic twist that dominated the whole conference. As I was preparing for the event, browsing the agenda and the list of exhibitors, I found way too many topics and products quite outside of my area of coverage. Although I do have some database administrator (DBA) experience, my current interests lie squarely within the realm of cybersecurity, and I wasn’t expecting to hear a lot about it. Well, I could not have been more wrong! In the end, cybersecurity was definitely one of the most prominent topics, starting right with Larry Ellison’s opening keynote.

The first and biggest announcement was the Autonomous Database – according to Oracle, the world’s first database with fully automated management. Built upon the latest Oracle Database 18c, this solution promises to completely eliminate human labor, and hence human error, through complete automation powered by machine learning. This includes automated upgrades and patches, disaster recovery, performance tuning and more. In fact, an autonomous database does not expose any controls to a human administrator – it just works™. Of course, it does not replace all the functions of a DBA: a database specialist can now focus on the more interesting, business-related aspects of the job and leave the plumbing maintenance to a machine.

The offer comes with a unique SLA that guarantees 99.995% availability without any exceptions. And, thanks to more elastic scalability and optimized performance, “it’s cheaper than AWS”, as we were told at least a dozen times during the keynote. For me, however, the security implications of this offer are extremely important. Since the database is no longer directly accessible to administrators, this not only dramatically improves its stability and resilience against human error, but also substantially reduces the potential cyberattack surface and simplifies compliance with data protection regulations. This does not fully eliminate the need for database security solutions, but it at least simplifies the task quite a bit without any additional costs.

Needless to say, this announcement has caused quite a stir among database professionals: does it mean that the DBA is now completely replaced by an AI? Should thousands of IT specialists around the world fear for their jobs? Well, the reality is a bit more complicated: the Autonomous Database is not really a product, but a managed service combining the newest improvements of the latest Oracle Database release with a decade-long evolution of various automation technologies, running on the next generation of the Oracle Exadata hardware platform and backed by the expertise of Oracle’s leading engineers. In short, you can only get the full benefits of this new solution by becoming an Oracle Cloud customer.

This is, of course, a logical continuation of Oracle’s ongoing struggle to position itself as a cloud company. Although the company already has an impressive portfolio of cloud-based enterprise applications and continues to invest heavily in expanding its SaaS footprint, when it comes to PaaS and IaaS, Oracle still cannot really compete with rivals that entered this business years earlier. So, instead of trying to beat competitors on their traditional playing fields, Oracle is now focusing on offering unique and innovative solutions that other cloud service providers simply do not have (and in the database market probably never will).

Another security-related announcement was the unveiling of Oracle Security Monitoring and Analytics – a cloud-based solution that enables the detection, investigation and remediation of various security threats across on-premises and cloud assets. Built upon the Oracle Management Cloud platform, this new service also aims to address the cybersecurity skills gap by reducing the administration burden and improving the efficiency of security analysts.

Among other notable announcements are various services based on applied AI technologies like intelligent conversation bots and the newly launched enterprise-focused Blockchain Cloud Service based on the popular Hyperledger Fabric project. These offerings, combined with the latest rapid application development tools unveiled during the event as well, will certainly make the Oracle Cloud Platform more attractive not just for existing Oracle customers, but for newcomers of all sizes – from small startups with innovative ideas to large enterprises struggling to make their transition to the cloud as smooth as possible.

Cryptography’s Darkest Hour

For anyone working in IT security, this week surely did not start well. Not one, but two major cryptography-related vulnerabilities have been disclosed, each at least as massive in scale and potential consequences as the notorious Heartbleed incident of 2014.

First, Belgian researcher Mathy Vanhoef from the University of Leuven published the details of several critical weaknesses discovered in WPA2 – the de facto standard protocol for securing modern Wi-Fi networks. By exploiting these weaknesses, an attacker can launch so-called key reinstallation attacks (hence the name KRACK – and we’ve discussed the importance of catchy names for vulnerabilities before) and eventually decrypt sensitive data transmitted over a supposedly secure wireless channel.

As opposed to Heartbleed, however, the vulnerability is not found in a particular library or product – it’s caused by an ambiguity in the definition of the WPA2 protocol itself, so even an operating system or library that implements it correctly is still vulnerable. Thus, all desktop and mobile operating systems are affected by this attack, as well as numerous embedded and IoT devices with built-in Wi-Fi capabilities. Somewhat luckily, this protocol weakness can be fixed in a backwards-compatible manner, so we do not have to urgently switch to WPA3 (and by no means should you switch to WEP or any other even less secure connection method in your wireless network). However, there is no way to mitigate the problem without patching every client device. Changing the Wi-Fi password, for example, won’t help.

Of course, quite a few vendors have already released updates (including Microsoft), but how long will it take for everyone to apply them? And what about the huge number of legacy products that will never be patched? The only way to secure those properly is to disable Wi-Fi and basically repurpose them as expensive paperweights. For desktop and mobile users, using HTTPS-only websites or encrypted VPN tunnels to access sensitive resources is recommended, just as for any other untrusted network, wireless or not. In general, one should slowly get used to treating every network as untrusted, even one’s own home Wi-Fi.

The second vulnerability revealed this week is of a different nature, but is already classified as even more devastating by many experts. The ROCA (Return of Coppersmith’s Attack) vulnerability is an implementation flaw discovered by an international team of British, Czech and Italian researchers in a cryptographic library used in security chips produced by Infineon Technologies. The flaw essentially means that RSA encryption keys generated by these chips are not cryptographically strong and are much easier to crack.

In theory, this problem should not be as widespread as the KRACK vulnerability, but in reality it affects numerous security products from vendors such as Microsoft, Google, HP and Lenovo, and existing RSA keys dating as far back as 2012 can be vulnerable. Also, since public key cryptography is so widely used in IT – from network encryption to application code signing to digital signatures in eGovernment projects – this opens up a broad range of potential exploits: spreading malware, performing identity theft or bypassing Trusted Platform Modules to run malicious code in secure environments.

What can we do to minimize the damage of this vulnerability? Again, it’s first and foremost about checking for available security updates and applying them in a timely manner. Second, all potentially affected keys must be replaced (nobody should be using 1024-bit RSA keys in 2017 anyway).
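
Detecting the ROCA fingerprint itself requires the researchers’ dedicated detection tools, but weeding out short keys is easy to automate. As an illustration, here is a small Python sketch that extracts the modulus size from an OpenSSH “ssh-rsa” public key line; the demo builds a synthetic key blob with a placeholder modulus rather than using a real key:

```python
import base64
import struct

def rsa_modulus_bits(pubkey_line):
    """Return the modulus bit length of an OpenSSH 'ssh-rsa' public key line."""
    blob = base64.b64decode(pubkey_line.split()[1])
    fields, offset = [], 0
    while offset < len(blob):
        (length,) = struct.unpack(">I", blob[offset:offset + 4])
        offset += 4
        fields.append(blob[offset:offset + length])
        offset += length
    algo, exponent, modulus = fields[:3]
    assert algo == b"ssh-rsa"
    return int.from_bytes(modulus, "big").bit_length()

def make_key(bits):
    """Build a synthetic ssh-rsa key line for demonstration -- NOT a real key."""
    n = (1 << (bits - 1)) | 1                 # placeholder modulus of the right size
    n_bytes = n.to_bytes((bits + 7) // 8, "big")
    blob = b"".join(
        struct.pack(">I", len(part)) + part
        for part in (b"ssh-rsa", b"\x01\x00\x01", n_bytes)
    )
    return "ssh-rsa " + base64.b64encode(blob).decode()

key = make_key(1024)
bits = rsa_modulus_bits(key)
print(bits, "WEAK" if bits < 2048 else "ok")  # 1024 WEAK
```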

And, of course, we always have to be ready for new announcements. The week has only just begun, after all!

The Cargo Cult of Cybersecurity

I’ve been working in IT my whole life, and since I joined KuppingerCole ten years ago, cybersecurity has been my job. Needless to say, I like my job: even though we industry analysts are not directly involved in forensic investigations or cyberthreat mitigation, being up to date with the latest technological developments and sharing our expertise with both end users and security vendors is our daily life, which is challenging and exciting at the same time.

However, occasionally I have doubts about my career choice. Does anything I do even matter? The cybersecurity market is booming, predicted to reach nearly 250 billion USD within the next five years. But do we notice any downward trend in the number of security breaches or the financial losses due to cyberattacks? Not really…

The last time I had these thoughts was back in May, after the notorious WannaCry incident: just as hundreds of top experts were discussing the most highbrow cybersecurity problems at our European Identity and Cloud Conference, a primitive piece of malware exploiting a long-fixed problem in the Windows operating system disrupted hundreds of thousands of computers around the world, affecting organizations from public hospitals to international telecom providers. How could this even happen? All right, those poor underfunded and understaffed British hospitals at least have a (still questionable) excuse for not being able to maintain the most basic cybersecurity hygiene within their IT departments. But what excuse do large enterprises have for letting their users open phishing emails and for not having proper backups of their servers?

“But users do not care about their security or privacy,” people say. This couldn’t be further from the truth! People care very much about not being killed, so they arm themselves with guns. People care about their finances, so they do not keep their money under mattresses. And people surely care about their privacy, so they buy curtains and lock their doors. However, many people still do not realize that having an antivirus on their mobile phone is just as important for their financial stability, and sometimes even physical safety, as having a gun on their night table. And even those who are aware of this are often sold security products like magical amulets that are supposed to solve their problems without any effort. But should users really be blamed for that?

With enterprises, the situation is often even worse. Apparently, a substantial percentage of the security products purchased by companies never even gets deployed at all. And more often than not, even those that do get deployed are actively sabotaged by users who see them as a nuisance hindering their business productivity. Add the “shadow IT” problem into the mix, and you’ll realize that many companies that spend millions on cybersecurity are not really getting any substantial return on their investments. This is a classic example of a cargo cult. Sometimes, after reading about another large-scale security breach, I cannot completely suppress the mental image of a firewall made out of a cardboard box or a wooden backup appliance not connected to anything.

However, the reason for today’s rant is somewhat different and, in my opinion, even more troubling. While reading the documentation for a security product from one reputable vendor, I realized that it uses an external MySQL database to store its configuration. That got me thinking: a security product is sold with the promise of adding a layer of protection around an existing business application with known vulnerabilities. Yet this security product itself relies on another application with known vulnerabilities (MySQL isn’t exactly renowned for its security) to fulfill its basic functions. Is the resulting architecture even a tiny bit more secure? Not at all – due to the added complexity, it’s in fact even more open to malicious attacks.

Unfortunately, this approach towards secure software design is very common. The notorious Heartbleed vulnerability in the OpenSSL cryptographic library affected millions of systems around the world back in 2014, and three years later at least 200,000 have still not been patched. Of course, software vendors have their reasons for not investing in the security of their products: after all, just like any other business, they are struggling to bring their products to market as quickly as possible, and often they have neither the budget nor enough qualified specialists to design properly secured ones.

Nowadays, this problem is especially evident in consumer IoT products, which definitely deserve a whole separate blog post. However, security vendors that fail to make their own products sufficiently secure pose an even greater danger: as I mentioned earlier, for many individuals and organizations, a cybersecurity product is the modern equivalent of a safe. Or an armored car. Or an insulin pump. How can we trust a security product that is in fact about as reliable as a safe with plywood walls?

Well, if you’ve read my past blog posts, you probably know that I’m a strong proponent of government regulation of cybersecurity. I know this idea isn’t exactly popular among software vendors, but is there really a viable alternative? After all, gunsmiths and medical equipment manufacturers have been under strict government control for ages, and even security guards and private investigators must obtain licenses first. Why not security vendors? For modern digital businesses, the reliability of cybersecurity products is at least as important as the pick resistance of their door locks.

Unfortunately, this kind of government regulation probably isn’t going to happen anytime soon, so companies looking for security solutions are still stuck with the “caveat emptor” principle. Without enough experience of their own to judge whether a particular product is really capable of delivering its declared functionality, one should, of course, turn to an independent third party for qualified advice. For example, to an analyst house like us :)

The next most useful thing to look for is probably certification against government or industry standards. For example, when choosing an encryption solution, it’s wise to look for FIPS 140-2 certification at level 2 or higher. There are appropriate security certifications for cloud service providers, financial institutions, industrial networks and so on.

In any case, do not take any vendor’s claims for granted. Ask for details about the architecture of their products, which security standards they implement, and whether they rely on open source libraries or third-party products. The more pressure you put on vendors about secure design, the higher the chances that in the future they will see security by design as a unique selling proposition rather than a waste of resources. And as always, when you don’t know where to start, just ask an expert!

When Are We Finally Going to Do Something About Ransomware?

Just as we returned from our annual European Identity and Cloud Conference, where we’d spent four days talking about cybersecurity, identity management and privacy protection with top experts from around the world, we were faced with news from Great Britain, where the latest large-scale ransomware attack had nearly shut down IT systems in at least 16 hospitals. Medical workers were completely locked out of their computers. Patient records, test results and blood banks were no longer available. Critical patients were rushed to other hospitals for emergency surgeries, while doctors had to switch back to pen and paper to carry on with their duties.

How could all this even happen? Sure, the media often present ransomware as the diabolically complex work of elite hacker groups, but in reality it is one of the least technologically advanced kinds of malware, barely more sophisticated than the proverbial Albanian virus. Typically, ransomware is spread via massive phishing campaigns that lure unsuspecting users into clicking an attachment, letting the malware exploit a known vulnerability to infect their computers. Finally, the ransomware holds the victim’s computer hostage by encrypting important files or locking access to the whole system, demanding a payment to restore it.

This kind of malware is nothing new – the first prototype was developed over 20 years ago – but only recently, as the number of computers connected to the Internet has grown exponentially along with the availability of online payment services, has it become a profitable business for cybercriminals. After all, there is no need to spend weeks planning a covert targeted attack or developing evasion technologies – one can easily utilize readily available spam networks and vulnerability exploits to start collecting bitcoins or even iTunes gift cards from poor home users mourning the loss of their vacation photos.

In the last couple of years, we’ve learned about several major ransomware families like CryptoLocker and CryptoWall, which managed to collect millions of dollars in ransom before they were finally taken down by the authorities. Unfortunately, new strains constantly appear to evade antivirus detection and to target new groups of victims around the world. The WannaCry ransomware that affected the hospitals in Britain wasn’t in fact targeting the NHS specifically – within just a few hours of being initially identified, it had already spread around the world, affecting targets in nearly 100 countries, including large telecommunications companies in Spain and government agencies in Russia.

Personally, I find it hard to believe that this was the original intention of the people behind this malware campaign. Rather, it looks like “a job done too well”, which led to an uncontrolled spread far beyond what was initially planned. A notable fact about this ransomware strain, however, is that it uses a particular vulnerability in Microsoft Windows that had been weaponized by the NSA and became public in April after a leak by the Shadow Brokers group.

Although this exploit had been patched by Microsoft even before the leak, a huge number of computers around the world have not yet been updated. This, of course, includes the British hospitals, which still largely rely on extremely outdated computers running Windows XP. Without the budget needed to upgrade and maintain their IT systems, without properly staffed IT departments and, last but not least, without proper user education, the whole IT infrastructure of the NHS was basically a huge ticking bomb, which finally went off today.

So, what can we do to avoid being hit by a ransomware like this? It is worth stressing again that resilience against ransomware attacks is a matter of the most basic “cybersecurity hygiene” practices. My colleague John Tolbert has outlined them in one of his blog posts a month ago. We are planning to publish additional reports on this topic in the near future, including a Leadership Compass on antimalware and endpoint security solutions, so watch this space for new announcements.

There is really nothing complicated about maintaining proper backups and not clicking on attachments in phishing emails, so if an organization is affected by ransomware, it is a strong indicator that its problems lie beyond the realm of technology. For several years, we’ve been talking about a similar divide in approaches to cybersecurity between IT and OT. However, where OT experts at least have their reasons for neglecting IT security in favor of safety and process continuity, the glaring disregard for the most basic security best practices in many public-sector institutions can only be attributed to insufficient funding and the resulting massive lack of qualified personnel – needed not just to operate and secure IT infrastructures, but to continuously educate users about the latest types of cyberthreats. Unfortunately, the recent cuts in NHS funding do not promise any positive changes for British hospitals.

There is a legal aspect to the problem as well. Whereas oil rigs, nuclear power plants and water supplies are rightfully classified as critical infrastructure, with special government programs created to protect them, hospitals are somehow not yet seen as critical, although many lives obviously depend on them. If an attack on a power plant can rightfully be considered an act of terrorism, why isn't disrupting critical medical services treated the same way?

Quite frankly, I very much hope that, regardless of the motives of the people behind this ransomware, cybersecurity experts and international law enforcement agencies team up to find them as quickly as possible and come down on them like a ton of bricks, if only for the sake of sending a clear warning to other cybercriminals. Because if they don't, we can only brace ourselves for more catastrophes in the future.

Cognitive Technologies: The Next Big Thing for IAM and Cybersecurity

The ongoing Digital Transformation has already made a profound impact not just on enterprises, but on our whole society. By adopting technologies such as cloud computing, mobile devices or the Internet of Things, enterprises strive to unlock new business models, open up new communication channels with their partners and customers and, of course, reduce their capital investments.

For more and more companies, digital information is no longer just another means of improving business efficiency, but in fact their core competence and intellectual property.

Unfortunately, the Digital Transformation does not only enable a whole range of business prospects, it also exposes the company's most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of a security perimeter ceases to exist, and traditional security tools cannot keep up with new, sophisticated cyberattack methods.

In recent years, the IT industry has been busy developing various solutions to this massive challenge; however, each new generation of security tools, be it Next Generation Firewalls (NGFW), Security Information and Event Management (SIEM) or Real-Time Security Intelligence (RTSI) solutions, has never entirely lived up to expectations. Although they do offer significantly improved threat detection and automation capabilities, their "intelligence level" is still not even close to that of a human security analyst, who still has to operate these tools to perform forensic analysis and make informed decisions quickly and reliably.

All this has led to a massive lack of skilled workforce to man all those battle stations that comprise a modern enterprise’s cyber defense center. There are simply not nearly enough humans to cope with the vast amounts of security-related information generated daily. The fact that the majority of this information is unstructured and thus not available for automated analysis by computers makes the problem much more complicated.

Well, the next big breakthrough promising to overcome this seemingly unsolvable problem comes from the realm of science fiction. Most people are familiar with so-called cognitive technologies from books or movies, where they are usually referred to as "Artificial Intelligence". Although true "strong AI" comparable to a human brain may remain purely theoretical for quite some time, applications of cognitive technologies (such as speech recognition, natural language processing, computer vision or machine learning) have already found practical uses in many fields. From Siri and Alexa to market analysis and law enforcement: these technologies are already in use.

More relevant for us at KuppingerCole (and hopefully for you as well) are potential applications for identity management and cybersecurity.

A cognitive security solution can utilize natural language processing to analyze both structured and unstructured security information the way human analysts currently do. This won't be limited just to pattern or anomaly recognition, but will extend to proper semantic interpretation and logical reasoning based on evidence. Potentially, this may save an analyst not days but months of work, leaving them ideally only to confirm the machine's decision with a mouse click. Similarly, continuous learning, reasoning and interaction can significantly improve existing dynamic policy-based access management solutions. Taking into account not just simple factors like geolocation and time of day, but complex business-relevant cognitive decisions will increase operational efficiency, provide better resilience against cyberthreats and, last but not least, improve compliance.

Applications of cognitive technologies for Cybersecurity and IAM will be a significant part of this year’s European Identity & Cloud Conference. We hope to see you in Munich on May 9-12, 2017!

Building a Future-proof Intelligent Security Operations Center, Part 2

Security Intelligence Platforms (SIP) are universal and extensible security analytics solutions that offer a holistic approach towards maintaining complete visibility and management of the security posture across the whole organization. Only by correlating both real-time and historical security events from logs, network traffic, endpoint devices and even cloud services, and enriching them with the latest threat intelligence data, does it become possible to identify previously unknown advanced security threats quickly and reliably, respond to them in time and thus minimize the damage.

They are, in a sense, "next-generation SIEM solutions" based on RTSI technologies, providing substantial improvements over traditional SIEMs in both functionality and efficiency:

  • Performing real-time or near real-time detection of security threats without relying on predefined rules and policies;
  • Correlating both real-time and historical data across multiple sources to detect malicious operations as whole events, not separate alerts;
  • Dramatically decreasing the number of alarms by filtering out statistical noise, eliminating false positives and providing clear risk scores for each detected incident;
  • Offering a high level of automation for typical analysis and remediation workflows, thus significantly improving the work efficiency for security analysts;
  • Integrating with external Threat Intelligence feeds in industry standards like STIX/TAXII to incorporate the most recent security research into threat analysis.
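To make the noise-reduction and risk-scoring points above concrete, here is a minimal sketch of how such a platform might collapse duplicate alerts and surface only a short, ranked incident list. This is purely illustrative: the alert fields (`source`, `signature`, `risk`) and the threshold are assumptions for this example, not any vendor's actual API.

```python
from collections import defaultdict

def triage(alerts, risk_threshold=0.7):
    """Collapse duplicate alerts and keep only high-risk incidents.

    Each alert is a dict with hypothetical 'source', 'signature' and
    'risk' (a 0..1 score) fields, used here only for illustration.
    """
    grouped = defaultdict(list)
    for alert in alerts:
        # Treat repeated alerts with the same signature from the
        # same source as duplicates of one underlying event.
        grouped[(alert["source"], alert["signature"])].append(alert)

    incidents = []
    for (source, signature), group in grouped.items():
        score = max(a["risk"] for a in group)  # worst-case score wins
        if score >= risk_threshold:
            incidents.append({"source": source,
                              "signature": signature,
                              "count": len(group),
                              "risk": score})
    # Hand the analyst a short list ranked by risk, not raw noise.
    return sorted(incidents, key=lambda i: i["risk"], reverse=True)
```

Fed thousands of raw SIEM events, a pipeline along these lines would return only a handful of ranked incidents, which is exactly the efficiency gain described above.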

Another key aspect of many SIP products is the incorporation of Incident Response Platforms. Designed to orchestrate and automate incident response processes, these solutions not only dramatically simplify a security analyst's job of analyzing and containing a breach, but also provide predefined and highly automated workflows for managing the legal and even PR consequences of a security incident, reducing possible litigation costs, compliance fines and brand reputation losses. Modern SIP products either directly include incident response capabilities or integrate with third-party products, finally delivering a full end-to-end security operations and response solution.

By dramatically reducing the number of incidents that require an analyst's attention and by automating forensic analysis and decision making, next-generation SIPs can help address the growing shortage of skilled people in information security. Unlike traditional SIEMs, next-generation SIPs should not require a team of trained security experts to operate, relying instead on actionable alerts understandable even to business users, thus making them accessible even to smaller companies that previously could not afford operating their own SOC.

Now, what about the future developments in this area? First of all, it’s worth mentioning that the market continues to evolve, and we expect its further consolidation through mergers and acquisitions. New classes of security analytics solutions are emerging, targeting new markets like the cloud or the Internet of Things. On the other hand, many traditional security tools like endpoint or mobile security products are incorporating RTSI technologies to improve their efficiency. In fact, the biggest obstacle for wider adoption of these technologies is no longer the budget, but rather the lack of awareness that such products already exist.

However, the next disruptive technology that promises to change the way Security Operations Centers are operated seems to be Cognitive Security. Whereas Real-Time Security Intelligence can provide security analysts with better tools to improve their efficiency, it still relies on humans to perform the actual analysis and make informed decisions about each security incident. Applying cognitive technologies (the closest thing to artificial intelligence as we know it from science fiction) to the field of cybersecurity promises to overcome this limitation.

Technologies for language processing and automated reasoning not only help unlock vast amounts of unstructured "dark security data", which until now was not available for automated analysis; they actually promise to let the AI do most of the work that a human analyst must perform today: collect context information, define a research strategy, pull in external intelligence and finally make an expert decision on how to respond to the incident in the most appropriate way. Supposedly, the analyst would only have to confirm the decision with a click of a mouse.
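As a crude illustration of what "unlocking unstructured data" means at the very simplest level, consider pulling machine-readable indicators of compromise out of a free-text incident report. This toy regex sketch is nowhere near the semantic reasoning described above, and the patterns and function name are invented for this example:

```python
import re

# Toy patterns for two common indicator types. Real cognitive tools go
# far beyond regular expressions, but even this crude step turns prose
# into data a machine can correlate.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(report_text):
    """Extract deduplicated, sorted indicators from free-text reports."""
    return {kind: sorted(set(pattern.findall(report_text)))
            for kind, pattern in IOC_PATTERNS.items()}
```

Run over a sentence like "Beacon to 10.0.0.5, dropper hash d41d8cd98f00b204e9800998ecf8427e", this yields structured indicators that could then feed the automated correlation and reasoning steps described above.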

It sounds too good to be true, but the first products incorporating cognitive security technologies are already appearing on the market. The future is now!

Building a Future-proof Intelligent Security Operations Center

I have to admit that I find the very concept of a Security Operations Center extremely… cinematic. As soon as you mention it to somebody, they would probably imagine a large room reminiscent of the NASA Mission Control Center – with walls lined with large screens and dozens of security experts manning their battle stations. From time to time, a loud buzzer informs them that a new security incident has been discovered, and a heroic team starts running towards the viewer in slow motion…

Of course, in reality most SOCs look much less glamorous, but this cliché image from action movies still captures the primary purpose of a SOC perfectly: it exists to respond to security breaches as quickly as possible in order to contain them and minimize the losses. Unfortunately, looking back at the last decade of SOC platform development, it becomes clear that many vendors have been focusing their efforts elsewhere.

Traditional Security Information and Event Management (SIEM) platforms, which have long been the core of security operations centers, have come a long way and become really good at aggregating security events from multiple sources across the organization and providing monitoring and alerting functions. But when it comes to analyzing a discovered incident, making an informed decision about it and finally mitigating the threat, a security expert's job is still largely manual and time-consuming, since traditional SIEM solutions offer few automation capabilities and usually do not support two-way integration with security devices like firewalls.

Another major problem is the sheer number of security events a typical SOC receives daily. The more deperimeterized and interconnected modern corporate networks become, the more open they are to new types of cyberthreats, both external and internal, and the number of events collected by a SIEM keeps growing rapidly. Analysts no longer have nearly enough time to analyze and respond to each alert. The situation is further complicated by the fact that the overwhelming majority of these events are false positives, duplicates or otherwise irrelevant. However, a traditional SIEM offers no way to differentiate them from real threats, drowning analysts in noise and leaving them only minutes to make an informed decision about each incident.

All this leads to the fundamental problem the IT industry is now facing: because of the immense complexity of setting up and operating a security operations center, which requires a large budget and a dedicated team of security experts, many companies simply cannot afford one, and even those that can are continuously struggling with the lack of skilled workforce to manage their SOC. In the end, even for the best-staffed security operations centers, the average response time to a security incident is measured in days if not weeks, not even close to the ultimate goal of dealing with incidents in real time.

In recent years, this has led to the emergence of a new generation of security solutions based on Real-Time Security Intelligence. Such tools utilize Big Data analytics technologies and machine learning algorithms to correlate large amounts of security data, apply threat intelligence from external sources, detect anomalies in activity patterns and produce a small number of actionable alarms clearly ranked by risk score. They promise to dramatically reduce the time to mitigate a breach by performing data analysis in real time, eliminating statistical noise and false positives and, last but not least, providing a high degree of automation to make the security analyst's job easier.
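The anomaly-detection idea can be made concrete with a toy baseline-and-deviation check. The sketch below flags users whose current activity deviates strongly from their own history; it is a deliberately simplistic illustration, not how any specific RTSI product works, and all names and thresholds are assumptions:

```python
import statistics

def anomalous_users(history, current, z_threshold=3.0):
    """Flag users whose current event count deviates strongly
    from their own historical baseline.

    history: dict mapping user -> list of past daily event counts
    current: dict mapping user -> today's event count
    Returns a list of (user, z_score) pairs ranked by severity.
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to establish a baseline
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts)
        if stdev == 0:
            stdev = 1.0  # avoid division by zero on flat baselines
        z = (current.get(user, 0) - mean) / stdev
        if z >= z_threshold:
            flagged.append((user, z))
    # A ranked shortlist stands in for the "actionable alarms
    # clearly ranked by risk score" described above.
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

A user who normally generates a handful of events per day but suddenly produces dozens would surface at the top of the list, while normal fluctuations stay below the threshold; production systems refine this basic idea with far richer behavioral models.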

Although KuppingerCole has been promoting this concept for quite a few years, the first real products appeared only a couple of years ago, and since then the market has evolved and matured at an incredible rate. Back in 2015, when KuppingerCole attempted to produce a Leadership Compass on RTSI solutions, we failed to find enough vendors for a meaningful rating. In 2017, however, we could easily identify over 25 Security Intelligence Platform solutions offered by a variety of vendors, from large veteran players known for their SIEM products to newly established innovative startups.

To be continued...

Big Data and Information Security Study

Since the notion of a corporate security perimeter has all but disappeared in recent years thanks to the growing adoption of cloud and mobile services, information security has experienced a profound paradigm shift from traditional perimeter protection tools towards monitoring and detection of malicious activities within corporate networks. The increasingly sophisticated attack methods used by cybercriminals and, even more so, the growing role of malicious insiders in recent large-scale security breaches clearly indicate that traditional approaches to information security can no longer keep up.

As the security industry's response to these challenges, a new generation of security analytics solutions has emerged in recent years, able to collect, store and analyze huge amounts of security data across the whole enterprise in real time. These Real-Time Security Intelligence solutions combine Big Data and advanced analytics to correlate security events across multiple data sources, providing early detection of suspicious activities, rich forensic analysis tools and highly automated remediation workflows.

Industry analysts, ourselves included, have been covering this fundamental shift in information security for a few years already. However, getting the message across to the general public is not an easy task. To find out how many organizations around the world truly understand the critical role of security analytics technology in their corporate security strategies, earlier this year KuppingerCole teamed up with BARC, a leading enterprise software industry analyst and consulting firm specializing in areas including Data Management and Business Intelligence, to conduct a global survey on Big Data and Information Security. The survey focused on the security-related aspects of Big Data analytics in cybersecurity and fraud detection and is based on contributions from over 330 participants from 50 countries, representing enterprises of all sizes across industries such as IT, Services, Manufacturing, Finance, Retail and the Public Sector.

The study delivers insights into the level of awareness and the current approaches to information security and fraud detection in organizations around the world. It measures the importance, status quo and future plans of Big Data security analytics initiatives, presents an overview of the various opportunities, benefits and challenges relating to those initiatives, and outlines the range of technologies currently available to address those challenges.

Here are a few highlights of the study results:

Information Security and Big Data are recognized as the two most important IT trends

Over half of the survey respondents consider Big Data technology one of the cornerstones of the Digital Transformation and regard protecting their digital assets from security risks and compliance violations as extremely important. The public awareness of the potential of security analytics solutions is very impressive as well: almost 90% of the participants believe that these solutions will play a critical role in their corporate security infrastructures.

Current implementations are still lagging behind

Unfortunately, only a quarter of the respondents have already implemented big data security analytics measures. Even fewer, just 13%, consider themselves best-in-class in this field, believing that they understand the technology better than their competitors.

Benefits from big data security analytics are high

The overwhelming majority of the best-in-class participants believe that security analytics can bring substantial benefits to their companies. In fact, over 70% of all respondents, even those who do not yet have a budget or a strategy for security analytics, already consider the potential benefits of implementing such a solution to be high or at least moderate.

Best-in-class companies use a wide range of technologies

The companies with a deep understanding of current information security trends and technologies clearly realize that only multi-layered and well-integrated security architectures are capable of resisting modern sophisticated cyberattacks. They are deploying multiple security tools not just for threat protection, but for identity and access governance, strong authentication, SIEM and user behavior analytics as well. Unfortunately, many of the "laggards" are not even aware that some of these technologies exist.

Automated security controls are a key differentiator

Identifying a security incident is just the first step of a complex remediation process, which is still largely manual and requires a skilled security expert to carry it out properly using a large number of security tools. New-generation security analytics solutions therefore place a strong emphasis on automation, which helps reduce the skill gap and ideally lets even a non-technical person initiate an automated incident response process. 98% of the best-in-class respondents are already aware of these developments and consider automation a key aspect of security solutions.

 

You’ll find a short summary of our findings in the handy infographic above. The complete study can be downloaded from our website in English or German. Thanks to the generosity of MicroStrategy, Inc., we are able to make it available free of charge.
