Blog posts by Alexei Balaganski

Amazon WorkMail – a new player on the Enterprise Email and Calendaring market

Amazon Web Services has again made headlines today by announcing Amazon WorkMail – a managed email and calendaring service targeted at corporate customers. This is obviously a direct challenge to their biggest competitors, namely Google and Microsoft, and the differentiators Amazon is focusing on are ease of use and security.

Amazon WorkMail is described as a completely managed replacement for an organization’s own legacy email infrastructure. Because the service is compatible with Microsoft Exchange and can integrate with an existing on-premises Active Directory, the migration process should be quick and seamless. And since AWS takes over most administrative chores, such as patching or backups, it can dramatically decrease administration effort and costs.

Although WorkMail has its own web interface, AWS is more focused on supporting existing mail and calendaring tools. Any ActiveSync-capable client, including Microsoft Outlook for Windows and OS X, as well as the native iOS and Android email clients, is supported without installing any plug-ins. Migration from an on-premises Exchange server can be completely transparent and does not require any changes on end-user devices. A migration wizard is provided as part of the package.

With the new service, AWS is also placing a big emphasis on security. Since email has long been an integral part of our daily business processes, a lot of sensitive corporate information passes through it and ends up stored on the mail server. By integrating with the AWS Key Management Service (KMS), WorkMail automatically encrypts all email data at rest while giving customers complete control over the encryption keys. It is also possible to restrict storage of this information to a specific geographical region to ensure compliance with local privacy regulations.
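
To illustrate the "customer-controlled keys" part of that claim, here is a minimal boto3 sketch that creates a customer-managed KMS key and reads back its key policy. It is an illustration of the KMS side only, under assumptions: it does not show how WorkMail itself is pointed at a particular key, and the region and key description are placeholders.

    # A minimal sketch of customer-controlled encryption keys in KMS.
    # Assumes boto3 and configured AWS credentials; the region is a placeholder.
    import boto3

    kms = boto3.client("kms", region_name="eu-west-1")

    # Create a customer-managed key that stays under the customer's control
    key = kms.create_key(Description="WorkMail mailbox encryption key")
    key_id = key["KeyMetadata"]["KeyId"]

    # The key policy, controlled by the customer, decides who may use the key
    policy = kms.get_key_policy(KeyId=key_id, PolicyName="default")
    print(key_id)
    print(policy["Policy"])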

Last year, AWS announced their Zocalo service for secure storage and sharing of enterprise data, a direct competitor to other cloud storage services like Dropbox or Google Drive. Needless to say, WorkMail is tightly integrated with Zocalo, allowing documents to be exchanged securely instead of being sent as unprotected attachments. In fact, AWS offers a bundle of WorkMail and Zocalo at an attractive price.

There is one potential misunderstanding, however, which I feel obligated to mention. Even with all the security features integrated into WorkMail, it still cannot be considered a true end-to-end encryption solution and is thus potentially vulnerable to various security problems. This is another example of a tradeoff between security and convenience, one that Amazon simply had to make to ensure compatibility with existing email programs and protocols.

Still, with an impressive integrated offering and a traditionally aggressive pricing model, Amazon WorkMail is definitely another step in AWS’s steady push towards global market leadership.

FIDO Alliance announces final FIDO 1.0 specifications

Yesterday, culminating over 20 months of hard work, the FIDO Alliance published the final 1.0 versions of its Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F) specifications, apparently setting a record for the fastest development of a standard in the identity management industry.

I wrote a post about the FIDO Alliance in October, when the first public announcement of the specifications was made. Since that time, I’ve had an opportunity to test several FIDO-compatible solutions myself, including the Security Key and YubiKey NEO-n from Yubico, as well as the FIDO Ready fingerprint sensor in my Galaxy S5 phone, which now lets me access my PayPal account securely. I’ve studied the documentation and reference code for building U2F support into web applications and cannot wait to try it myself, seeing how easy it looks. Probably the only thing stopping me right now is that my favorite browser hasn’t implemented U2F yet.

Well, I hope that this will change soon, because that’s what publishing finalized specifications is about: starting today, FIDO Alliance members are free to officially market their FIDO Ready strong authentication solutions, and non-members are encouraged to deploy them with peace of mind, knowing that their implementations will interoperate with current and future products based on these standards. Press coverage of the event seems to be quite extensive, with many non-technical publications picking up the news. I believe that to be another indication of the importance of strong yet simple authentication for everyone. Even those who do not understand the technical details are surely picking up the general message of “making the world free of passwords and PINs”.

Those who care about the technical details will probably want to know what has changed in the final version since the last published draft; I’m sure these changes can be found on the FIDO Alliance’s website or in one of their webinars. What is more important, however, is that products released earlier remain compatible with the final specification and that we should expect many new product announcements from FIDO members really soon. We should probably also expect more companies to join the alliance, now that the initiative is gaining traction. Mozilla Foundation, that includes you as well!

In the meantime, my congratulations to the FIDO Alliance on another important milestone in their journey towards a future without passwords.

Quis custodiet ipsos custodes?

Or, if your Latin is a bit rusty: “Who will guard the guards themselves?” That was my first thought when I read an article published by Heise Online. Apparently, popular security software from Kaspersky Lab, including at least their Internet Security and Antivirus products, is still susceptible to the now well-known POODLE exploit, which allows an attacker in a man-in-the-middle position to downgrade a connection to SSL 3.0 and exploit its weak encryption, effectively breaking the connection’s cryptographic security.

When this vulnerability was published in October, many security researchers called for the immediate demise of SSL 3.0, which is a very outdated and in many respects weak protocol; however, quite a lot of older software still doesn’t support TLS, its modern replacement. In the end, many web services, as well as all major browser vendors, implemented some sort of protection against the exploit, either by disabling SSL 3.0 completely or by preventing downgrade attacks using TLS_FALLBACK_SCSV. For a couple of months, we felt safe again.

Well, it turns out that getting rid of POODLE isn’t as easy as we thought: it’s not enough to harden both ends of the communication channel, you also have to think about the legitimate “men in the middle”, which can still be unpatched and vulnerable. This is exactly what happened to Kaspersky’s security products: as soon as the option “Scan encrypted connections” is enabled, they intercept an outgoing secure connection, decrypt and analyze its content, and then establish a new secure connection to the target website. Unfortunately, this new connection still uses SSL 3.0, ready to be exploited.
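
For readers who want to check their own endpoints, here is a minimal sketch that simply attempts an SSLv3-only handshake against a host and reports whether it was accepted. It assumes a Python/OpenSSL build that still exposes SSL 3.0 (modern builds often do not), and the host name is a placeholder.

    # Probe whether a server (or an intercepting proxy) still accepts SSL 3.0.
    # Requires an OpenSSL build with SSLv3 enabled; host/port are placeholders.
    import socket
    import ssl

    def accepts_sslv3(host, port=443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv3)  # SSL 3.0 only, no TLS
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock):
                    return True   # handshake succeeded: SSL 3.0 is accepted
        except (ssl.SSLError, OSError):
            return False          # handshake refused or host unreachable

    print(accepts_sslv3("example.com"))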

Think about it: even if you have the latest browser that explicitly disables SSL 3.0, your antivirus software would quietly make your security worse without letting you know (your browser connects to the local proxy using the newer TLS protocol, so everything looks perfectly safe). As I wrote regarding the Heartbleed bug in April: “there is a fundamental difference between being hacked because of ignoring security best practices and being hacked because our security tools are flawed”. The latter not only adds insult to injury, it can severely undermine users’ trust in security software, which in the end is bad for everyone, even the particular vendor’s competitors.

The problem seems to have been originally discovered by a user who posted his findings on Kaspersky’s support forum. I must admit I find the support engineer’s reply very misleading: the SSL vulnerability is by no means irrelevant, and one can imagine multiple scenarios where it could lead to sensitive data leaks.

Well, at least, according to Heise, the company is already working on a patch, which will be released sometime in January. Until then, you should think twice before enabling this option: who is going to protect your antivirus, after all?

Regin Malware: Stuxnet’s Spiritual Heir?

As if the IT security community hasn’t had enough bad news recently, this week began with a big one: according to a report from Symantec, a new, highly sophisticated piece of malware has been discovered, which the company dubbed “Regin”. Apparently, the level of complexity and customizability of the malware rivals, if not trumps, its famous relatives such as Flamer, Duqu and Stuxnet. Obviously, the investigation is still ongoing and Symantec, together with other researchers like Kaspersky Lab and F-Secure, is still analyzing its findings, but even the scarce details available allow us to draw a few far-reaching conclusions.

Let’s begin with a short summary of the currently known facts (although I do recommend reading the full reports from Symantec and Kaspersky Lab linked above; they are really fascinating, if a bit too long):

  1. Regin isn’t really new. Researchers have been studying its samples since 2012, and the initial version seems to have been in use since at least 2008. Several components have timestamps from 2003. It makes you appreciate even more how it managed to stay under the radar for so long. And did it really? According to F-Secure, at least one company affected by this malware two years ago explicitly decided to keep quiet about it. What fertile ground for conspiracy theorists!
  2. Regin’s level of complexity trumps practically any other known piece of malware. Five stages of deployment, built-in drivers for encryption, compression, networking and virtual file systems, a variety of stealth techniques, different deployment vectors, and most importantly a large number of payload modules – everything indicates the level of technical competence and financial investment of a state-sponsored project.
  3. Nearly half of the affected targets have been private individuals and small businesses, and the primary vertical the malware appears to be targeting is the telecommunications industry. According to Kaspersky Lab’s report, code for spying on GSM networks has been discovered in it. Geographically, the primary targets appear to be Russia and Saudi Arabia, as well as Mexico, Ireland and several other European and Middle Eastern countries.

So, is Regin really the new Stuxnet? Well, no. Surely, its incredible level of sophistication and flexibility indicates that it is almost certainly the result of state-sponsored development. However, Regin’s mode of operation is the complete opposite of its predecessor’s. Stuxnet was a highly targeted attack on Iranian nuclear enrichment facilities with the ultimate goal of sabotaging their work. Regin, on the other hand, is an intelligence-gathering spyware tool, and it doesn’t seem to be targeted at a specific company or government organization. On the contrary, it’s a universal and highly flexible tool designed for long-term covert operations.

Symantec has carefully avoided naming a particular nation state or agency that may have been behind this development, but the fact that no infections have been observed in the US or UK is already giving people ideas. And looking at the Regin discovery as part of a bigger picture makes me feel uneasy.

After Snowden’s revelations, there was a lot of hope that public outcry and pressure on governments would somehow lead to major changes limiting intelligence agencies’ powers for cyber espionage. Unfortunately, nothing of that kind has happened yet. In fact, looking at the FUD campaign the FBI and DoJ are currently waging against mobile vendors (“because of your encryption, children will die!”), or the fact that the same German BND intelligence service that’s promoting mandatory encryption is quietly seeking to install backdoors into email providers and spending millions on zero-day exploits, there isn’t much hope for change left. Apparently, they are oblivious to the fact that they are not just undermining trust in the organizations that supposedly exist to protect us from foreign attackers, but are also opening new attack surfaces for those attackers by setting up backdoors and financing the development of new exploits. Do they honestly believe that such a backdoor or exploit won’t be discovered and abused by hackers? This could probably be a topic for a separate blog post…

Isn’t it ironic that amid all the talk about Chinese and Russian hackers, the biggest threat to our cybersecurity might come from the West?

Getting a Grip on Operational Technology

Let’s begin with a couple of fundamental definitions:

Information Technology (IT) can be defined as a set of infrastructures, devices and software for processing information. A traditional IT system is in charge of storing, transmitting and transforming data, but it does not interface directly with the physical world.

Operational Technology (OT) is a set of hardware devices, sensors and software that support management and monitoring of physical equipment and processes within an enterprise, such as manufacturing plants or power distribution grids. OT deals with such components as various sensors, meters and valves, as well as industrial control systems (ICS) that supervise and monitor them.

The terms ICS and SCADA, by the way, are nowadays often used interchangeably; however, this isn’t strictly true, since Supervisory Control and Data Acquisition (SCADA) is just a subset of industrial control systems, other types being embedded systems, distributed control systems, etc. Traditionally, the term SCADA has been used for large-scale distributed control systems, such as a power grid or a gas pipeline.

Historically, IT and OT have evolved quite independently, driven by completely different business demands, requirements and regulations. In a sense, Operational Technology predates the era of computers – the first manufacturing control systems weren’t even electronic! Early ICS were monolithic, physically isolated systems without network connectivity. Later generations were usually based on proprietary communication protocols and device-specific real-time operating systems. Driven above all by the demand for process continuity, they were usually designed without security in mind.

Current ICS, however, have gradually evolved towards large-scale systems based on open standards and protocols, such as IP, as well as standard PCs running Windows as control workstations. They are becoming increasingly interconnected with office networks and the Internet. Yet modern industrial networks are often still plagued by the same blatant disregard for security. The underlying reason has little to do with technology; on the contrary, it’s a consequence of a deep cultural divide between OT and IT. Operations departments usually consist of industry specialists with an engineering background, while IT departments are staffed by people without knowledge of manufacturing processes. OT is usually managed by a business unit with requirements, strategies and responsibilities that differ from IT’s. Instead of collaborating, the two are often forced to compete for budgets and fight over issues that the other party simply sees as insignificant.

The times are changing, however. As we approach the new “connected” age, the technological divide between industrial and enterprise networks is disappearing. Smart devices or “things” are everywhere now, and embedded intelligence finds widespread use in industrial networks as well. A modern agile business constantly demands new ways of communicating with partners, customers and other external entities. All this creates exciting new opportunities. And new risks.

Opening OT to the world means that industrial networks are exposed to the same old security problems, such as malware attacks and a lack of strong authentication. However, the challenges for information security professionals go far beyond that; there are challenges that traditional IT security isn’t yet capable of addressing. These include technical issues like securing proprietary programmable logic controllers (PLCs), business requirements like ensuring manufacturing process continuity, and completely new challenges like enabling massive-scale identity services for the Internet of Everything.

The convergence of IT and OT is therefore inevitable, even though the challenges organizations are going to face on the way there look daunting. And it is the responsibility of IT specialists to lead and steer this process.

“If not us, then who? If not now, then when?”

This article originally appeared in the KuppingerCole Analysts' View newsletter.

Big News from the FIDO Alliance

The FIDO Alliance (where FIDO stands for Fast IDentity Online) is an industry consortium formed in July 2012 with the goal of addressing the lack of interoperability among various strong authentication devices. Its current members include strong authentication solution vendors (such as RSA, Nok Nok Labs or Yubico), payment providers (VISA, MasterCard, PayPal, Alibaba), as well as IT industry giants like Microsoft and Google. The mission of the FIDO Alliance has been to reduce reliance on passwords for authentication and to develop specifications for open, scalable and interoperable strong authentication mechanisms.

KuppingerCole has been closely following the FIDO Alliance’s progress for the last couple of years. Initially, Martin Kuppinger was somewhat skeptical about the alliance’s chances of gaining enough support and acceptance among vendors. However, the steady stream of new members joining the alliance, as well as announcements like the first FIDO authentication deployment by PayPal and Samsung earlier this year, confirm its dedication to leading a paradigm shift in the current authentication landscape. It’s not just about getting rid of passwords, but about giving users the opportunity to rely on their own personal digital identities, potentially bringing an end to the current rule of social logins.

After years of collaboration, the Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F) specifications were made public in October 2014. This was closely followed by several announcements from different alliance members unveiling products and solutions implementing the new FIDO U2F standard.

The one that definitely made the biggest splash is, of course, Google’s announcement of strengthening its existing 2-step verification with a hardware-based second factor, the Security Key. Although Google has been a strong proponent of multi-factor authentication for years, its existing infrastructure is based on one-time codes sent to users’ mobile devices. Such schemes are known to be prone to various attacks and cannot protect users from falling victim to phishing.

The Security Key (a physical USB device manufactured by Yubico) enables much stronger verification based on cryptographic algorithms. Each service gets its own cryptographic key pair, which also means that users can reliably tell a real Google website from a fake one. Surely, this first deployment based on a USB device has its deficiencies as well; for example, it won’t work on current mobile devices, since they all lack a suitable USB port. However, since the solution is based on a standard, it’s expected to work with any compatible authentication devices or software solutions from other alliance members.

Currently, U2F support is available only in the Google Chrome browser, but since the standard is backed by such a large number of vendors, including major players like Microsoft and Salesforce, I am sure that other browsers will follow soon. Another big advantage of an established standard is the availability of libraries that enable quick inclusion of U2F support into existing client applications and websites. Yubico, for example, provides a set of libraries for different languages, and Google offers open-source reference code for the U2F specification as well.
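
To give a flavor of what such server-side support looks like, here is a deliberately simplified sketch of the checks a relying party performs on the client data of a U2F sign response. Treat it as an illustration under assumptions, not a reference implementation: the field names follow the U2F specification as I understand it, verification of the authenticator’s signature is omitted entirely, and a real deployment should use one of the maintained libraries mentioned above.

    # Simplified sketch of U2F relying-party checks on the browser's clientData.
    # The origin check is what makes phished credentials useless to an attacker.
    import base64
    import json
    import os

    def new_challenge():
        # Random challenge, web-safe base64 without padding
        return base64.urlsafe_b64encode(os.urandom(32)).decode().rstrip("=")

    def check_client_data(client_data_b64, expected_challenge, expected_origin):
        padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
        client_data = json.loads(base64.urlsafe_b64decode(padded))
        # Note: a complete implementation must also verify the token's ECDSA
        # signature over the application parameter, challenge and counter.
        return (client_data.get("typ") == "navigator.id.getAssertion"
                and client_data.get("challenge") == expected_challenge
                and client_data.get("origin") == expected_origin)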

In a sense, this first large-scale U2F deployment by Google is just the first step in a long journey towards the ultimate goal of getting rid of passwords completely. But it looks like a large group sharing the same vision has a much better chance of reaching that goal than anybody planning to walk all the way alone.

GlobalSign acquires Ubisecure, plans to win the IoE market

GlobalSign, one of the world’s biggest certificate authorities and a leading provider of digital identity services, has announced today that it has acquired Ubisecure, a Finnish privately held software development company specializing in Identity and Access Management solutions.

Last year, KuppingerCole recognized Ubisecure as a product leader in our Leadership Compass on Access Management and Federation. Support for a broad range of authentication methods, including national ID cards and banking cards, as well as integrated identity management capabilities with configurable registration workflows, were noted as the product’s strengths. However, it is the solution’s focus on enabling identity services on a large scale, targeted at governments and service providers, that KuppingerCole considers Ubisecure’s primary strength.

Unfortunately, until recently the Helsinki-based company has only been present in EMEA (mainly in the Nordic countries), obviously lacking the resources to maintain a strong partner network. GlobalSign’s large worldwide presence, with 9 international offices and over 5,000 reseller partners, provides a unique opportunity to bring Ubisecure’s technology to the global market quickly and with little effort.

GlobalSign, established in 1996, is one of the oldest and biggest, as well as reportedly the fastest-growing, certificate authorities on the market. After becoming part of the Japanese group of companies GMO Internet Inc. in 2006, GlobalSign has been steadily expanding its enterprise presence with services like enterprise PKI, a cloud-based managed SSL platform, and strategic collaborations with cloud service providers. With the acquisition of Ubisecure, the company is launching its new long-term strategy of becoming a leading provider of end-to-end identity services for smart connected devices, powering the so-called Internet of Everything.

Market analysts currently estimate that up to 50 billion such devices (or simply “things”) will connect to the Internet within the next 10 years. This may well be the largest technology market in history, with over $14 trillion at stake. Needless to say, the new trend brings critical new challenges that have to be addressed, such as device security and malware protection; probably the biggest of them all, however, is providing identity services on a massive scale, mediating trust for billions of online transactions between people and “things” every minute and ensuring the safety of e-commerce, communications, and content delivery.

A company that manages to bring a service with such capabilities to the market first will definitely be in a very attractive position, and GlobalSign, with its strong background in identity-related solutions, massive existing customer base and large partner network, is aspiring to grab that position by making Ubisecure’s innovative technology available globally. Time will tell how well it can compete against the technology giants, as well as against other API vendors with a strong IAM background (Ping Identity and CA / Layer 7 come to mind). Still, recognizing a rare combination of innovative technology and solid market presence, we believe GlobalSign to be a player that is definitely worth watching.

First Heartbleed, now Shellshock?

Half a year has passed since the discovery of the dreaded Heartbleed bug, and the shock of that incident, which many have dubbed the most serious security flaw in years, has finally begun to wear off. Then the security community was shocked again last week, when details of a critical vulnerability in another widely used piece of software were made public after the initial embargo.

Apparently Bash, arguably the most popular Unix shell, used on hundreds of millions of servers, personal computers and network devices, contains a critical bug in the way it processes environment variables, which causes unintentional execution of system commands stored in those variables (you can find plenty of articles explaining the details, ranging from pretty simple to deeply technical). Needless to say, this provides ample opportunity for hackers to run malicious commands on affected machines, whether they are connected to the network or not. What’s worse, the bug has remained unnoticed for over twenty years, which means that huge numbers of legacy systems are affected as well (as opposed to Heartbleed, which was caused by a bug in a recent version of OpenSSL).

Given the huge number of affected devices, many security researchers have already called Shellshock “bigger than Heartbleed”. In my opinion, however, comparing the two problems directly isn’t that simple. The biggest problem with the Heartbleed bug was that it affected even those companies that had been consistently following security best practices, simply because the most important security tool itself was flawed. Ironically, those who failed to patch their systems regularly and were still running an old OpenSSL version were not affected.

The Shellshock bug, however, is different: Bash itself, being simply a command-line tool for system administrators, is usually not directly exposed to the Internet, and the vulnerability can only be exploited through other services. In fact, if your IT staff has been following reasonably basic security guidelines, the impact on your network will already be minimal, and with a few additional steps it can be prevented completely.

The major attack vector for this vulnerability is, naturally, CGI scripts. Although CGI is a long-outdated technology that, quite frankly, has no place on a modern web server, it is still found on a lot of public web servers. For example, the popular Apache web server has a CGI module enabled by default, which means that hackers can use the Shellshock bug as a new means to deploy botnet clients on web servers, steal system passwords and so on. There have already been numerous reports of attacks exploiting the Shellshock bug in the wild. Researchers also report that weaknesses in DHCP clients or SSH servers can potentially be exploited as well; however, this requires special conditions to be met and can easily be prevented by administrators.
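
The CGI vector is easy to reason about: whatever an attacker puts into a request header (User-Agent, Cookie, and so on) ends up in an environment variable of the CGI process, and a vulnerable Bash executes the commands appended to a crafted function definition. The following sketch sends the widely circulated harmless test payload to a CGI URL and looks for the injected marker in the response; the URL is a placeholder, and it should of course only be run against systems you are authorized to test.

    # Probe one of your own CGI endpoints for Shellshock via the User-Agent
    # header. The URL below is a placeholder for a script you actually own.
    import urllib.request

    PAYLOAD = '() { :;}; echo; echo SHELLSHOCK-TEST'

    def probe(url):
        req = urllib.request.Request(url, headers={"User-Agent": PAYLOAD})
        try:
            body = urllib.request.urlopen(req, timeout=10).read().decode("latin-1")
        except Exception:
            return False
        # A vulnerable Bash-backed CGI script executes the injected echo,
        # so the marker appears in the HTTP response body.
        return "SHELLSHOCK-TEST" in body

    print(probe("http://example.com/cgi-bin/status.cgi"))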

So, what are our recommendations for dealing with the Shellshock bug?

For consumers:

First of all, you should check whether your computers or network devices are affected by the bug at all. Computers running various Unix flavors are vulnerable, most importantly many Linux distributions and OS X. Obviously, Windows machines are not affected unless they have Cygwin installed. Most embedded network devices, such as modems and routers, although Linux-based, use a different shell, BusyBox, which doesn’t have the bug. As for mobile devices, stock iOS and Android do not contain the Bash shell, but jailbroken iOS devices and custom Android firmware may have it installed.

A simple test for checking whether your Bash is vulnerable is this command:

    env x='() { :;}; echo vulnerable' bash -c "echo hello"

If you see “vulnerable” after running it, you know you are, and you should immediately look for a security update. Many vendors have already issued patches for their OS distributions (although Apple is still working on an official patch, there are instructions available for fixing the problem DIY-style).

For network administrators:

Obviously, you should install security updates as well, but stopping there would not be a good idea. Although a series of patches for the currently described Bash vulnerabilities has already been issued, researchers warn that Bash was never designed with security in mind and that new vulnerabilities may be discovered in it later. A reasonable, if somewhat drastic, measure would be to replace Bash on your servers with a different shell, since just about every other shell does not interpret function definitions in environment variables and is therefore inherently immune to this exploit.

Another important measure would be to check all network services that can interact with Bash and harden their configurations appropriately. This includes, for example, the ForceCommand feature in OpenSSH.

Last but not least, you should make sure that your network security tools are updated to recognize the new attack. Security vendors are already working on adding new tests to their software.

For web application developers:

Do not use CGI. Period.

If you are stuck with a legacy application that you still have to maintain, you should at least put it behind some kind of “sanitizing proxy” service that filters out requests carrying malicious payloads in headers (which would otherwise end up in environment variables). Many vendors offer specialized solutions for web application security; however, budget solutions using open-source tools like nginx are possible as well.

So, if the Shellshock bug can be fixed so easily, why are security researchers so worried about it? The main reason is the sheer number of legacy devices that will never be patched and will remain exposed to the exploit for years. Another burning question for IT departments is: how long have hackers (or worse, the NSA) been aware of the bug, and for how long could they have been secretly exploiting it? Remember, the upper limit for this guess is 22 years!

And of course, in an even longer perspective, the problem raises a lot of new questions about the latest IT fad: the Internet of Things. Now that we already have smart fridges and smart cars and will soon have smart locks and smart thermostats installed everywhere, how can we make sure that all these devices remain secure in the long term? Vendors predict that in 10 years there will be over 50 billion “things” connected to the global network. Can you imagine patching 50 billion Bash installations? Can you afford not to patch your door lock? Will you be able to install an antivirus on your car? It looks like we need to have a serious talk with IoT vendors. How about next year at our European Identity and Cloud Conference?

Real-time Security Intelligence: history, challenges, trends

Information security is just as old as information technology itself. As soon as organizations began to depend on IT systems to run their business processes and to store and process business information, it became necessary to protect these systems from malicious attacks. The first concepts of tools for detecting and fighting off intrusions into computer networks were developed in the early 1980s, and in the following three decades security analytics has evolved through several different approaches, reflecting the evolution of the IT landscape as well as changing business requirements.

First-generation security tools – firewalls and intrusion detection and prevention systems (IDS/IPS) – were essentially solutions for perimeter protection. Firewalls were traditionally deployed on the edge of a trusted internal network and were meant to prevent attacks from the outside world. The first firewalls were simple packet filters, effective for blocking known types of malicious traffic or protecting against known weaknesses in network services. Later generations of application firewalls can understand certain application-layer protocols and thus provide additional protection for specific applications: mitigating cross-site scripting attacks on websites, protecting databases from SQL injection, performing DLP functions, and so on. Intrusion detection systems can be deployed within networks, but older signature-based systems were only capable of reliably detecting known threats, while later statistical anomaly-based solutions were known to generate an overwhelming number of false alerts. In general, tuning an IDS for a specific network has always been a difficult and time-consuming process.

These traditional tools are still widely deployed by many organizations and in certain scenarios remain a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks due to the adoption of cloud and mobile services, as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards detection of and defense against threats within corporate networks.

The so-called Advanced Persistent Threats usually involve multiple attack vectors and consist of several covert stages. These attacks may go undetected for months and cause significant damage to unsuspecting organizations. Often they are first uncovered by external parties, adding reputational damage to the financial losses. A well-planned APT may exploit several different vulnerabilities within the organization: an unprotected gateway, a bug in an outdated application, a zero-day attack exploiting a previously unknown vulnerability, and even social engineering, targeting the human factor so often neglected by IT security.

By the mid-2000s, it was obvious that efficient detection of and defense against these attacks required a completely new approach to network security. The need to analyze and correlate security incidents from multiple sources, to manage a large number of alerts and to perform forensic analysis led to the development of a new organizational concept: the Security Operations Center (SOC). A SOC is a single location where a team of experts monitors security-related events across the entire enterprise IT environment and takes action against detected threats. Many large enterprises have established their own SOCs; for smaller organizations that cannot afford the considerable investment of maintaining skilled security staff on their own, such services are usually offered as a Managed Security Service.

The underlying technological platform of a security operations center is SIEM: Security Information and Event Management, a set of software and services for gathering, analyzing and presenting information from various sources, such as network devices, applications, logging systems, or external intelligence feeds. The term was coined in 2005, and the concept was quickly adopted by the market: currently there are over 60 vendors offering SIEM solutions in various forms. There was a lot of initial hype around the SIEM concept, as it was marketed as a turnkey solution for all the security problems mentioned above. Reality, however, has shown that although SIEM solutions are very capable toolsets for data aggregation, retention and correlation, as well as for monitoring, alerting and reporting of security incidents, they are still just tools, requiring one team of experts to deploy and customize them and another to run them on a daily basis.
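
To make the "correlation" part of SIEM concrete, here is a toy sketch of the kind of rule such an engine evaluates continuously: flag a successful login that follows a burst of failed logins from the same source within a short window. The event field names and thresholds are my own assumptions for illustration; real SIEM products express such rules in their own query or rule languages.

    # Toy correlation rule: many failed logins followed by a success from the
    # same source IP within a short window. Field names are assumptions.
    from collections import defaultdict, deque
    from datetime import timedelta

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 10  # failed attempts before a subsequent success is suspicious

    def correlate(events):
        """events: iterable of dicts with 'time' (datetime), 'src_ip', 'outcome'."""
        failures = defaultdict(deque)   # src_ip -> timestamps of recent failures
        alerts = []
        for e in sorted(events, key=lambda ev: ev["time"]):
            recent = failures[e["src_ip"]]
            while recent and e["time"] - recent[0] > WINDOW:
                recent.popleft()        # discard failures outside the window
            if e["outcome"] == "failure":
                recent.append(e["time"])
            elif e["outcome"] == "success" and len(recent) >= THRESHOLD:
                alerts.append((e["src_ip"], e["time"], len(recent)))
        return alerts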

Although SIEM solutions are now widely adopted by most large enterprises, there are several major challenges that, according to many information security officers, prevent organizations from using them efficiently:

  • Current SIEM solutions require specially trained security operations experts to operate; many organizations simply do not have enough resources to maintain such teams.
  • Current SIEM solutions generate too many false positive alerts, forcing security teams to deal with overwhelming amounts of unnecessary information. Obviously, current correlation and anomaly detection algorithms are not efficient enough.
  • The degree of integration offered by current SIEM solutions is still insufficient to provide a truly single management console for all kinds of operations. Responding to a security incident may still require performing too many separate actions using different tools.

Another common shortcoming of current SIEM solutions is a lack of flexibility when dealing with unstructured data. Since many of the products are based on relational databases, they enforce rigid schemas on collected information and do not scale well when dealing with large amounts of data. This obviously prevents them from efficiently detecting threats in real time.

Over the last couple of years, these challenges have led to the emergence of the “next-generation SIEM”, or rather a completely new technology called Real-time Security Intelligence (RTSI). Although the market is still at an early stage, it is already possible to summarize the key differentiators of RTSI offerings from previous-generation SIEM tools:

  • Real-time or near-real-time detection of threats, enabling quick remediation before damage is done;
  • The ability to correlate real-time and historical data from various sources, as well as to apply intelligence from external security information services, thus detecting malicious operations as whole events rather than separate alerts;
  • A small number of clearly actionable alarms, achieved by reducing the false positive rate and by introducing different risk levels for incidents;
  • Automated workflows for responding to detected threats, such as disrupting clearly identified malware attacks or submitting a suspicious event to a managed security service for further analysis.

The biggest technological breakthrough that made these solutions possible is Big Data analytics. The industry has finally reached the point where business intelligence algorithms for large-scale data processing, previously affordable only to large corporations, have become commoditized. Utilizing readily available frameworks such as Apache Hadoop and inexpensive hardware, vendors are now able to build solutions for collecting, storing and analyzing huge amounts of unstructured data in real time.

This makes it possible to combine real-time and historical analysis and to identify new incidents as being related to others that occurred in the past. Combined with external security intelligence feeds that provide current information about the newest vulnerabilities, this can greatly facilitate the identification of ongoing APT attacks on the network. Having a large amount of historical data at hand also significantly simplifies the initial calibration to the normal activity patterns of a given network, which are then used to identify anomalies. Existing RTSI solutions are already capable of automated calibration with very little input from administrators.
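
As a simplified illustration of that baseline-and-anomaly idea, the sketch below compares today's per-host event volume against each host's own history and flags large deviations. The z-score threshold and data layout are arbitrary assumptions; production RTSI platforms use far richer models over many more dimensions.

    # Minimal baseline anomaly detection: flag hosts whose event volume today
    # deviates strongly from their own history. Thresholds are assumptions.
    from statistics import mean, stdev

    def anomalies(history, today, z_threshold=3.0):
        """history: host -> list of daily event counts; today: host -> count."""
        flagged = []
        for host, counts in history.items():
            if len(counts) < 2:
                continue                      # not enough data for a baseline
            mu, sigma = mean(counts), stdev(counts)
            if sigma == 0:
                continue                      # constant history, skip
            z = (today.get(host, 0) - mu) / sigma
            if abs(z) >= z_threshold:
                flagged.append((host, round(z, 1)))
        return flagged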

The alerting and reporting capabilities of RTSI solutions are also significantly improved. Big Data analytics technology can generate a small number of concise and clearly categorized alerts that allow even an inexperienced person to make a relevant decision, yet it provides a forensic expert with far more detail about an incident and its relations to other historical anomalies.

As mentioned above, the RTSI market is still at an early stage. There are many new offerings with various scopes of functionality, from established IT security vendors as well as startups, available today or planned for release in the near future. It is still difficult to predict in which direction the market will evolve and which features should be expected from an innovation leader. However, it is already clear that only the vendors that offer complete solutions, and not just sets of tools, will win the market. It is important to understand that Real-time Security Intelligence is more than just SIEM 2.0.

This article was originally published in the KuppingerCole Analysts’ View Newsletter. Also check out video statements of my colleagues Mike Small and Rob Newby on this topic.

Did someone just steal my password?

Large-scale security breaches are nothing new. Last December we heard about the hack of the American retail chain Target’s network, in which over 40 million credit cards and 70 million addresses were stolen. This May, eBay announced that hackers had got away with records of more than 145 million of its customers. And the trend doesn’t stop: despite all the efforts of security researchers and government institutions, data breaches occur more frequently and get bigger and more costly. The average total cost of a data breach for a company is currently estimated at $3.5 million. The public has heard about these breaches so often that it has become a bit desensitized to them. However, the latest announcement from the American company Hold Security should make even the laziest people sit up and take notice.

Apparently, a gang of cybercriminals from Russia, which the company dubbed CyberVor (“cyber thief” in Russian), has managed to amass the largest known collection of stolen credentials: over 1.2 billion passwords and more than 500 million email addresses! The company hasn’t revealed a lot of details, but these were not, of course, the spoils of a single breach – the gang has allegedly compromised over 420 thousand websites over the course of several years. Still, the numbers are overwhelming: the whole collection contains over 4.5 billion records. Surely I could be somewhere in that huge list, too? What can I do to prevent hackers from stealing my precious passwords? Can someone help me with that?

In a sense, we still live in the era of the Internet Wild West. No matter how often passwords are proclaimed dead and how hard security vendors try to sell their alternative, more secure authentication solutions, no matter how long government commissions discuss stricter regulations and larger fines for data breaches – way too many companies around the world still store their customers’ credentials in clear text, and way too many users still use the same password “password” for all their accounts. Maybe in twenty years or so we will remember these good old days of “Internet Freedom” with romantic nostalgia, but for now we have to face the harsh reality of a world where nobody is going to protect our personal information for us.

This, by the way, reminds me of another phenomenon of the Wild West era: snake oil peddlers. Unfortunately, quite a few security companies now attempt to capitalize on the data breach fear in a similar way. Instead of providing customers with the means to protect their credentials, they offer services like “pay to see whether your account has been stolen”. And these services aren’t cheap.

Surely these companies need to earn money just like everyone else, but charging people for such useless information is dubious at best. I’m not even going to mention the fact that there might even be services out there that are essentially good old phishing sites, which would collect your credentials and use them for malicious purposes.

As the famous Russian novel “The Twelve Chairs” states, mocking a common propaganda slogan of the early Soviet period: “Assistance to drowning persons is in the hands of those persons themselves.” I published a blog post some time ago outlining a list of simple rules one should follow to protect oneself from the consequences of a data breach: create long and complex passwords, do not reuse the same password for several sites, invest in a good secure password manager, look for sites that support two-factor authentication, and so on. Of course, this won’t prevent future breaches from happening (apparently, nothing can), but it will help minimize the consequences: in the worst case, only one of your accounts will be compromised, not all of them.
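
The first of those rules is easy to automate rather than leaving to imagination. Here is a minimal sketch that generates a long random password locally using Python’s standard cryptographic randomness; the length and character set are arbitrary choices, not a formal recommendation, and a good password manager will do the same job for you.

    # Generate a long, random password locally; store it in a password manager
    # rather than memorizing or reusing it. Length/alphabet are arbitrary.
    import secrets
    import string

    def random_password(length=24):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())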

Whenever you hear that a website you use has been hacked, you no longer have to wonder whether your credentials have been stolen or not: you simply assume the worst, spend a minute changing your password, and rest assured that the hackers have no use for your old credentials anymore. This way, you not only avoid exposure to the “CyberVors”, but also don’t let the “CyberZhuliks” (cyber fraudsters) make money by selling you their useless services.
