
Blog posts by Alexei Balaganski

De-Mail: Now with End-to-end Encryption?

Mar 10, 2015 by Alexei Balaganski

In case you don't know (and unless you live in Germany, you most probably don't), De-Mail is an electronic communications service maintained by several German providers in accordance with the German e-government initiative and the De-Mail law, which declares it a secure form of communication. The purpose of the service is to complement traditional postal mail for the exchange of legal documents between citizens, businesses and government organizations.

Ever since its original introduction in 2012, De-Mail has been struggling to gain acceptance among the German public. According to the latest report, only around one million private citizens have registered for the service, which is far below the original plans and nowhere near enough to reach the "critical mass". That is actually quite understandable, since for a private person the service offers little in comparison with postal mail (in fact, it even makes certain things, such as legally declining to receive a letter, no longer possible). Major points of criticism include incompatibility with regular e-mail and other legal electronic communications services, privacy concerns regarding the personal information collected during the identification process, and an insufficient level of security.

Now the German government is attempting once more to address the latter problem by introducing end-to-end encryption. The plan is to rely on the OpenPGP standard, which all cooperating providers (Deutsche Telekom, Mentana-Claimsoft and United Internet, known for its consumer brands GMX and Web.de) will introduce in May. According to Thomas de Maizière, Germany's Federal Minister of the Interior, adding PGP support will provide an easy and user-friendly way of increasing the security of the De-Mail service. The reaction from security experts and the public, however, hasn't been particularly enthusiastic.

Apparently, to enable this new functionality, users will have to install a browser plugin. The solution is based on an open-source JavaScript OpenPGP implementation and is currently available for the Chrome and Firefox browsers only. According to publicly available statistics, this leaves over 60% of all German internet users out of luck, since their browsers are not supported. An even bigger problem is the lack of support for mobile apps or desktop mail clients.

Unfortunately, no integration of the plugin with the De-Mail user directory is offered, which means that users are supposed to tackle the biggest challenge of any end-to-end encryption solution – secure and convenient key exchange – completely on their own. In this regard, De-Mail looks no better than any other conventional email service, since PGP encryption is already supported by many mail applications in a completely provider-agnostic manner.
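To put "completely on their own" into perspective, here is a minimal sketch of provider-agnostic OpenPGP encryption using the python-gnupg wrapper around a locally installed GnuPG; the file name and fingerprint value are hypothetical placeholders. The manual fingerprint check in the middle is exactly the step the announced plugin doesn't automate:

```python
import gnupg

gpg = gnupg.GPG()

# Import the correspondent's public key, obtained somehow out of band.
result = gpg.import_keys(open("recipient_pubkey.asc").read())

# The step De-Mail leaves entirely to the user: confirming that the key's
# fingerprint matches one communicated over a separate, trusted channel.
# (The value below is a hypothetical placeholder.)
expected = "0123456789ABCDEF0123456789ABCDEF01234567"
if result.fingerprints[0] != expected:
    raise ValueError("fingerprint mismatch: possible man-in-the-middle")

# Only after that check does encryption actually guarantee confidentiality;
# always_trust skips GnuPG's web-of-trust model since we verified manually.
encrypted = gpg.encrypt("A legally binding document",
                        recipients=[expected], always_trust=True)
print(str(encrypted))
```

Without the fingerprint verification in the middle, the whole exercise degrades to encrypting for whoever managed to hand you a key, which is precisely the gap the new plugin leaves open.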

Another issue is the supposed ease of use of the new encryption solution. In fact, De-Mail has already been offering encryption based on S/MIME, which couldn't get enough traction because "it was too complicated". However, considering the effort necessary for a secure PGP key exchange, PGP can hardly be considered the easier alternative.

Finally, there is a fundamental question with many possible legal consequences: how does one combine end-to-end encryption with the requirement for a third party (the state) to be able to verify the legitimacy of the communication? In fact, the very same de Maizière is known for opposing encryption and advocating the necessity for intelligence agencies to monitor all communications.

In any case, De-Mail is here to stay, at least as long as it is actively supported by the government. However, I have serious doubts that attempts like this will have any noticeable impact on its popularity. Legal issues aside, the only proper way of implementing end-to-end communications security is not to slap another layer on top of the aging e-mail infrastructure, but to implement new protocols designed with security in mind from the very beginning. And the most reasonable way to do that is not to reinvent the wheel on your own, but to look at existing developments such as the Dark Mail Technical Alliance. What the industry needs is a cooperatively developed standard for encrypted communications, similar to what the FIDO Alliance has managed to achieve for strong authentication.

Reconciling conflicting views on encryption within the government would also help a lot. Pushing for NSA-like mass surveillance of all internet communications and advocating the use of backdoors and exploits by the same people that now promise increased security and privacy of government services isn’t going to convince either security experts or the general public.



Internet of Opportunities

Mar 03, 2015 by Alexei Balaganski

For a topic so ubiquitous, so potentially disruptive and so overhyped in the media in the last couple of years, the concept of the Internet of Things (IoT) is surprisingly difficult to describe. Although the term itself appeared in the media nearly a decade ago, there is still no universally agreed definition of what IoT actually is. This, by the way, is a trait it shares with its older cousin, the Cloud.

At the most basic level, however, it should be possible to define IoT as a network of physical objects ("things") capable of interacting and exchanging information with each other as well as with their owners, operators or other people. The specifics of these communications vary between definitions, but it's commonly agreed that any embedded smart devices that communicate over the existing Internet infrastructure can be considered "things". This includes both consumer products, such as smart medical devices, home automation systems, or wearables, and enterprise devices ranging from simple RFID tags to complex industrial process monitoring systems. However, general-purpose computers, mobile phones and tablets are traditionally excluded, although they, of course, are used to monitor or control other "things".

Looking at this definition, one may ask: what exactly is new and revolutionary about IoT? After all, industrial control systems have existed for decades, healthcare institutions have been using smart implanted devices like pacemakers and insulin pumps for years, and even smart household appliances are nothing new. This is true: the individual technologies that make IoT possible have existed for several decades, and even the concept of "ubiquitous internet" dates back to 1999. However, it's the relatively recent combination of technology, business and media influences that has finally made IoT one of the hottest conversation topics.

First, continuously decreasing technology costs and growing Internet penetration have made connected devices very popular. Adding an embedded networking module to any device is cheap, yet it can potentially unlock completely new ways of interacting with other devices, creating new business value for manufacturers. Second, the massive proliferation of mobile devices encourages people to look for new ways of using them to monitor and control various aspects of their life and work. As for enterprises, the proverbial Computing Troika is forcing them to evolve beyond their perimeter, to become more agile and connected, and IT is responding by creating new technologies and standards (such as big data analytics, identity federation or even cloud computing) to support these new interactions.

It is its scale and interoperability that fundamentally differentiate the Internet of Things from existing isolated networks of various embedded devices. And this scale is truly massive. Extrapolating the new fashion of making each and every device connected, it is estimated that by 2020, the number of "things" in the world will surpass 200 billion and the IoT market will be worth nearly $9 trillion. Although the industry is facing a lot of potential obstacles on its way to that market, including a lack of standards, massive security and privacy implications, and the need to develop a mature application ecosystem, the business opportunities are simply too lucrative to pass up.

Practically every industry is potentially impacted by the IoT revolution, including automotive, healthcare, manufacturing, energy and utilities, transportation, financial services, retail and others. Numerous use cases demonstrate that adopting IoT as a part of business processes can generate immediate business value by improving process optimization, providing better intelligence and more efficient planning, enabling real-time reaction to various needs and opportunities, and improving customer service.

In addition to various improvements of business processes, IoT enables a huge number of completely new consumer services, from life-changing to trivial but "nice to have" ones. One doesn't need to explain how a doctor's ability to monitor a patient's vital signs can reduce mortality and improve quality of life, or how a connected vehicle improves road safety. IoT benefits don't end there, and it's up to manufacturers to introduce completely new kinds of smart devices and persuade consumers that these devices will make their lives fundamentally better (this has already worked well for wearable devices, for example).

Of course, the IoT market doesn't just include manufacturers of "things" themselves. Supporting and orchestrating such a huge global infrastructure introduces quite a lot of technological challenges. Obviously, manufacturers of networking hardware will play a major role, and it's no wonder that companies like Intel or Cisco are among the major IoT proponents. However, being able to address other challenges, like providing global-scale identity services for billions of transactions per minute, can open up huge business opportunities, and vendors are already moving in to grab an attractive position in this market. Another example of a technology that's expected to get a substantial boost from IoT is Big Data analytics, because IoT is all about collecting large amounts of information from sensors, which then needs to be organized and used to make decisions.

Interestingly enough, most current large-scale IoT deployments seem to be driven not by enterprises, but by government-backed projects. The concept of the "smart city", where networks of sensors continuously monitor environmental conditions, manage public transportation and so on, has attracted interest in many countries around the world. Such systems naturally integrate with existing eGovernment solutions; they also enable new business opportunities for various merchants and service companies that can plug directly into the global city network.

In any case, whether you represent a hardware vendor or a manufacturing, service or IT company, there is one thing you cannot afford to do about the Internet of Things: ignore it. The revolution is coming, and although we still have to solve many challenges and address many new risks, the future is full of opportunities.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



The Great SIM Heist and Other News from NSA

Feb 20, 2015 by Alexei Balaganski

Almost two years after Edward Snowden made off with a cache of secret NSA documents, the gradual ongoing publication of these materials, complemented by independent research from information security experts, has provided a unique insight into the extent of the global surveillance programs run by the US intelligence agencies and their partners from various European countries. Carefully timed, these publications have made for exciting and, at the same time, deeply disturbing reading for both IT experts and the general public.

Recently, it looked as if the trickle of news regarding our friends from the NSA had almost dried up, but apparently this was just the calm before the storm. First, just a few days ago Kaspersky Lab published their extensive report on the "Equation Group", a seemingly omnipotent international group of hackers active for over a decade and known to utilize extremely sophisticated hacking tools, including the ability to infect hard drive firmware. Technical details of these tools reveal many similarities with Stuxnet and Flame, both now known to have been developed in collaboration with the NSA. It was later confirmed by a former NSA employee that the agency indeed possesses and widely utilizes this technology for collecting intelligence.

And even before the IT security community was able to catch its collective breath, The Intercept, the publication run by Edward Snowden's closest collaborators, unveiled an even bigger surprise. Apparently, back in 2010, American and British intelligence agencies carried out a massive breach of mobile phone encryption in a joint operation targeting telecommunication companies and SIM card manufacturers.

If we are to believe the report, they managed to penetrate the network of Gemalto, the world's largest SIM card manufacturer, which ships over 2 billion cards yearly. Apparently, they not only resorted to hacking, but also ran a global surveillance operation on Gemalto employees and partners. In the end, they managed to obtain copies of the secret keys embedded into SIM cards, which enable mobile phone identification in providers' networks as well as encryption of phone calls. With these keys, the NSA and GCHQ are, in theory, able to easily intercept and decrypt any call made from a mobile phone, as well as impersonate any mobile device with a copy of its SIM card. As opposed to previously known surveillance methods (like setting up a fake cell tower), this method is completely passive and undetectable. By exploiting deficiencies of the GSM encryption protocols, they are also able to decrypt any previously recorded call, even from years ago.
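To see why possession of the keys makes interception purely passive, consider this toy illustration. It is deliberately not real GSM cryptography: HMAC-SHA256 stands in for the operator's A3/A8 key derivation and a simple XOR keystream stands in for the A5 cipher, but the information flow is the same:

```python
import hashlib
import hmac

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    # Real SIMs run A3/A8 (often COMP128) over Ki and the network's RAND
    # challenge; crucially, RAND travels over the air in the clear.
    return hmac.new(ki, rand, hashlib.sha256).digest()[:8]   # 64-bit Kc

def stream_cipher(kc: bytes, data: bytes) -> bytes:
    # Stand-in for the A5 stream cipher: XOR with a repeated keystream.
    keystream = (kc * (len(data) // len(kc) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

# The network and the phone independently derive the session key Kc:
ki, rand = b"\x2a" * 16, b"\x07" * 16
ciphertext = stream_cipher(derive_session_key(ki, rand), b"hello, world")

# An eavesdropper holding a stolen copy of Ki needs only the RAND it
# observed on the air: no active attack, no fake cell tower required.
print(stream_cipher(derive_session_key(ki, rand), ciphertext))
```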

Since Gemalto doesn't just produce SIM cards but also various other kinds of security chips, there is a substantial chance that these could have been compromised as well. Both Gemalto and its competitors, as well as other companies working in the industry, are now fervently conducting internal investigations to determine the extent of the breach. It's worth noting that, according to Gemalto's officials, they hadn't noticed any indication of the breach back then.

A side note: this is yet more proof that even security professionals need better security tools to stay ahead of the intruders.

Now, what lesson should security experts, as well as ordinary people, learn from this? First and foremost, everyone should understand that in the ongoing fight against information security threats, everyone is basically on their own. Western governments, which supposedly should be protecting their citizens against international crime, have been revealed to be conducting the same activities on a larger and more sophisticated scale (after all, intelligence agencies possess much bigger budgets and legal protection). Until now, all attempts to limit the intelligence agencies' powers have been largely unsuccessful. The governments even go as far as to lie outright about the extent of their surveillance operations to protect them.

Another, more practical consideration is that the only solutions we can still more or less count on are complete end-to-end encryption systems in which the whole information chain, including the secure management of encryption keys, is controlled by the users themselves. Until practical quantum computers become available, breaking a reasonably strong encryption key is still much more difficult than stealing it. For any other communication channel, you should significantly reconsider your risk policies.



Windows 10 will support FIDO standards for strong authentication

Feb 19, 2015 by Alexei Balaganski

At KuppingerCole, we have been following the progress of the FIDO Alliance for quite some time. Since their specifications for scalable and interoperable strong authentication were published last year, FIDO has already had several successful deployments in collaboration with such industry giants as Samsung, Google and Alibaba. However, probably their biggest breakthrough yet was announced just a few days ago by none other than Microsoft. According to the announcement, Microsoft's upcoming Windows 10 will include support for FIDO standards to enable strong and password-free authentication for a number of consumer and enterprise applications.

We knew, of course, that Microsoft has been working on a new approach to identity protection and access control in their next operating system. Moving away from passwords towards stronger and more secure forms of authentication has been declared one of their top priorities for Windows 10. Of course, solutions like smartcards and OTP tokens have existed for decades; however, in the modern heterogeneous and interconnected world, relying on traditional enterprise PKI infrastructures or limiting ourselves to a single-vendor solution is obviously impractical. Therefore, a new kind of identity is needed, one that works equally well for traditional enterprises and in consumer and web scenarios.

Now, unless you've been entirely avoiding all news from Microsoft in recent years, you have probably already guessed their next move. Embracing an open standard to allow third-party manufacturers to develop compatible biometric devices, and providing a common framework for hardware and software developers to build additional security into their products instead of building another "walled garden", isn't just a good business decision; it's the only sensible strategy.

Microsoft joined the FIDO Alliance as a board member back in December 2013. Since then, they have been actively contributing to the development of the FIDO specifications. Apparently, a significant part of their designs will be included in the FIDO 2.0 specification, which will then be incorporated into the Windows 10 release. Unfortunately, it's a bit too early to talk about specific details of that contribution, since the FIDO 2.0 specifications are not yet public.

However, it is already possible to get a peek at some of the new functionality in action. The current Windows 10 Technical Preview already provides several integration scenarios for Windows Sign-in, Azure Active Directory and a handful of major SaaS services, including Microsoft's own Office 365 and partners like Salesforce, Citrix and Box. Using Azure Active Directory, it's already possible to achieve end-to-end strong two-factor authentication completely without passwords. The Windows 10 release will add support for on-premise Active Directory integration as well as integration with consumer cloud services.

And, of course, since this authentication framework will be built upon an open standard, third-party developers will be able to quickly integrate it with their products and services, security device manufacturers will be able to bring a wide array of various (and interoperable) strong authentication solutions to the market, and enterprise users will finally be able to forget the words "vendor lock-in". If this isn't a win-win situation, I don't know what is.



Privacy Issues in Mobile Security

Feb 03, 2015 by Alexei Balaganski

The International Telecommunication Union estimates that the total number of mobile devices in the world has already exceeded the number of people. Mobile devices are becoming increasingly advanced as well. In fact, modern smartphones are as powerful as desktop computers, but "know" much more about their owners: their current and past locations, the contents of their private text messages, photos and other sensitive information, as well as their online banking credentials and other financial data. They are also always connected to the Internet and thus especially vulnerable to hacking and malware exploits.

The growing adoption of cloud services has brought its own share of privacy concerns: more and more sensitive data is now managed by third parties, so users are losing visibility and control over their information. However, it is social computing that has made the most profound impact on our society. Ultimately, it has led to a significant erosion of the public expectation of privacy and made it nearly impossible to undo the accidental sharing of private information. Some people have gone as far as to claim that privacy is no longer relevant. This, of course, could not be further from reality: various studies clearly indicate that users value their privacy and strongly object to their personal data being shared with third parties without consent. However, many users still do not have a clear understanding of the extent to which mobile devices can affect their privacy.

With mobile technologies becoming more sophisticated, general public awareness of the associated risks simply cannot keep up with them. Every day, mobile users can easily fall victim to yet another new method of tracking, stalking or privacy abuse. Stolen personal information has become a valuable product on the black market. It includes not just financial or medical information, but any kind of PII that can be used as a key to your other assets. And it's not just hackers that are after this kind of loot: telecommunications providers, search engines and social network operators are collecting as much information about their users as possible, to use it for targeted advertising or simply to resell it to third parties. And, after Snowden, do we even need to mention government agencies?

For enterprise IT departments, the growing adoption of mobile devices has brought its own share of headaches. One of the biggest current challenges for the IT industry is undoubtedly the Bring Your Own Device (BYOD) problem. While the technological challenges are massive, a proper BYOD strategy must address privacy issues as well. Many organizations may easily overlook them, because issues like liability for leaked or lost private data on company-managed devices still vary from country to country; they are often considered to be in a grey area of current laws and regulations. These regulations are changing, however, and to stay on the safe side companies should always carefully study and address the legal aspects of their mobile device policies: a mistake can cost you a fortune. KuppingerCole provides this kind of expertise as well.

However, regulations alone cannot address the fundamental cause of so many privacy-related problems on current mobile platforms. As mentioned earlier, modern smartphones and tablets have the same computing power as desktop computers. Yet both consumers and device manufacturers still fail to realize that mobile devices need at least the same level of protection against malware and hackers as traditional computers.

Modern mobile platforms are based on Unix-like operating systems incorporating various low-level security features like hardware isolation or code signing. Yet they are still far behind desktop or server systems when it comes to more sophisticated security tools like firewalls or application control. Even worse, no modern mobile platform includes any built-in vendor-neutral security APIs that would allow third-party developers to create such tools. Although there are several solutions available on the market now (like Samsung KNOX), they are all limited to a small number of supported devices and have their own security issues.

Modern mobile platforms are also much more closed than desktop operating systems, and this is a source of privacy concerns as well. Consider a typical situation for iOS: we learn about data leaks or other violations in a standard app, and it takes months for Apple to even acknowledge the problem, let alone release a patch for it. The open nature of Android's ecosystem, on the other hand, leads to platform fragmentation, and vendors often simply stop supporting old devices completely. Despite their differences, the result is still the same: because of fundamental deficiencies in their platforms, both vendors fail to provide adequate means of protecting users' privacy.

Thus, it is clear that long-term solutions to these problems require a major paradigm shift. Privacy cannot be protected by government regulations or "bolt-on" security products – it has to become an integral part of any mobile platform and application. Unfortunately, this stands in stark contrast to the goals of many hardware and software vendors, with only a few already realizing the business value behind "privacy by design". To break the current trend of hoarding as much personal information as possible, consumers, enterprises and government regulators have to join their efforts and bring everyone to a clear realization that the long-term losses from violating customers' trust will always be greater than the short-term gains.

For more information and concrete recommendations to enterprises, mobile device manufacturers and application developers please refer to KuppingerCole’s Advisory Note “Dealing with privacy risks in mobile environments”.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Amazon WorkMail – a new player on the Enterprise Email and Calendaring market

Jan 29, 2015 by Alexei Balaganski

Amazon Web Services has again made headlines today by announcing Amazon WorkMail, their managed email and calendaring service targeted at corporate customers. This is obviously a direct challenge to their biggest competitors, namely Google and Microsoft, and the biggest differentiators Amazon is focusing on are ease of use and security.

Amazon WorkMail is described as a completely managed replacement for an organization’s own legacy email infrastructure. Since the service is compatible with Microsoft Exchange and is capable of integrating with an existing on-premise Active Directory, the process of migration should be quick and seamless. Since AWS will take over most administrative processes, such as patching or backups, this can dramatically decrease administration efforts and costs.

Although WorkMail has its own web interface, AWS is more focused on supporting existing mail and calendaring tools. Any ActiveSync-capable program, including Microsoft Outlook for Windows and OS X as well as the native iOS and Android email clients, is supported without installing any plug-ins. Migration from an on-premise Exchange server can be completely transparent and does not require any changes on end-user devices. A migration wizard is provided as part of the package.

With the new service, AWS is also placing a big emphasis on security. Since email has long been an integral part of our daily business processes, a lot of sensitive corporate information passes through it and ends up stored on the mail server. By integrating with the AWS Key Management Service, WorkMail will automatically encrypt all email data at rest while giving customers complete control over the encryption keys. It is also possible to restrict where this information is stored to a specific geographical region to ensure compliance with local privacy regulations.
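WorkMail's internal design isn't public, but the pattern KMS enables here is classic envelope encryption: a fresh data key is generated under a customer-controlled master key, used locally, and only its wrapped form is stored alongside the data. A rough sketch with boto3 (the key alias is a hypothetical placeholder):

```python
import base64

import boto3
from cryptography.fernet import Fernet

# Keep the data (and its keys) in a specific region for compliance.
kms = boto3.client("kms", region_name="eu-central-1")

# Ask KMS for a fresh 256-bit data key under a customer-managed master key.
resp = kms.generate_data_key(KeyId="alias/mail-at-rest",  # hypothetical alias
                             KeySpec="AES_256")

# Encrypt the message body locally with the plaintext data key...
fernet = Fernet(base64.urlsafe_b64encode(resp["Plaintext"]))
stored_message = fernet.encrypt(b"Quarterly figures attached.")

# ...then discard the plaintext key and persist only the wrapped copy;
# reading the mail later requires a kms.decrypt() call the customer controls.
stored_key = resp["CiphertextBlob"]
```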

Last year, AWS announced their Zocalo service for secure storage and sharing of enterprise data, a direct competitor to other cloud storage services like Dropbox or Google Drive. Needless to say, WorkMail is tightly integrated with Zocalo, allowing the secure exchange of documents instead of sending them as unprotected attachments. In fact, AWS offers a bundle of WorkMail with Zocalo for an attractive price.

There is one potential misunderstanding, however, which I feel obligated to mention. Even with all security features integrated into WorkMail, it still cannot be considered a true end-to-end encryption solution and is thus potentially vulnerable to various security problems. This is another example of a tradeoff between security and convenience, and Amazon simply had to make it to ensure compatibility with existing email programs and protocols.

Still, with an impressive integrated offering and traditionally aggressive pricing model, Amazon WorkMail is definitely another step in AWS’s steady push towards global market leadership.



FIDO Alliance announces final FIDO 1.0 specifications

Dec 10, 2014 by Alexei Balaganski

Yesterday, culminating over 20 months of hard work, the FIDO Alliance published the final 1.0 specifications of its Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F), apparently setting a world record in the process for the fastest development of a standard in the identity management industry.

I wrote a post about the FIDO Alliance in October, when the first public announcement of the specifications was made. Since that time, I've had an opportunity to test several FIDO-compatible solutions myself, including the Security Key and YubiKey NEO-n from Yubico, as well as the FIDO Ready fingerprint sensor in my Galaxy S5 phone, which now lets me access my PayPal account securely. I've studied the documentation and reference code for building U2F support into web applications and cannot wait to try it myself, seeing how easy it looks. Probably the only thing that's stopping me right now is that my favorite browser hasn't implemented U2F yet.
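For a taste of how little server-side work U2F demands, here is a rough sketch of the signature check a relying party performs on an authentication response, following the published U2F 1.0 raw message format. The surrounding plumbing (challenge generation, key storage, counter tracking) is omitted, and the `cryptography` package is assumed:

```python
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_u2f_assertion(public_key: ec.EllipticCurvePublicKey,
                         app_id: str, client_data: bytes,
                         user_presence: bytes, counter: bytes,
                         signature: bytes) -> None:
    # The token signs the hashed application identity, a user-presence flag,
    # a monotonically increasing counter and the hashed client data (which
    # embeds the server's challenge and origin), so phished or replayed
    # responses fail verification.
    signed_data = (hashlib.sha256(app_id.encode()).digest()
                   + user_presence                          # 1 byte
                   + counter                                # 4 bytes, big-endian
                   + hashlib.sha256(client_data).digest())
    # Raises InvalidSignature unless the response was produced by the token
    # registered for this user (keys are ECDSA over the P-256 curve).
    public_key.verify(signature, signed_data, ec.ECDSA(hashes.SHA256()))
```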

Well, I hope that this will change soon, because that's what publishing finalized specifications is all about: starting today, FIDO Alliance members are free to officially market their FIDO Ready strong authentication solutions, and non-members are encouraged to deploy them with peace of mind, knowing that their implementations will interoperate with current and future products based on these standards. Press coverage of the event seems to be quite extensive, with many non-technical publications picking up the news. I believe that to be another indication of the importance of strong and simple authentication for everyone. Even those who do not understand the technical details are surely picking up the general message of "making the world free of passwords and PINs".

Those interested in the technical details will probably want to review the changes in the final version since the last published draft; I'm sure these can be found on the FIDO Alliance's website or in one of their webinars. What is more important, however, is that products released earlier remain compatible with the final specification, and that we should expect many new product announcements from FIDO members really soon. We should probably also expect more companies to join the alliance, now that the initiative is gaining more traction. Mozilla Foundation, that means you as well!

In the meantime, my congratulations to the FIDO Alliance on another important milestone on their journey towards a future without passwords.



Quis custodiet ipsos custodes?

Dec 08, 2014 by Alexei Balaganski

Or, if your Latin is a bit rusty, "who will guard the guards themselves?" This was actually my first thought when I read an article published by Heise Online. Apparently, popular security software from Kaspersky Lab, including at least their Internet Security and Antivirus products, is still susceptible to the now well-known POODLE exploit, which allows hackers to perform a man-in-the-middle attack on an SSL 3.0 connection by downgrading the level of encryption and effectively breaking its cryptographic security.

When this vulnerability was published in September, many security researchers called for the immediate demise of SSL 3.0, which is a very outdated and in many respects weak protocol; however, quite a lot of older software still doesn't support TLS, its modern replacement. In the end, many web services as well as all major browser vendors implemented some sort of protection against the exploit, either by disabling SSL 3.0 completely or by preventing downgrade attacks using TLS_FALLBACK_SCSV. For a couple of months, we felt safe again.
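On the client side, the mitigation is simple enough to show in a few lines; a minimal sketch using Python's standard ssl module:

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv3   # refuse SSL 3.0 outright, POODLE's prerequisite

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.cipher())      # the negotiated (TLS-only) cipher suite
```

As the next paragraph shows, however, hardening your own endpoint helps little if a local proxy quietly re-establishes the connection with weaker settings.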

Well, it turns out that getting rid of POODLE isn't as easy as we thought: it's not enough to harden both ends of the communication channel, you have to think about the legitimate "men in the middle" as well, which can still be unpatched and vulnerable. This is exactly what happened to Kaspersky's security products: as soon as the option "Scan encrypted connections" is enabled, they will intercept an outgoing secure connection, decrypt and analyze its content, and then establish a new secure connection to the appropriate website. Unfortunately, this new connection still uses SSL 3.0, ready to be exploited.

Think of it: even if you have the latest browser that explicitly disables SSL 3.0, your antivirus software would secretly make your security worse without letting you know (your browser connects to the local proxy using the newer TLS protocol, which looks perfectly safe). Just as I wrote regarding the Heartbleed bug in April: "there is a fundamental difference between being hacked because of ignoring security best practices and being hacked because our security tools are flawed". The latter not only adds insult to injury, it can severely undermine users' trust in security software, which in the end is bad for everyone, even the particular vendor's competitors.

The problem seems to have been originally discovered by a user who posted his findings on Kaspersky's support forum. I must admit I find the support engineer's reply very misleading: the SSL vulnerability is by no means irrelevant, and one can imagine multiple scenarios where it could lead to sensitive data leaks.

Well, at least, according to Heise, the company is already working on a patch, which will be released sometime in January. Until then, you should think twice before enabling this option: who is going to protect your antivirus, after all?



Regin Malware: Stuxnet’s Spiritual Heir?

Nov 26, 2014 by Alexei Balaganski

As if the IT security community hadn't had enough bad news recently, this week began with a big one: according to a report from Symantec, a new, highly sophisticated piece of malware has been discovered, which the company dubbed "Regin". Apparently, the level of complexity and customizability of the malware rivals, if not trumps, its famous relatives, such as Flamer, Duqu and Stuxnet. Obviously, the investigation is still ongoing, and Symantec, together with other researchers like Kaspersky Lab and F-Secure, is still analyzing its findings, but even those scarce details allow us to draw a few far-reaching conclusions.

Let’s begin with a short summary of currently known facts (although I do recommend reading the full reports from Symantec and Kaspersky Lab linked above, they are really fascinating if a bit too long):

  1. Regin isn't really new. Researchers have been studying its samples since 2012, and the initial version seems to have been in use since at least 2008; several components have timestamps from 2003. This makes you appreciate even more how it managed to stay under the radar for so long. And did it really? According to F-Secure, at least one company affected by this malware two years ago explicitly decided to keep quiet about it. What fertile ground for conspiracy theorists!
  2. Regin's level of complexity trumps practically any other known piece of malware. Five stages of deployment; built-in drivers for encryption, compression, networking and virtual file systems; various stealth techniques; multiple deployment vectors; and, most importantly, a large number of payload modules – everything indicates the level of technical competence and financial investment of a state-sponsored project.
  3. Nearly half of the affected targets have been private individuals and small businesses, and the primary vertical the malware appears to be targeting is the telecommunications industry. According to Kaspersky Lab's report, code for spying on GSM networks has been discovered in it. Geographically, the primary targets appear to be Russia and Saudi Arabia, as well as Mexico, Ireland and several other European and Middle Eastern countries.
So, is Regin really the new Stuxnet? Well, no. Surely, its incredible level of sophistication and flexibility indicates that it is almost certainly the result of state-sponsored development. However, Regin's mode of operation is completely opposite to that of its predecessor. Stuxnet was a highly targeted attack on Iranian nuclear enrichment facilities with the ultimate goal of sabotaging their work. Regin, on the other hand, is an intelligence-gathering spyware tool, and it doesn't seem to be targeted at a specific company or government organization. On the contrary, it's a universal and highly flexible tool designed for long-term covert operations.

Symantec has carefully avoided naming a concrete nation-state or agency that may have been behind this development, but the fact that no infections have been observed in the US or UK is already giving people ideas. And, looking at the Regin discovery as a part of a bigger picture, this makes me feel uneasy.

After Snowden's revelations, there was a lot of hope that public outcry and pressure on governments would somehow lead to major changes limiting intelligence agencies' powers for cyber spying. Unfortunately, nothing of the kind has happened yet. In fact, looking at the FUD campaign the FBI and DoJ are currently waging against mobile vendors ("because of your encryption, children will die!"), or the fact that the same German BND intelligence service that's promoting mandatory encryption is quietly seeking to install backdoors into email providers and spending millions on zero-day exploits, there isn't much hope for change left. Apparently, they are oblivious to the fact that they are not just undermining trust in the organizations that supposedly exist to protect us from foreign attackers, but also opening new attack surfaces for them by setting up backdoors and financing the development of new exploits. Do they honestly believe that such a backdoor or exploit won't be discovered and abused by hackers? This could probably be a topic for a separate blog post…

Isn’t it ironic that among all the talks about Chinese and Russian hackers, the biggest threat to our cybersecurity might come from the West?



Getting a Grip on Operational Technology

Nov 04, 2014 by Alexei Balaganski

Let’s begin with a couple of fundamental definitions:

Information Technology (IT) can be defined as a set of infrastructures, devices and software for processing information. A traditional IT system is in charge of storing, transmitting and transforming data, but it does not interface directly with the physical world.

Operational Technology (OT) is a set of hardware devices, sensors and software that support management and monitoring of physical equipment and processes within an enterprise, such as manufacturing plants or power distribution grids. OT deals with such components as various sensors, meters and valves, as well as industrial control systems (ICS) that supervise and monitor them.

The terms ICS and SCADA, by the way, are nowadays often used interchangeably; however, this isn’t strictly true, since Supervisory Control and Data Acquisition (SCADA) is just a subset of industrial control systems, other types being embedded systems, distributed control systems, etc. Traditionally, the term SCADA has been used for large-scale distributed control systems, such as a power grid or a gas pipeline.

Historically, IT and OT have evolved quite independently, driven by completely different business demands, requirements and regulations. In a sense, Operational Technology predates the era of computers – the first manufacturing control systems weren't even electronic! Early ICS were monolithic, physically isolated systems without network connectivity. Later generations were usually based on proprietary communication protocols and device-specific real-time operating systems. Driven above all by the demand for process continuity, they were usually designed without security in mind.

Current ICS, however, have gradually evolved towards large-scale systems based on open standards and protocols, such as IP, as well as using standard PCs running Windows as control workstations. They are becoming increasingly interconnected with office networks and the Internet. Yet modern industrial networks are often still plagued with the same blatant disregard for security. The underlying reason for that has little to do with technology; on the contrary, it's a consequence of a deep cultural divide between OT and IT. Operations departments usually consist of industry specialists with an engineering background, while IT departments are staffed by people without knowledge of manufacturing processes. OT is usually managed by a business unit, with requirements, strategies and responsibilities different from those of IT. Instead of collaborating, the two are often forced to compete for budgets and fight over issues that the other party simply sees as insignificant.

The times are changing, however. As we approach the new "connected" age, the technological divide between industrial and enterprise networks is disappearing. Smart devices or "things" are everywhere now, and embedded intelligence finds widespread use in industrial networks as well. A modern agile business constantly demands new ways of communicating with partners, customers and other external entities. All this creates exciting new opportunities. And new risks.

Opening OT to the world means that industrial networks are exposed to the same old security problems, like malware attacks and a lack of strong authentication. However, the challenges for information security professionals go far beyond that; there are challenges that traditional IT security isn't yet capable of addressing. These include technical issues like securing proprietary programmable logic controllers (PLCs), business requirements like ensuring manufacturing process continuity, and completely new challenges like enabling massive-scale identity services for the Internet of Everything.
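To make the first point concrete: Modbus/TCP, one of the most widely deployed industrial protocols, has no concept of authentication at all. Anyone with network access to a controller can read or write its registers, as this brief sketch using the pymodbus library illustrates (the device address is a hypothetical placeholder):

```python
from pymodbus.client.sync import ModbusTcpClient

# Hypothetical PLC address; Modbus/TCP listens on port 502 by default.
client = ModbusTcpClient("192.0.2.10", port=502)
client.connect()

# No credentials, no session, no encryption: the protocol simply assumes
# a trusted network. Read ten holding registers from slave unit 1.
result = client.read_holding_registers(address=0, count=10, unit=1)
print(result.registers)

# Writes are just as unauthenticated, which is the root of the problem.
client.close()
```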

The convergence of IT and OT is therefore inevitable, even though the challenges organizations are going to face on the way look daunting. And it is the responsibility of IT specialists to lead and steer this process.

“If not us, then who? If not now, then when?”

This article originally appeared in the KuppingerCole Analysts' View newsletter.



