
KuppingerCole Blog

Beyond Datacenter Micro-Segmentation – start thinking about Business Process Micro-Segmentation!

Feb 04, 2016 by Martin Kuppinger

Sometime last autumn I started researching the field of Micro-Segmentation, particularly as a consequence of attending a Unisys analyst event and, subsequently, VMworld Europe. Unisys talked a lot about their Stealth product, while at VMworld there was much talk about the VMware NSX product and its capabilities, including security by Micro-Segmentation.

The basic idea of Datacenter Micro-Segmentation, the most common approach to Micro-Segmentation, is to split the network into small (micro) segments for particular workloads, based on virtual networks with additional capabilities such as integrated firewalls, access control enforcement, etc.

With Micro-Segmentation, there might even be multiple segments for a particular workload, such as the web tier, the application tier, and the database tier. This further strengthens security, because different access and firewall policies can be applied to the various segments. In virtualized environments, such segments can be created and managed far more easily than in physical environments with a multitude of disparate elements from switches to firewalls.
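
To make the idea more concrete, here is a small, purely illustrative sketch of how per-tier segment policies could be modeled and evaluated with a default-deny rule; it is not tied to any particular product such as VMware NSX or Unisys Stealth, and all segment names and ports are hypothetical.

```python
# Illustrative sketch of tier-based micro-segment policies.
# All segment names, ports and rules are hypothetical examples.

SEGMENT_POLICIES = {
    "web-tier": {"allowed_inbound": [("internet", 443)],  "allowed_outbound": [("app-tier", 8443)]},
    "app-tier": {"allowed_inbound": [("web-tier", 8443)], "allowed_outbound": [("db-tier", 5432)]},
    "db-tier":  {"allowed_inbound": [("app-tier", 5432)], "allowed_outbound": []},
}

def is_flow_allowed(src_segment: str, dst_segment: str, dst_port: int) -> bool:
    """Default-deny check: a flow is allowed only if both the source and the
    destination segment policies explicitly permit it."""
    src = SEGMENT_POLICIES.get(src_segment)
    dst = SEGMENT_POLICIES.get(dst_segment)
    if src is None or dst is None:
        return False
    return ((dst_segment, dst_port) in src["allowed_outbound"]
            and (src_segment, dst_port) in dst["allowed_inbound"])

# Example: the web tier may talk to the app tier, but never directly to the database.
assert is_flow_allowed("web-tier", "app-tier", 8443)
assert not is_flow_allowed("web-tier", "db-tier", 5432)
```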

Obviously, by having small, well-protected segments with well-defined interfaces to other segments, security can be increased significantly. However, it is not only about datacenters.

The applications and services running in the datacenter are accessed by users. This might happen through fat-client applications or web interfaces; we also see a massive uptake in the use of APIs, both by client-side apps and by backend applications consuming and processing data from other backend services. In addition, there is a variety of scenarios where data is stored or processed locally, starting with documents downloaded from backend systems.

Clearly, not everything can be protected perfectly well. Data accessed through browsers is out of control once it reaches the client – unless the client can become a part of the secure environment as well.

Still, there are more options, particularly within organizations that have good control of everything inside the perimeter and at least some level of control over the devices. Ideally, everything becomes protected across the entire business process, from the backend systems to the clients. Within that segmentation, other segments can exist, such as micro-segments at the backend. Such “Business Process Micro-Segmentation” does not stand in contrast to Datacenter Micro-Segmentation, but extends the concept.

From my perspective, we will need two major extensions for moving beyond Datacenter Micro-Segmentation to Business Process Micro-Segmentation. One is encryption. While the technical approach to network virtualization means there is limited need for encryption within the datacenter (though you should not consider your datacenter 100% safe!), the client resides outside the datacenter. The minimal approach is protecting the transport by means such as TLS. More advanced encryption is available in solutions such as Unisys Stealth.
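
As a minimal illustration of transport protection, the sketch below opens a TLS connection with certificate and hostname validation using nothing but Python's standard library; the backend host name is just a placeholder.

```python
# Minimal sketch: protecting the transport with TLS (certificate validation enabled).
# "backend.example.com" is a placeholder for a real backend endpoint.
import socket
import ssl

context = ssl.create_default_context()   # loads the system CA store, enables hostname checks

with socket.create_connection(("backend.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="backend.example.com") as tls_sock:
        print("Negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.3'
```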

The other area for extension is policy management. When looking at the entire business process – and not only the datacenter part – protecting the clients by integrating areas like endpoint security into the policy becomes mandatory.

Neither Business Process Micro-Segmentation nor Datacenter Micro-Segmentation will solve all of our Information Security challenges. Both are only building blocks within a comprehensive Information Security strategy. In my opinion, thinking beyond Datacenter Micro-Segmentation towards Business Process Micro-Segmentation is also a good example of the fact that there is not a “holy grail” for Information Security. Once organizations start sharing information with external parties beyond their perimeter, other technologies such as Information Rights Management – where documents are encrypted and distributed along with the access controls that are subsequently enforced by client-side applications – come into play.

While there is value in Datacenter Micro-Segmentation, it is clearly only a piece of a larger concept – in particular because the traditional perimeter no longer exists, which also makes it more difficult to define the segments within the datacenter. Once workloads are flexibly distributed between various datacenters in the Cloud and on-premises, pure Datacenter Micro-Segmentation reaches its limits anyway.



AWS IoT Suite

Jan 20, 2016 by Alexei Balaganski

After an “extended holiday season” (which for me included spending a vacation in Siberia and then desperately trying to get back into shape) it’s finally time to resume blogging. And the topic for today is the cloud platform for IoT services from AWS, which went out of beta in December. Ok, I know it’s been a month already, but better late than never, right?

As I have mentioned before, the very definition of the Internet of Things is far too broad, and people tend to lump many different types of devices under this term. So, if your idea of the Internet of Things means controlling your thermostat or your car from your mobile phone, the new service from AWS is probably not what you need. If, however, your IoT includes thousands or even millions of sensors generating massive amounts of data that need to be collected, processed by complex rules and finally stored somewhere, then look no further, especially if you already have your backend services in the AWS cloud.

In fact, with AWS being the largest cloud provider, it’s safe to assume that its backend services have already been used for quite a few IoT projects. Until now, however, such projects had to rely on third-party middleware to connect their “things” to AWS services. Now the company has closed the gap by offering its own managed platform for interacting with IoT devices and processing data collected from them. Typically for AWS, the solution follows a no-frills, no-nonsense approach, offering native integration with existing AWS services, a rich set of SDKs and development tools, and aggressive pricing. In addition, they are bringing in a number of hardware vendors with starter kits that can help quickly implement a prototype for your new IoT project. And, of course, with the amount of computing resources at hand, they can safely claim to be able to manage billions of devices and trillions of messages.

The main components of the new platform are the following:

The Device Gateway supports low-latency bi-directional communication between IoT devices and cloud backends. AWS supports both standard HTTP and the much more resource-efficient MQTT messaging protocol, both secured by TLS. Strong authentication and fine-grained authorization are provided by the familiar AWS IAM services, with a number of simplified APIs available.
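
As an illustration of the MQTT-over-TLS path, here is a minimal sketch using the generic paho-mqtt library (1.x API) rather than an AWS SDK; the endpoint, topic, client ID and certificate file names are placeholders that would come from your own AWS IoT configuration.

```python
# Sketch: publishing a sensor reading over MQTT secured by TLS.
# Uses the generic paho-mqtt client (1.x API); endpoint, topic, client ID and
# certificate file names are placeholders from a hypothetical AWS IoT setup.
import json
import paho.mqtt.client as mqtt

ENDPOINT = "example-ats.iot.eu-west-1.amazonaws.com"   # placeholder AWS IoT endpoint
TOPIC = "sensors/device42/telemetry"                   # hypothetical topic name

client = mqtt.Client(client_id="device42")
client.tls_set(ca_certs="root-ca.pem",
               certfile="device42-certificate.pem",
               keyfile="device42-private.key")         # X.509 client authentication
client.connect(ENDPOINT, port=8883)                    # 8883 = MQTT over TLS
client.loop_start()

client.publish(TOPIC, json.dumps({"temperature": 21.5}), qos=1)

client.loop_stop()
client.disconnect()
```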

The Device Registry keeps track of all devices currently or potentially connected to the AWS IoT infrastructure. It provides various management functions like support and maintenance or firmware distribution. Besides that, the registry maintains Device Shadows – virtual representations of IoT devices, which may be only intermittently connected to the Internet. This functionality allows cloud and mobile apps to access all devices using a universal API, masking all the underlying communication and connectivity issues.
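
Device Shadows are also reachable over MQTT via reserved topics. The short snippet below (reusing the `client` from the previous sketch, with a hypothetical thing name) publishes a desired state that the device will pick up once it reconnects.

```python
# Update the shadow's "desired" state; the device applies it when it next connects.
# Reuses the `client` from the previous sketch; "device42" is a hypothetical thing name.
import json

shadow_update_topic = "$aws/things/device42/shadow/update"
desired_state = {"state": {"desired": {"led": "on"}}}

client.publish(shadow_update_topic, json.dumps(desired_state), qos=1)
```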

The Rules Engine enables continuous processing of data sent by IoT devices. It supports a large number of rules for filtering and routing the data to AWS services like Lambda, DynamoDB or S3 for processing, analytics and storage. It can also apply various transformations on the fly, including math, string, crypto and other operations or even call external API endpoints.
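
To sketch what registering such a rule could look like programmatically, the snippet below assumes the boto3 iot client's create_topic_rule call; the rule name, topic filter and Lambda function ARN are hypothetical placeholders.

```python
# Sketch: registering an IoT topic rule that routes hot readings to a Lambda function.
# Rule name, topic filter and function ARN are hypothetical placeholders.
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="ForwardHighTemperature",
    topicRulePayload={
        # IoT rules use an SQL-like syntax for filtering and selecting message fields.
        "sql": "SELECT temperature, deviceId FROM 'sensors/+/telemetry' WHERE temperature > 50",
        "actions": [
            {"lambda": {"functionArn": "arn:aws:lambda:eu-west-1:123456789012:function:handleAlert"}}
        ],
        "ruleDisabled": False,
    },
)
```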

A number of SDKs are provided including a C SDK for embedded systems, a node.js SDK for Linux, an Arduino library and mobile SDKs for iOS and Android. Combined with a number of “official” hardware kits available to play with, this ensures that developers can quickly start working on an IoT project of almost any kind.

Obviously, one has to mention that Amazon isn’t the first cloud provider to offer an IoT solution – Microsoft announced its Azure IoT Suite earlier in 2015, and IBM has its own Internet of Things Foundation program. However, each vendor has a unique approach to addressing the various IoT integration issues. The new solution from AWS, with its strong focus on existing standard protocols and unique features like device shadows, not only looks compelling to existing AWS customers, but will definitely kick-start quite a few new large-scale IoT projects. On the Amazon cloud, of course.



Secured by Design: The smart streets of San Francisco

Jan 15, 2016 by Martin Kuppinger

On April 18th, 1906, an earthquake and the fires that followed destroyed nearly three quarters of San Francisco. Around 3,000 people lost their lives. Many other, less severe tremors have followed right up to the present day. The danger of another catastrophe can’t be ignored. In a city like San Francisco, however wonderful it might be to live there, people always have to be aware that their whole world can change in an instant. Now the Internet of Things (IoT) can help make alarm systems better. People in this awesome city can at least be sure that the mayor and his office staff do their best to keep them safe and secure in all respects.

Not only that: with the help of the Internet of Things (IoT), they are also looking for new ways to make the life of the citizens more convenient. That became clear to me when I saw ForgeRock’s presentation about their IoT and Identity projects in San Francisco. I noticed with pleasure that Lasse Andresen, ForgeRock’s CTO and founder, confirmed what I have been saying for quite some time: security and privacy must not be an afterthought. Designed in correctly from the start, they do not hinder new successful business models but actually enable them. In IoT, security and privacy are integral elements. They lead to more agility and less risk.

Andresen says in the presentation that Identity, Security and Privacy are core to IoT: “It’s kind of what makes IoT work or not work. Or making big data valuable or not valuable.” San Francisco is a great example of what that means in practice. Everything – “every thing” – in this huge city shall have its own unique identity, from the utility meter to the traffic lights and parking spaces to the police, firefighters and ambulances. This allows fast, secure and coordinated action in case of emergencies. Because of their identities and with geolocation, the current position of each vehicle is always exactly known to the emergency coordinators. The firefighters identify themselves with digital key cards at the scene to show that they are authorized to be there. Thus everything and everyone becomes connected with each other – people, things and services – with identity as the glue.

Identity information enables business models that, for example, improve life in the city. The ForgeRock demonstration shows promising examples such as optimizing traffic flow and road planning with big data, street lights that reduce power consumption by turning on and off automatically, smart parking that allows drivers to reserve a space online in advance, combined with demand-based pricing of parking spaces, and, last but not least, live optimization of service routes.

The ForgeRock solution matches the attributes and characteristics of human users to those of things, devices, and apps, collects all the notifications in a big data repository and then flexibly manages the relationships between all entities – people and things – from this central authoritative source. Depending on her or his role, each user is carefully provisioned with access to certain devices as well as certain rights and privileges. That is why identity is a prerequisite for secure relationships. Things are just another channel demanding access to the internet. It has to be clear what they are allowed to do, e.g. may item A send sensitive data to a certain server B? If so, does the information have to be encrypted? Without a concept for identities and their relationships, and for managing their access, there are too many obstacles to successfully changing business models and regulations.

Besides the questions about security and privacy, the lack of standards has long been the biggest challenge for a fully functioning IoT. A multitude of platforms, various protocols and many different APIs have made overall integration of IoT systems problematic. Yes, there are even many competing “standards”. However, with User-Managed Access (UMA) a standard has finally emerged that takes care of managing access rights. With UMA, millions of users can manage their own access rights and keep full control over their own data without handing it to the service provider. They alone decide which information they share with others. While the resources may be stored on several different servers, a central authorization server ensures that the rules laid down by the owner are reliably applied. Any enterprise that adopts UMA early now has the chance to build a new, strong and long-lasting relationship with customers built on security and privacy by design.
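
To make the UMA idea more tangible, here is a purely conceptual sketch (it does not reproduce the actual UMA protocol messages or endpoints): the owner's sharing rules live at a central authorization server, and resource servers delegate the access decision to it. All names and resources are made up.

```python
# Conceptual sketch of the UMA idea (not the actual UMA protocol messages):
# the owner's sharing rules live at a central authorization server, and
# resource servers delegate access decisions to it. All names are made up.

class AuthorizationServer:
    def __init__(self):
        # owner -> resource -> set of parties the owner has chosen to share with
        self.policies = {}

    def set_policy(self, owner, resource, allowed_parties):
        self.policies.setdefault(owner, {})[resource] = set(allowed_parties)

    def is_authorized(self, owner, resource, requesting_party):
        return requesting_party in self.policies.get(owner, {}).get(resource, set())

class ResourceServer:
    """One of possibly many servers storing the owner's resources."""
    def __init__(self, auth_server):
        self.auth_server = auth_server

    def access(self, owner, resource, requesting_party):
        if self.auth_server.is_authorized(owner, resource, requesting_party):
            return f"{resource} released to {requesting_party}"
        return "access denied"

# The owner alone decides who may see the meter data, no matter where it is stored.
authz = AuthorizationServer()
authz.set_policy("alice", "smart-meter-readings", {"city-energy-service"})
rs = ResourceServer(authz)
print(rs.access("alice", "smart-meter-readings", "city-energy-service"))  # released
print(rs.access("alice", "smart-meter-readings", "ad-network"))           # access denied
```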



Stack creep - from the network layer to the application layer

Jan 12, 2016 by Graham Williamson

Last year saw unprecedented interest in the protection of corporate data. After several high-profile losses of intellectual property, organisations have started looking for a better way.

For the past 30 years the bastion against data loss has been network devices. We have relied on routers, switches and firewalls to protect our classified data and ensure it is not accessed by unauthorised persons. Databases were housed on protected sub-nets to which we could restrict access on the basis of an IP address, a Kerberos ticket or AD group membership.

But there are a couple of reasons that this approach is no longer sufficient. Firstly, with the relentless march of technology the network perimeter is increasingly “fuzzy”. No longer can we rely on secure documents being used and stored on the corporate network. Increasingly we must share data with business partners and send documents external to the corporate network. We need to store documents on Cloud storage devices and approve external collaborators to access, edit, print and save our documents as part of our company’s business processes.

Secondly, we are increasingly being required to support mobile devices. We can no longer rely on end-point devices that we can control with a standard operating environment and a federated login. We must now support tablets and smartphone devices that may be used to access our protected documents from public spaces and at unconventional times of the day.

As interest in more sophisticated ways to protect documents has risen, so has the number of available solutions. We are experiencing unprecedented interest in Information Rights Management (IRM), whereby a user’s permission to access or modify a document is validated at the time access is requested. When a document is created, the author, or a corporate policy, classifies the document appropriately to control who can read, edit, save or print it. IRM can also be used to limit the number of downloads of a document or to time-limit access rights. Most solutions in this space support AD Rights Management and Azure Rights Management; some adopt their own information rights management approach with end-point clients that manage external storage or emailing.

Before selecting a solution companies should understand their needs. A corporate-wide secure document repository solution for company staff is vastly different from a high-security development project team sharing protected documents with collaboration partners external to the company. A CIA approach to understanding requirements is appropriate:

  • Confidentiality – keep secret: Typically, encryption is deployed to ensure data at rest is protected from access by unapproved persons. Solutions to this requirement vary from strong encryption of document repositories to a rights management approach requiring document classification mechanisms and IRM-enabled client software.
  • Integrity – keep accurate: Maintaining a document’s integrity typically involves a digital signature and key management in order to sign and verify signatures; a minimal signing sketch follows this list. Rights management can also be employed to ensure that a document has not been altered.
  • Availability – secure sharing: Supporting business processes by making protected documents available to business partners is at the core of secure information sharing. Persons wanting access to confidential information should not have to go through a complex or time-consuming procedure in order to gain access to the required data. Rights management can provide a secure way to control permissions to protected documents while making appropriate data available for business purposes.
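
To illustrate the integrity requirement above, here is a minimal signing and verification sketch using the Python cryptography package; it is a generic example rather than a depiction of any specific rights management product.

```python
# Minimal sketch: protecting document integrity with a digital signature.
# Generic example using the 'cryptography' package, not any specific IRM product.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

document = b"Q3 product roadmap - confidential"

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(document)      # created when the document is published
verify_key = signing_key.public_key()       # distributed to recipients

try:
    verify_key.verify(signature, document)  # raises if the document was altered
    print("Document integrity verified")
except InvalidSignature:
    print("Document has been tampered with")
```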

Never has there been a better time to put in place a secure data sharing infrastructure that leverages an organisation’s identity management environment to protect corporate IP while at the same time enhancing business process integration.



Information Rights Management explained

Jan 12, 2016 by Alexei Balaganski

With the amount of digital assets a modern company has to deal with growing exponentially, the need to access them any time, from any place, across various devices and platforms has become a critical factor for business success. And this does not apply just to employees – to stay competitive, modern businesses must be increasingly connected to their business partners, suppliers, current and future customers and even smart devices (or things). New digital businesses therefore have to be agile and connected.

Unsurprisingly, the demand for solutions that provide strongly protected storage, fine-grained access control and secure sharing of sensitive digital information is extremely high nowadays, with vendors rushing to bring their various solutions to the market. Of course, no single information sharing solution can possibly address all the different and often conflicting requirements of different organizations and industries, and the sheer number and diversity of such solutions is a strong indicator of this. Vendors may decide to support just certain types of storage or document formats, concentrate on solving a specific pain point shared by many companies, such as enabling mobile access, or design their solutions for specific verticals only.

The traditional approach to securing sensitive information is to store it in a secured repository, on-premises or in the cloud. By combining strong encryption with customer-managed encryption keys or, in the most extreme cases, even implementing the Zero Knowledge Encryption principle, vendors are able to address even the strictest security and compliance requirements. However, as soon as a document leaves the repository, traditional solutions are no longer able to ensure its integrity or to prevent unauthorized access to it.

Information Rights Management (IRM) offers a completely different, holistic approach towards secure information sharing. Evolving from earlier Digital Rights Management technologies, the underlying principle behind IRM is data-centric security. Essentially, each document is wrapped in a tiny secured container and has its own access policy embedded directly in it. Each time an application needs to open, modify or otherwise access the document, it needs to validate user permissions with a central authority. If those permissions are changed or revoked, this will be immediately applied to the document regardless of its current location. The central IRM authority also maintains a complete audit trail of document accesses.
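
As a toy sketch of this data-centric idea (purely illustrative, not how any particular IRM product is implemented): the payload is encrypted, the policy is registered with a central authority, and the decryption key is only released after that authority has confirmed the user's permission and logged the access. All names are made up.

```python
# Toy sketch of data-centric protection: the document carries its identity and policy,
# and the decryption key is only released after a central authority check.
# Purely illustrative; not how any specific IRM product is implemented.
from cryptography.fernet import Fernet

class IRMAuthority:
    """Central authority holding keys, policies and an audit trail."""
    def __init__(self):
        self.keys, self.policies, self.audit_log = {}, {}, []

    def protect(self, doc_id, plaintext, allowed_users):
        key = Fernet.generate_key()
        self.keys[doc_id] = key
        self.policies[doc_id] = set(allowed_users)
        return {"doc_id": doc_id, "ciphertext": Fernet(key).encrypt(plaintext)}

    def request_key(self, doc_id, user):
        allowed = user in self.policies.get(doc_id, set())
        self.audit_log.append((doc_id, user, "granted" if allowed else "denied"))
        return self.keys[doc_id] if allowed else None

authority = IRMAuthority()
container = authority.protect("contract-2016-001", b"Sensitive contract text", {"alice"})

key = authority.request_key(container["doc_id"], "alice")    # permission checked centrally
print(Fernet(key).decrypt(container["ciphertext"]))           # b'Sensitive contract text'
print(authority.request_key(container["doc_id"], "mallory"))  # None: access denied/revoked
```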

Thus, IRM is the only approach that can protect sensitive data at rest, in motion, and in use. In the post-firewall era, this approach is fundamentally more future-proof, flexible and secure than any combination of separate technologies addressing different stages of the information lifecycle. However, it has one fatal flaw: IRM only works without impeding productivity if your applications support it. Although IRM solutions have come a long way from complicated on-premise deployments towards completely managed cloud services, their adoption rate is still quite low. Probably the biggest reason for that is the lack of interoperability between different IRM implementations, but arguably more harmful is the lack of general awareness that such solutions even exist! Recently, however, the situation has changed, with several notable vendors increasing their efforts to market IRM-based solutions.

One of the pioneers and certainly the largest such vendor is Microsoft. With the launch of their cloud-based Azure Rights Management services in 2014, Microsoft finally made their IRM solution affordable not just for large enterprises. Naturally, Microsoft’s IRM is natively supported by all Microsoft Office document formats and applications. PDF documents, images or text files are natively supported as well, and generic file encapsulation into a special container format is available for all other document types. Ease of deployment, flexibility and support across various device platforms, on-premise and cloud services make Azure RMS the most comprehensive IRM solution in the market today.

However, other vendors are able to compete in this field quite successfully as well either by adding IRM functionality into their existing platforms or by concentrating on delivering more secure, more comprehensive or even more convenient solutions to address specific customer needs.

A notable example of the former is Intralinks. In 2014, the company acquired docTrackr, a French vendor with an innovative plugin-free IRM technology. By integrating docTrackr into their secure enterprise collaboration platform VIA, Intralinks is now able to offer seamless document protection and policy management to their existing customers. Another interesting solution is Seclore FileSecure, which provides a universal storage- and transport-neutral IRM extension for existing document repositories.

Among the vendors that offer their own IRM implementations, one can name Covertix, which offers a broad portfolio of data protection solutions with a focus on strong encryption and comprehensive access control across multiple platforms and storage services. On the other end of the spectrum one can find vendors like Prot-On, which focus more on ease of use and a seamless experience, providing their own EU-based cloud service to address local privacy regulations.

For more in-depth information about leading vendors and products in the file sharing and collaboration market please refer to KuppingerCole’s Leadership Compass on Secure Information Sharing.



ISO/IEC 27017 – was it worth the wait?

Jan 05, 2016 by Mike Small

On November 30th, 2015 the final version of the standard ISO/IEC 27017 was published.  This standard provides guidelines for information security controls applicable to the provision and use of cloud services.  This standard has been some time in gestation and was first released as a draft in spring 2015.  Has the wait been worth it?  In my opinion yes.

The gold standard for information security management is ISO/IEC 27001, together with the guidance given in ISO/IEC 27002. These standards remain the foundation, but the guidelines are largely written on the assumption that an organization processes its own information. The increasing adoption of managed IT and cloud services, where responsibility for security is shared, is challenging this assumption. This is not to say that these standards and guidelines are not applicable to the cloud; rather, they need interpretation in a situation where the information is being processed externally. The ISO/IEC 27017 and ISO/IEC 27018 standards provide guidance to deal with this.

ISO/IEC 27018, which was published in 2014, establishes controls and guidelines for measures to protect Personally Identifiable Information in the public cloud computing environment. The guidelines are based on those specified in ISO/IEC 27002, with control objectives extended to include the requirements needed to satisfy the privacy principles in ISO/IEC 29100. These are easily mapped onto the existing EU privacy principles. This standard is extremely useful in helping an organization assure compliance when using a public cloud service to process personally identifiable information. Under these circumstances the cloud customer is the Data Controller and, under current EU laws, remains responsible for processing breaches by the Data Processor. To provide this level of assurance, some cloud service providers have obtained independent certification of their compliance with this standard.

The new ISO/IEC 27017 provides guidance that is much more widely applicable to the use of cloud services.  Specific guidance is provided for 37 of the existing ISO/IEC 27002 controls; separate but complementary guidance is given for the cloud service customer and the cloud service provider.  This emphasizes the shared responsibility for security of cloud services.  This includes the need for the cloud customer to have policies for the use of cloud services and for the cloud service provider to provide information to the customer.

For example, as regards restricting access (ISO 27001 control A.9.4.1) the guidance is:

  • The cloud service customer should ensure that access to information in the cloud service can be restricted in accordance with its access control policy and that such restrictions are realized.
  • The cloud service provider should provide access controls that allow the cloud service customer to restrict access to its cloud services, its cloud service functions and the cloud service customer data maintained in the service.

In addition, the standard defines 7 new controls that are relevant to cloud services. These new controls are numbered to fit with the relevant existing ISO/IEC 27002 controls; the extended controls cover:

  • Shared roles and responsibilities within a cloud computing environment
  • Removal and return of cloud service customer assets
  • Segregation in virtual computing environments
  • Virtual machine hardening
  • Administrator's operational security
  • Monitoring of Cloud Services
  • Alignment of security management for virtual and physical networks

In summary, ISO/IEC 27017 provides very useful guidance, and KuppingerCole recommends that this guidance should be followed by both cloud customers and cloud service providers. While it is helpful for cloud service providers to have independent certification that they comply with this standard, this does not remove the responsibility from the customer for ensuring that they also follow the guidance.

KuppingerCole has conducted extensive research into cloud service security and compliance and into cloud service providers, as well as engaging with cloud service customers. This research has led to a deep understanding of the real risks around the use of cloud services and how to approach these risks in order to safely gain the potential benefits. We have created services, workshops and tools designed to help organizations manage their adoption of cloud services in a secure and compliant manner while preserving the advantages that these kinds of IT services bring.



Why Distributed Public Ledgers such as Blockchain will not solve the identification and thus the authentication problem

Dec 17, 2015 by Martin Kuppinger

There is a lot of talk about Blockchain and, more generally, Distributed Public Ledgers (DPLs) these days. Some try to position DPLs as a means for better identification and, in consequence, authentication. Unfortunately, this will not really work. We might see a few approaches for stronger or “better” identification and authentication, but no real solution. Not even by DPLs, which I see as the most disruptive innovation in Information Technology in a very, very long time.

Identification is the act of finding out whether someone (or something) is really the person (or thing) he (it) claims to be. It is about knowing whether the person claiming to be Martin Kuppinger is really Martin Kuppinger or in fact someone else.

Authentication, in contrast, is the act of proving that you hold such a proof – a key, a password, a passport, or whatever. The quality of authentication depends on the one hand on the quality of identification (to obtain the proof) and on the other hand on aspects such as protection against forgery and the strength of the authentication mechanism itself.

Identification is, basically, the challenge in the enrollment process for an authenticator. There are various ways of doing it. People might be identified by their DNA or fingerprints – which works as long as you know whom that DNA or fingerprint belongs to. But even then, you might not have the real name of that person. People might be identified by showing their ID cards or passports – which works well unless they use fake ID cards or passports. People might be identified by linking profiles of social networks together – which, to be honest, doesn’t help much. They might use fake profiles or they might use fake names in real profiles. There is no easy solution for identification.

In the end, it is about trust: do we trust the identification performed when rolling out the authenticators enough to trust those authenticators?

Authentication can be performed with a variety of mechanisms. Again, this is about trust: How much do we trust a certain authenticator? However, authentication does not identify you. It proves that you know the username and password; that you possess a token; or that someone has access to your fingerprints. Some approaches are more trustworthy; others are less trustworthy.

So why don’t DPLs such as Blockchain solve the challenge of identification and authentication? For identification, this is obvious. They might provide a better proof that an ID is linked to various social media profiles (such as with Onename), but they don’t solve the underlying identification challenge.

DPLs also don’t solve the authentication issue. If you have such an ID, it either must be unlocked in some way (e.g. by password, in the worst case) or bound to something (e.g. a device ID). That is the same challenge as we have today.

DPLs can help improve trust, e.g. by proving that the same social media profiles are still linked. They can support non-repudiation, which is an essential element. They will increase the trust level as the number of parties participating in a DPL grows. But they can’t solve the underlying challenges of identification and authentication. Simply put, technology will never know exactly who someone is.



Cyber Security: Why Machine Learning is Not Enough

Dec 09, 2015 by Martin Kuppinger

Currently, there is a lot of talk about new analytical approaches in the field of cyber security. Anomaly detection and behavioral analytics are some of the overarching trends along with RTSI (Real Time Security Intelligence), which combines advanced analytical approaches with established concepts such as SIEM (Security Information and Event Management).

Behind all these changes and other new concepts, we find a number of buzzwords such as pattern-matching algorithms, predictive analytics, or machine learning. Aside from the fact that such terms frequently aren’t used correctly and precisely, some of the concepts have limitations by design, e.g. machine learning.

Machine learning implies that the “machine” (a piece of software) is able to “learn”. In fact, this means that the machine is able to improve its results over time by analyzing the effects of previous actions and then adjusting its future actions.

One of the challenges with cyber security is the fact that there are continuously new attack vectors. Some of them are just variants of established patterns; some of them are entirely new. In an ideal world, a system is able to recognize unknown vectors. Machine learning per se doesn’t – the concept is learning from things that have gone wrong before.

This is different from anomaly detection, which identifies unknown or changing patterns. Here, something new is identified as an anomaly.
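
As a toy illustration of the difference: a simple statistical anomaly detector flags anything that deviates strongly from the behaviour it has observed so far, without needing a labeled example of a past attack. The data and threshold below are made up.

```python
# Toy statistical detector: flags values far outside the behaviour observed so far,
# without needing labeled examples of past attacks. Data and threshold are made up.
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a value whose z-score against the observed history exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) / sigma > threshold

baseline = [12, 9, 11, 10, 13, 8, 10, 11, 9, 10]   # e.g. failed logins per minute
print(is_anomalous(baseline, 11))    # False: within normal variation
print(is_anomalous(baseline, 950))   # True: never seen before, yet clearly an anomaly
```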

Interestingly, some of the technologies whose marketing talks about “machine learning” in fact do a lot more than ex-post-facto machine learning. Frequently, it is not a matter of technology but of the wrong use of buzzwords in marketing. In any case, customers should be careful about buzzwords: ask the vendor what is really meant by them. And ask yourself whether the information provided by the vendor really is valid and solves your challenge.



Security and Privacy: An opportunity, not a threat

Dec 08, 2015 by Martin Kuppinger

One of the lessons I have learned over the years is that it is far simpler to “sell” things by focusing on the positive aspects instead of just explaining that risk can be reduced. This is particularly true for Information Security. It also applies to privacy as a concept. A few days ago I had a conversation about the opportunities organizations have to better sell their software or services by supporting advanced privacy features. The argument was that organizations can achieve better competitive positioning by supporting high privacy requirements.

Unfortunately, this is only partially true. It is true in areas with strong compliance regulations. It is true for the part of the customer base that is privacy-sensitive. However, it might even become an inhibitor in other countries with different regulations and expectations.

There are three different groups of arguments for implementing more security and privacy in applications and services:

  1. Security and regulatory requirements – even while they must be met, these arguments are about something that must be done, with no business benefit.
  2. Competitive differentiation – an opportunity; however, as described above, that argument commonly is only relevant for certain areas and some of the potential customers. For these, it is either a must-have (regulations) or a positive argument, a differentiator (security/privacy sensitive people).
  3. Security and privacy as a means for becoming more agile in responding to business requirements. Here we are talking about positive aspects. Software and services that can be as secure as they need to be (depending on regulations or customer demand) or as open as the market requires allow organizations to react flexibly to changing requirements.

The third approach is obviously the most promising one when trying to sell your project internally as well as your product to customers.



“A Stab in the Back” of IoT Security

Dec 07, 2015 by Alexei Balaganski

Following the topic of Internet of Things security covered in our latest Analysts’ View newsletter, I’d like to present a perfect example of how IoT device manufacturers are blatantly ignoring the most basic security best practices in their products. As the Austrian information security company SEC Consult revealed in their report, millions of embedded devices around the world, including routers and modems, IP phones, cameras and other network products, reuse a small number of hardcoded SSH keys and SSL certificates.

According to SEC Consult, they analyzed the firmware images (usually freely available for download from manufacturers’ websites) of over 4,000 devices and were able to extract more than 580 unique private keys. Remember, a private key is the most critical component of any public key infrastructure and, according to the most basic security best practices, has to be protected from falling into the wrong hands by all means available. The researchers then correlated their findings with data from internet-wide scans, again publicly available to anyone interested, and found that a handful of those hardcoded keys are used on over 4 million hosts directly connected to the Internet.

Although similar research has been done before, this time the company was able to name the concrete products and vendors responsible, which include both small regional manufacturers and large international companies like Cisco, Huawei or ZyXEL. These devices are deployed by large internet service providers around the world, exposing millions of their subscribers to possible attacks.

One can only speculate about the exact reasons why a particular manufacturer would include a hardcoded key in its product, but in the end it all boils down to blindly reusing sample code supplied by the manufacturers of the network chips or boards that power these devices. Whether because of incompetence or pure negligence, these “default” keys or certificates end up being included in device firmware images.

Since hackers would have the private keys at hand, they could launch different types of attacks, including impersonation, man-in-the-middle or passive decryption attacks. Although the researchers rightfully point out that exploiting modems or routers from the internet is difficult and mostly limited to “evil ISPs”, one has to realize that SEC Consult’s research has only revealed the tip of an iceberg, and that their findings do not present an exceptional case but rather the typical approach of many IoT vendors towards security. As more and more smart devices are deployed everywhere – in hospitals, connected cars and traffic lights, or in manufacturing plants and power grids – the risk of exposing these devices to key reuse attacks increases dramatically, along with the severity of the possible consequences of such attacks.

So, what can and must be done to prevent these attacks in the future? SEC Consult’s report outlines the steps that vendors and ISPs have to take, and they are pretty obvious. Device vendors have to stop including hardcoded keys in their firmware and instead generate unique keys on the first boot. ISPs should ensure that the devices they install have remote management disabled. End users should change the keys in their devices (which, by the way, requires certain technical skills and in many devices is not permitted at all).
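
As a sketch of the “unique keys on first boot” recommendation, the snippet below generates an SSH host key only if none exists yet, for example from a first-boot init script; the path is a typical example and the exact integration depends on the device's init system.

```python
# Sketch of the "generate unique keys on first boot" recommendation:
# only create a host key if none exists yet, e.g. from a first-boot init script.
# The path is a typical example; integration depends on the device's init system.
import os
import subprocess

HOST_KEY = "/etc/ssh/ssh_host_ed25519_key"

def ensure_unique_host_key():
    if os.path.exists(HOST_KEY):
        return  # key already generated on an earlier boot
    subprocess.run(
        ["ssh-keygen", "-t", "ed25519", "-f", HOST_KEY, "-N", ""],  # empty passphrase
        check=True,
    )

if __name__ == "__main__":
    ensure_unique_host_key()
```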

However, the bigger question isn’t what’s needed to fix the problem, but how to force vendors and internet providers to change their current business processes. They have not cared about security for years; why would they suddenly change their minds and start investing in it? There is no single answer to this question, and in any case a combined effort of government agencies, security experts and the end users themselves is needed to break the current trend. Only when vendors realize that building their products on the Security by Design principle not only saves them from massive fines and legal problems but in fact makes their products more competitive on the market can we expect to see positive changes. Until then, IoT security will remain simply a fictional concept.



