
KuppingerCole Blog

Why we need adaptive authentication everywhere - not just in eBanking

Feb 09, 2016 by Matthias Reinwarth

Ask people which online activity should be highly secure, and electronic banking is most probably the first thing that comes to mind. Your account data, your credit card transactions and all the various types of transactions that can be executed online today, ranging from simple status queries to complex brokerage, require adequate protection. Because the criticality of individual transactions varies substantially, the required level of protection varies as well. This makes electronic banking the perfect use case for explaining and demonstrating adaptive authentication.

But creating an access matrix that maps a set of client-side attributes on one axis to application functionality of varying criticality on the other makes perfect sense for almost any application available online today. Analysing a user's location, the time of day and time zone, and the operating system or browser type and version of the device used provides vital information for identifying heightened access risk. More sophisticated context data can be derived from the number of consecutive failed login attempts and the time the successful login actually took (several failed attempts followed by a very fast login may indicate an automated brute-force attack). Defined and implemented appropriately, these mechanisms are substantial improvements for online privacy and security.
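
To make this concrete, here is a minimal sketch of how such context signals could be combined into a risk score. All attribute names, weights and thresholds are hypothetical illustrations, not taken from any particular product:

```python
# Hypothetical context-based risk scoring; attribute names, weights and
# thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class LoginContext:
    country: str
    device_known: bool
    failed_attempts: int            # consecutive failed logins before success
    login_duration_seconds: float   # time from form load to successful submit

def risk_score(ctx, trusted_countries):
    score = 0
    if ctx.country not in trusted_countries:
        score += 30
    if not ctx.device_known:
        score += 20
    if ctx.failed_attempts >= 3:
        score += 25
    # Several failures followed by a near-instant success hints at automation.
    if ctx.failed_attempts >= 3 and ctx.login_duration_seconds < 1.0:
        score += 25
    return score

ctx = LoginContext(country="XX", device_known=False,
                   failed_attempts=4, login_duration_seconds=0.4)
if risk_score(ctx, trusted_countries={"DE", "CH", "AT"}) >= 50:
    print("request an additional authentication factor")
else:
    print("username and password are sufficient")
```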

The most important argument for having adaptive authentication in almost every other application as well may come as a surprise: it is ease of use. Many applications provide some functionality that always requires strong authentication, and other functionality that should be protected more strongly only when suspicious context information suggests it. The majority of functionality, however, is usually of lower criticality and should therefore be available without customers / citizens / members / subscribers / … having to provide strong authentication data, such as a second factor.

  • Many users log into their favourite shopping portals on a regular basis just for “research purposes”, without actually buying anything in that session (i.e. the digital equivalent of window shopping). As long as no purchase is made, no additional authentication should be required in most cases.
  • The same can apply to accessing basic functionality (e.g. read-only access to general information of the local municipality) within an eGovernment site. As long as no PII or other sensitive data is retrieved, let alone modified, simple authentication on the basis of a username and a reasonably strong password should be sufficient.
  • Watching a cartoon suitable for children on your favourite video streaming service might be fine using the already configured basic authentication, but an attempt to change the parental control settings obviously has to trigger a request for additional authentication.

But we are of course not only talking about end users accessing more or less public online services in different scenarios. In corporate environments, employees, partners and external workforce access enterprise resources located on-premises or deployed in diverse cloud or hybrid infrastructures. With the ever-changing landscape of user devices, they will want to access corporate applications and services from various types of devices, ranging from corporate notebooks to all kinds of personal devices, including mobile phones and tablets. Every organisation can define its own policies: whether individual types of access are permitted at all, and which rules apply to which type and criticality of functionality. The decision whether access can be granted at all, or whether e.g. a step-up authentication is required, can be made on the basis of a large set of information available at runtime. This information can include the general user group (employee, external workforce, partner, freelance sales partner and even customer), detailed identity data of the individual user, the accessing device (e.g. type, operating system, browser, versions), the type and security of the current network access (e.g. mobile access via cellular network, originating country, secured by VPN or not), local time and time zone, and many more attributes. With all this information available at runtime, it is clear that the decision for accessing the same resource by the same user can differ between a secured domestic connection from a corporate end-user device and a connection from an outdated Android device originating from an untrusted country without line encryption.
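
As a rough illustration of such a runtime decision, the sketch below maps a requested resource class and a handful of context attributes to an allow, step-up or deny outcome. The classes, trust levels and rules are invented examples, not a recommended policy:

```python
# Illustrative runtime access decision; resource classes, trust levels and
# rules are hypothetical examples, not a recommended policy.
REQUIRED_LEVEL = {"public": 0, "internal": 1, "confidential": 2}

def achieved_level(user_group, device_managed, trusted_network):
    level = 1 if user_group == "employee" else 0
    if device_managed:
        level += 1
    if not trusted_network:
        level -= 1          # unencrypted or untrusted network lowers trust
    return max(level, 0)

def access_decision(resource_class, user_group, device_managed, trusted_network):
    have = achieved_level(user_group, device_managed, trusted_network)
    need = REQUIRED_LEVEL[resource_class]
    if have >= need:
        return "allow"
    if have + 1 >= need:
        return "step-up"    # e.g. ask for a second authentication factor
    return "deny"

print(access_decision("confidential", "employee", True, True))    # allow
print(access_decision("confidential", "partner", False, False))   # deny
```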

The examples above clearly show the advantages of adaptive authentication in many different use case scenarios. Adaptive authentication provides both a) an improved user experience, by not requesting additional authentication whenever it is not required, and b) an adequate level of protection for user security and privacy whenever the context information and/or the criticality of the requested functionality demands it. Both are good reasons for you, whether as a provider of online services or as the person responsible for secure and modern corporate infrastructure, to consider implementing adaptive authentication soon.



Adaptive authentication explained

Feb 09, 2016 by Dave Kearns

To understand what this article is about it’s important that we have an agreement on what we mean when we use the term “adaptive authentication”. It isn’t a difficult concept, but it’s best if we’re all on the same page, so to speak.

First, the basics: authentication is the ceremony in which someone presents credentials in order to gain access to something. Typically and traditionally this is a username/password combination. But username/password is only one facet of one factor of authentication, and we usually speak of three possible factors, identified as:

  • Something you know (e.g., a password)
  • Something you have (e.g., a token such as an RSA SecurID fob)
  • Something you are (e.g., a biometric such as a fingerprint)

There are multiple facets to each of these, of course, such as the so-called “security questions” (mother’s maiden name, first pet’s name, city you were born in, etc.), which are part of the Something you know factor.

Beginning around 30 years ago, it was suggested that multi-factor authentication – using two of the three factors, or even all three – made for stronger security. Within the last ten years, on-line organizations (such as financial businesses) and even social networks (Google+, Facebook, etc.) have suggested users move to two-factor authentication.

While this is good practice, this multi-factor authentication is static. Every time you access the service you need to present the same two credentials in order to log in. It’s always the same. Once a hacker (usually through what’s called “phishing”) knows the two factors, your account is as open to them as if there were no security at all.

Within the past five years we at KuppingerCole have advocated moving to what we called “dynamic” authentication – authentication that could change “on the fly”. But because we advocated much more than a change in how the authentication credentials were established, we now call the technology “adaptive” authentication.

It’s called “adaptive” because it adapts to the circumstances at the time of the authentication ceremony and dynamically adjusts both the authentication factors and the facet(s) of the factors chosen. This is all done as part of the risk analysis of what we call the Adaptive Policy-based Access Management (APAM) system. It’s best to show an example of how this works.

Let’s say that the CFO of a company wishes to access the company’s financial data from his desktop PC in his office on a Monday afternoon. The default authentication is a username, password and hardware token. The CFO presents these, and is granted full access. Now let’s say another CFO of another company wishes to access that company’s financial data. But she’s not in the office, so she’s using a computer at an internet café on a Caribbean island where she’s vacationing. The access control system notes the “new” hardware’s footprint, its previously unknown IP address and the general location. Based on these (and other) context data from the transaction, the access control system asks for additional factors and facets for authentication – perhaps password, token, security questions and more. Even so, once the CFO presents these facets and factors she is only given limited read access to the data.
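
The following toy sketch illustrates how such a policy engine might escalate the required factors and facets as more context signals look unfamiliar. The factor names and rules are invented for illustration; real APAM products evaluate far richer context:

```python
# Toy factor-selection policy; factor names and escalation rules are invented
# for illustration and not taken from any real APAM product.
BASELINE = ["password", "hardware_token"]

def required_factors(known_device, known_ip, usual_location):
    factors = list(BASELINE)
    unknowns = sum(not flag for flag in (known_device, known_ip, usual_location))
    if unknowns >= 2:
        factors.append("security_questions")   # another facet of "something you know"
    if unknowns == 3:
        factors.append("out_of_band_code")     # an additional channel on top
    return factors

# CFO in the office on the registered desktop:
print(required_factors(True, True, True))      # ['password', 'hardware_token']
# CFO at an internet café abroad on an unknown machine:
print(required_factors(False, False, False))   # adds questions and an out-of-band code
```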

The authentication is dynamically changed and adapted to the circumstances. That’s what we’re discussing here.



Beyond Datacenter Micro-Segmentation – start thinking about Business Process Micro-Segmentation!

Feb 04, 2016 by Martin Kuppinger

Sometime last autumn I started researching the field of Micro-Segmentation, particularly as a consequence of attending a Unisys analyst event and, subsequently, VMworld Europe. Unisys talked a lot about their Stealth product, while at VMworld there was much talk about the VMware NSX product and its capabilities, including security by Micro-Segmentation.

The basic idea of Datacenter Micro-Segmentation, the most common approach to Micro-Segmentation, is to split the network into small (micro) segments for particular workloads, based on virtual networks with additional capabilities such as integrated firewalls, access control enforcement, etc.

Using Micro-Segmentation, there might even be multiple segments for a particular workload, such as the web tier, the application tier, and the database tier. This further strengthens security, because different access and firewall policies can be applied to the various segments. In virtualized environments, such segments can be created and managed far more easily than in physical environments with a multitude of disparate elements from switches to firewalls.
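
Conceptually, such per-tier segmentation boils down to policies that whitelist only the intended flows between segments. The sketch below is a simplified illustration; segment names, ports and rules are hypothetical and not tied to any specific product such as NSX or Stealth:

```python
# Simplified three-tier segmentation policy; names, ports and rules are
# hypothetical and not tied to any specific product.
SEGMENT_POLICY = {
    "web": {"allow_from": ["internet"], "ports": [443]},
    "app": {"allow_from": ["web"],      "ports": [8443]},
    "db":  {"allow_from": ["app"],      "ports": [5432]},
}

def is_flow_allowed(src_segment, dst_segment, port):
    dst = SEGMENT_POLICY.get(dst_segment)
    return bool(dst) and src_segment in dst["allow_from"] and port in dst["ports"]

print(is_flow_allowed("web", "app", 8443))   # True: the web tier may call the app tier
print(is_flow_allowed("web", "db", 5432))    # False: no direct path from web to database
```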

Obviously, by having small, well-protected segments with well-defined interfaces to other segments, security can be increased significantly. However, it is not only about datacenters.

The applications and services running in the datacenter are accessed by users. This might happen through fat-client applications or by using web interfaces; furthermore, we see a massive uptake in the use of APIs, not only by client-side apps but also by backend applications consuming and processing data from other backend services. In addition, there is a variety of services where, for example, data is stored or processed locally, starting with downloading documents from backend systems.

Obviously, not everything can be protected perfectly well. Data accessed through browsers is out of control once it is at the client – unless the client can become a part of the secure environment as well.

Anyway, there are more options – particularly within organizations with good control of everything within the perimeter and at least some level of control over the devices. Ideally, everything becomes protected across the entire business process, from the backend systems to the clients. Within that segmentation, other segments can exist, such as micro-segments at the backend. Such “Business Process Micro-Segmentation” does not stand in contrast to Datacenter Micro-Segmentation, but extends that concept.

From my perspective, we will need two major extensions for moving beyond Datacenter Micro-Segmentation to Business Process Micro-Segmentation. One is encryption. While there is limited need for encryption within the datacenter due to the technical approach of network virtualization (though you should not consider your datacenter 100% safe!), the client resides outside the datacenter. The minimal approach is protecting the transport by means such as TLS. More advanced encryption is available in solutions such as Unisys Stealth.

The other area for extension is policy management. When looking at the entire business process — and not only the datacenter part — protecting the clients by integrating areas like endpoint security into the policy becomes mandatory.

Neither Business Process Micro-Segmentation nor Datacenter Micro-Segmentation will solve all of our Information Security challenges. Both are only building blocks within a comprehensive Information Security strategy. In my opinion, thinking beyond Datacenter Micro-Segmentation towards Business Process Micro-Segmentation is also a good example of the fact that there is not a “holy grail” for Information Security. Once organizations start sharing information with external parties beyond their perimeter, other technologies such as Information Rights Management – where documents are encrypted and distributed along with the access controls that are subsequently enforced by client-side applications – come into play.

While there is value in Datacenter Micro-Segmentation, it is clearly only a piece of a larger concept – in particular because the traditional perimeter no longer exists, which also makes it more difficult to define the segments within the datacenter. Once workloads are flexibly distributed between various datacenters in the Cloud and on-premises, pure Datacenter Micro-Segmentation reaches its limits anyway.



AWS IoT Suite

Jan 20, 2016 by Alexei Balaganski

After an “extended holiday season” (which for me included spending a vacation in Siberia and then desperately trying to get back into shape) it’s finally time to resume blogging. And the topic for today is the cloud platform for IoT services from AWS, which went out of beta in December. Ok, I know it’s been a month already, but better late than never, right?

As already mentioned earlier, the very definition of the Internet of Things is far too broad, and people tend to lump many different types of devices under this term. So, if your idea of the Internet of Things means controlling your thermostat or your car from your mobile phone, the new service from AWS is probably not what you need. If, however, your IoT includes thousands or even millions of sensors generating massive amounts of data that need to be collected, processed by complex rules and finally stored somewhere, then look no further, especially if you already have your backend services in the AWS cloud.

In fact, with AWS being the largest cloud provider, it’s safe to assume that its backend services have already been used for quite a few IoT projects. Until now, however, such projects had to rely on third-party middleware for connecting their “things” to AWS services. Now the company has closed the gap by offering its own managed platform for interacting with IoT devices and processing the data collected from them. Typically for AWS, the solution follows a no-frills, no-nonsense approach, offering native integrations with existing AWS services, a rich set of SDKs and development tools, and aggressive pricing. In addition, they are bringing in a number of hardware vendors with starter kits that can help quickly implement a prototype for your new IoT project. And, of course, with the amount of computing resources at hand, they can safely claim to be able to manage billions of devices and trillions of messages.

The main components of the new platform are the following:

The Device Gateway supports low-latency, bi-directional communication between IoT devices and cloud backends. AWS supports both standard HTTP and the much more resource-efficient MQTT messaging protocol, both secured by TLS. Strong authentication and fine-grained authorization are provided by the familiar AWS IAM services, with a number of simplified APIs available.
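
As a minimal illustration of the MQTT-over-TLS path, the sketch below uses the generic paho-mqtt client to publish a telemetry message with mutual TLS. The endpoint, certificate file names and topic are placeholders; in practice you would obtain them from your own AWS IoT account (or use the AWS IoT device SDKs instead):

```python
# Publishing a telemetry message to the AWS IoT Device Gateway over MQTT with
# mutual TLS, using the generic paho-mqtt client (1.x API). The endpoint,
# certificate file names and topic are placeholders for your own resources.
import json
import paho.mqtt.client as mqtt

ENDPOINT = "your-account-endpoint.iot.eu-west-1.amazonaws.com"  # placeholder

client = mqtt.Client(client_id="sensor-42")
client.tls_set(ca_certs="root-ca.pem",       # Amazon root CA certificate
               certfile="device-cert.pem",   # X.509 device certificate
               keyfile="device-key.pem")     # device private key
client.connect(ENDPOINT, port=8883)
client.loop_start()

client.publish("sensors/sensor-42/telemetry",
               json.dumps({"temperature": 21.5}), qos=1)
```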

The Device Registry keeps track of all devices currently or potentially connected to the AWS IoT infrastructure. It provides various management functions like support and maintenance or firmware distribution. Besides that, the registry maintains Device Shadows – virtual representations of IoT devices, which may be only intermittently connected to the Internet. This functionality allows cloud and mobile apps to access all devices using a universal API, masking all the underlying communication and connectivity issues.

The Rules Engine enables continuous processing of data sent by IoT devices. It supports a large number of rules for filtering and routing the data to AWS services like Lambda, DynamoDB or S3 for processing, analytics and storage. It can also apply various transformations on the fly, including math, string, crypto and other operations or even call external API endpoints.
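
A rule is essentially an SQL-like statement plus one or more actions. The hedged sketch below shows how such a rule might be created with boto3; the rule name, topic filter, threshold and Lambda ARN are placeholders for your own resources:

```python
# Hedged sketch of creating a rule with boto3: route readings above a threshold
# to a Lambda function. Rule name, topic filter, threshold and Lambda ARN are
# placeholders for your own resources.
import boto3

iot = boto3.client("iot", region_name="eu-west-1")

iot.create_topic_rule(
    ruleName="HighTemperatureAlert",
    topicRulePayload={
        "sql": "SELECT temperature FROM 'sensors/+/telemetry' WHERE temperature > 60",
        "actions": [{
            "lambda": {
                "functionArn": "arn:aws:lambda:eu-west-1:123456789012:function:alert"
            }
        }],
        "ruleDisabled": False,
    },
)
```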

A number of SDKs are provided including a C SDK for embedded systems, a node.js SDK for Linux, an Arduino library and mobile SDKs for iOS and Android. Combined with a number of “official” hardware kits available to play with, this ensures that developers can quickly start working on an IoT project of almost any kind.

Obviously, one has to mention that Amazon isn’t the first cloud provider to offer an IoT solution – Microsoft announced their Azure IoT Suite earlier in 2015, and IBM has their own Internet of Things Foundation program. However, each vendor has a unique approach towards addressing various IoT integration issues. The new solution from AWS, with its strong focus on existing standard protocols and unique features like device shadows, not only looks compelling to existing AWS customers, but will definitely kickstart quite a few new large-scale IoT projects. On the Amazon cloud, of course.



Secured by Design: The smart streets of San Francisco

Jan 15, 2016 by Martin Kuppinger

On April 18th, 1906, an earthquake and the fires that followed destroyed nearly three quarters of San Francisco. Around 3,000 people lost their lives. Right up to the present day, many other, less severe tremors have followed. The danger of another catastrophe can’t be ignored. In a city like San Francisco, however wonderful it might be to live there, people always have to be aware that their whole world can change in an instant. Now the Internet of Things (IoT) can help to make alarm systems better. People in this awesome city can at least be sure that the mayor and his office staff do their best to keep them safe and secure in all respects.

Not only that: with the help of the Internet of Things (IoT) they’re also looking for new ways to make the lives of the citizens more convenient. That became clear to me when I saw ForgeRock’s presentation about their IoT and Identity projects in San Francisco. I noticed with pleasure that Lasse Andresen, ForgeRock’s CTO and Founder, confirmed what I have been saying for quite some time: security and privacy must not be an afterthought. Designed in properly from the start, both do not hinder successful new business models but actually enable them. In IoT, security and privacy are integral elements. They lead to more agility and less risk.

Andresen says in the presentation that Identity, Security and Privacy are core to IoT: “It’s kind of what makes IoT work or not work. Or making big data valuable or not valuable.” San Francisco is an absolutely great example of what that means in practice. Everything – “every thing” – in this huge city shall have its own unique identity, from the utility meter to the traffic lights and parking spaces to the police, firefighters and ambulances. This allows fast, secure and orderly action in case of emergency. Because of their identities, and with geolocation, the current position of each vehicle is always exactly known to the emergency coordinators. Firefighters identify themselves with digital key cards at the scene to show that they are authorized to be there. Thus everything and everyone becomes connected with each other: people, things and services, with identity as the glue.

Identity information enables business models that, e.g., improve life in the city. The ForgeRock demonstration shows promising examples such as optimizing traffic flow and road planning with big data, street lights that reduce power consumption by turning on and off automatically, smart parking that allows the driver to reserve a space online in advance, combined with demand-based pricing of parking spaces and, last but not least, live optimization of service routes.

The ForgeRock solution matches the attributes and characteristics of human users to those of things, devices and apps, collects all the notifications in a big data repository and then flexibly manages the relationships between all entities – people and things – from this central authoritative source. Depending on her or his role, each user is carefully provisioned with access to certain devices as well as certain rights and privileges. That is why identity is a prerequisite for secure relationships. Things are just another channel demanding access to the internet. It has to be clear what they are allowed to do: may item A send sensitive data to a certain server B? If so, does the information have to be encrypted? Without a concept for identities, their relationships and the management of their access, there are too many obstacles to successfully changing business models and regulations.

Besides the questions about security and privacy, the lack of standards has long been the biggest challenge for a fully functioning IoT. Manifold platforms, various protocols and many different APIs have made overall integration of IoT systems problematic. Yes, there are even many different “standards”. However, with User-Managed Access (UMA) a new standard has eventually evolved that takes care of the management of access rights. With UMA, millions of users can manage their own access rights and keep full control over their own data without handing it over to the service provider. They alone decide which information they share with others. While the resources may be stored on several different servers, a central authorization server ensures that the rules laid down by the owner are reliably applied. Any enterprise that adopts UMA early now has the chance to build a new, strong and long-lasting relationship with customers, built on security and privacy by design.



Stack creep - from the network layer to the application layer

Jan 12, 2016 by Graham Williamson

Last year saw unprecedented interest in the protection of corporate data. With several high-profile losses of intellectual property, organisations have started looking for a better way.

For the past 30 years the bastion against data loss has been network devices. We have relied on routers, switches and firewalls to protect our classified data and ensure it’s not accessed by unauthorised persons. Databases were housed on protected sub-nets to which we could restrict access on the basis of IP address, a Kerberos ticket or AD group membership.

But there are a couple of reasons that this approach is no longer sufficient. Firstly, with the relentless march of technology the network perimeter is increasingly “fuzzy”. No longer can we rely on secure documents being used and stored on the corporate network. Increasingly we must share data with business partners and send documents external to the corporate network. We need to store documents on Cloud storage devices and approve external collaborators to access, edit, print and save our documents as part of our company’s business processes.

Secondly, we are increasingly being required to support mobile devices. We can no longer rely on end-point devices that we can control with a standard operating environment and a federated login. We must now support tablets and smartphone devices that may be used to access our protected documents from public spaces and at unconventional times of the day.

As interest in a more sophisticated way to protect documents has risen, so have the available solutions. We are experiencing unprecedented interest in Information Rights Management (IRM) whereby a user’s permission to access or modify a document is validated at the time access is requested. When a document is created the author, or a corporate policy, classifies the document appropriately to control who can read it, edit it, save it or print it. IRM can also be used to limit the number of downloads of a document or time-limit access rights. Most solutions in this space support AD Rights Management and Azure Rights Management; some adopt their own information rights management solution with end-point clients that manage external storage or emailing.

Before selecting a solution companies should understand their needs. A corporate-wide secure document repository solution for company staff is vastly different from a high-security development project team sharing protected documents with collaboration partners external to the company. A CIA approach to understanding requirements is appropriate:

  • Confidentiality – keep secret. Typically encryption is deployed to ensure data at rest is protected from access by unapproved persons. Solutions to this requirement range from strong encryption of document repositories to a rights management approach requiring document classification mechanisms and IRM-enabled client software.
  • Integrity – keep accurate. Maintaining a document’s integrity typically involves a digital signature and key management in order to sign and verify signatures (a minimal signing sketch follows this list). Rights management can also be employed to ensure that a document has not been altered.
  • Availability – secure sharing. Supporting business processes by making protected documents available to business partners is at the core of secure information sharing. Persons wanting access to confidential information should not have to go through a complex or time-consuming procedure in order to gain access to the required data. Rights management can provide a secure way to control permissions to protected documents while making appropriate data available for business purposes.
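
The integrity point above can be illustrated in a few lines: sign the document with a private key and verify the signature with the corresponding public key. This is a minimal sketch using the Python cryptography package; real deployments add certificate and key management around it:

```python
# Minimal signing sketch using the Python "cryptography" package; real
# deployments add certificate and key management around this.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

document = b"Q3 financial statement, final version"
signature = private_key.sign(document, pss, hashes.SHA256())

try:
    public_key.verify(signature, document, pss, hashes.SHA256())
    print("document unchanged")
except InvalidSignature:
    print("document was altered")
```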

Never has there been a better time to put in place a secure data sharing infrastructure that leverages an organisation’s identity management environment to protect corporate IP, while at the same time enhancing business process integration.



Information Rights Management explained

Jan 12, 2016 by Alexei Balaganski

With the amount of digital assets a modern company has to deal with growing exponentially, the need to access them at any time, from any place, across various devices and platforms has become a critical factor for business success. And this does not apply just to employees – to stay competitive, modern businesses must be increasingly connected to their business partners, suppliers, current and future customers and even smart devices (or things). New digital businesses therefore have to be agile and connected.

Unsurprisingly, the demand for solutions that provide strongly protected storage, fine-grained access control and secure sharing of sensitive digital information is extremely high nowadays, with vendors rushing to bring their various solutions to market. Of course, no single information sharing solution can possibly address all the different and often conflicting requirements of different organizations and industries, and the sheer number and diversity of such solutions is a strong indicator of this. Vendors may decide to support just certain types of storage or document formats, concentrate on solving a specific pain point shared by many companies, such as enabling mobile access, or design their solutions for specific verticals only.

The traditional approach to securing sensitive information is to store it in a secured repository, on-premises or in the cloud. By combining strong encryption with customer-managed encryption keys or, in the most extreme cases, even implementing the Zero Knowledge Encryption principle, vendors are able to address even the strictest security and compliance requirements. However, as soon as a document leaves the repository, traditional solutions are no longer able to ensure its integrity or to prevent unauthorized access to it.

Information Rights Management (IRM) offers a completely different, holistic approach towards secure information sharing. Evolving from earlier Digital Rights Management technologies, the underlying principle behind IRM is data-centric security. Essentially, each document is wrapped in a tiny secured container and has its own access policy embedded directly in it. Each time an application needs to open, modify or otherwise access the document, it needs to validate user permissions with a central authority. If those permissions are changed or revoked, this will be immediately applied to the document regardless of its current location. The central IRM authority also maintains a complete audit trail of document accesses.
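
The following sketch is a purely conceptual illustration of that data-centric model: the document travels inside an encrypted container, and every open operation is checked against a central policy authority at access time. All class and function names are invented; no real IRM product works exactly like this:

```python
# Conceptual illustration only: the document travels inside an encrypted
# container, and every open is checked against a central policy authority.
# Class and function names are invented; no real IRM product works exactly like this.
from cryptography.fernet import Fernet

class PolicyAuthority:
    """Stands in for the central IRM service that evaluates permissions."""
    def __init__(self):
        self._grants = {}                        # (doc_id, user) -> set of rights
    def grant(self, doc_id, user, rights):
        self._grants[(doc_id, user)] = set(rights)
    def revoke(self, doc_id, user):
        self._grants.pop((doc_id, user), None)
    def is_allowed(self, doc_id, user, right):
        return right in self._grants.get((doc_id, user), set())

def wrap(doc_id, content, key):
    return {"doc_id": doc_id, "ciphertext": Fernet(key).encrypt(content)}

def open_document(container, key, user, authority):
    # The check happens at access time, so a revocation takes effect immediately.
    if not authority.is_allowed(container["doc_id"], user, "read"):
        raise PermissionError("access revoked or never granted")
    return Fernet(key).decrypt(container["ciphertext"])

authority = PolicyAuthority()
key = Fernet.generate_key()
container = wrap("contract-17", b"confidential terms", key)
authority.grant("contract-17", "alice", {"read"})
print(open_document(container, key, "alice", authority))   # b'confidential terms'
authority.revoke("contract-17", "alice")
# open_document(container, key, "alice", authority)        # would now raise PermissionError
```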

Thus, IRM is the only approach that can protect sensitive data at rest, in motion, and in use. In the post-firewall era, this approach is fundamentally more future-proof, flexible and secure than any combination of separate technologies addressing different stages of the information lifecycle. However, it has one fatal flaw: IRM only works without impeding productivity if your applications support it. Although IRM solutions have come a long way from complicated on-premises products towards completely managed cloud-based services, their adoption rate is still quite low. Probably the biggest reason for that is the lack of interoperability between different IRM implementations, but arguably more harmful is the lack of general awareness that such solutions even exist! Recently, however, the situation has changed, with several notable vendors increasing their efforts to market their IRM-based solutions.

One of the pioneers and certainly the largest such vendor is Microsoft. With the launch of their cloud-based Azure Rights Management services in 2014, Microsoft finally made their IRM solution affordable not just for large enterprises. Naturally, Microsoft’s IRM is natively supported by all Microsoft Office document formats and applications. PDF documents, images or text files are natively supported as well, and generic file encapsulation into a special container format is available for all other document types. Ease of deployment, flexibility and support across various device platforms, on-premise and cloud services make Azure RMS the most comprehensive IRM solution in the market today.

However, other vendors are able to compete in this field quite successfully as well either by adding IRM functionality into their existing platforms or by concentrating on delivering more secure, more comprehensive or even more convenient solutions to address specific customer needs.

A notable example of the former is Intralinks. In 2014, the company acquired docTrackr, a French vendor with an innovative plugin-free IRM technology. By integrating docTrackr into their secure enterprise collaboration platform VIA, Intralinks is now able to offer seamless document protection and policy management to their existing customers. Another interesting solution is Seclore FileSecure, which provides a universal storage- and transport-neutral IRM extension for existing document repositories.

Among the vendors that offer their own IRM implementations one can name Covertix, which offers a broad portfolio of data protection solutions with a focus on strong encryption and comprehensive access control across multiple platforms and storage services. On the other end of the spectrum one can find vendors like Prot-On, which focus more on ease of use and a seamless experience, providing their own EU-based cloud service to address local privacy regulations.

For more in-depth information about leading vendors and products in the file sharing and collaboration market please refer to KuppingerCole’s Leadership Compass on Secure Information Sharing.



ISO/IEC 27017: was it worth the wait?

Jan 05, 2016 by Mike Small

On November 30th, 2015 the final version of the standard ISO/IEC 27017 was published.  This standard provides guidelines for information security controls applicable to the provision and use of cloud services.  This standard has been some time in gestation and was first released as a draft in spring 2015.  Has the wait been worth it?  In my opinion yes.

The gold standard for information security management is ISO/IEC 27001, together with the guidance given in ISO/IEC 27002. These standards remain the foundation, but the guidelines are largely written on the assumption that an organization processes its own information. The increasing adoption of managed IT and cloud services, where responsibility for security is shared, is challenging this assumption. This is not to say that these standards and guidelines are not applicable to the cloud; rather, they need interpretation in a situation where the information is being processed externally. The ISO/IEC 27017 and ISO/IEC 27018 standards provide guidance to deal with this.

ISO/IEC 27018, which was published in 2014, establishes controls and guidelines for measures to protect Personally Identifiable Information in the public cloud computing environment. The guidelines are based on those specified in ISO/IEC 27002, with control objectives extended to include the requirements needed to satisfy the privacy principles in ISO/IEC 29100. These are easily mapped onto the existing EU privacy principles. This standard is extremely useful in helping an organization assure compliance when using a public cloud service to process personally identifiable information. Under these circumstances the cloud customer is the Data Controller and, under current EU laws, remains responsible for processing breaches by the Data Processor. To provide this level of assurance, some cloud service providers have obtained independent certification of their compliance with this standard.

The new ISO/IEC 27017 provides guidance that is much more widely applicable to the use of cloud services.  Specific guidance is provided for 37 of the existing ISO/IEC 27002 controls; separate but complementary guidance is given for the cloud service customer and the cloud service provider.  This emphasizes the shared responsibility for security of cloud services.  This includes the need for the cloud customer to have policies for the use of cloud services and for the cloud service provider to provide information to the customer.

For example, as regards restricting access (ISO 27001 control A.9.4.1) the guidance is:

  • The cloud service customer should ensure that access to information in the cloud service can be restricted in accordance with its access control policy and that such restrictions are realized.
  • The cloud service provider should provide access controls that allow the cloud service customer to restrict access to its cloud services, its cloud service functions and the cloud service customer data maintained in the service.

In addition, the standard includes seven new controls that are relevant to cloud services. These controls are numbered to fit with the relevant existing ISO/IEC 27002 controls and cover:

  • Shared roles and responsibilities within a cloud computing environment
  • Removal and return of cloud service customer assets
  • Segregation in virtual computing environments
  • Virtual machine hardening
  • Administrator's operational security
  • Monitoring of Cloud Services
  • Alignment of security management for virtual and physical networks

 

In summary, ISO/IEC 27017 provides very useful guidance, and KuppingerCole recommends that it be followed by both cloud customers and cloud service providers. While it is helpful for cloud service providers to have independent certification that they comply with this standard, this does not remove the responsibility from the customer for ensuring that they also follow the guidance.

KuppingerCole has conducted extensive research into cloud service security and compliance and into cloud service providers, as well as engaging with cloud service customers. This research has led to a deep understanding of the real risks around the use of cloud services and of how to approach these risks to safely gain the potential benefits. We have created services, workshops and tools designed to help organizations manage their adoption of cloud services in a secure and compliant manner while preserving the advantages that these kinds of IT service bring.



Why Distributed Public Ledgers such as Blockchain will not solve the identification and thus the authentication problem

Dec 17, 2015 by Martin Kuppinger

There is a lot of talk about Blockchain and, more generally, Distributed Public Ledgers (DPLs) these days. Some try to position DPLs as a means for better identification and, in consequence, authentication. Unfortunately, this will not really work. We might see a few approaches for stronger or “better” identification and authentication, but no real solution. Not even by DPLs, which I see as the most disruptive innovation in Information Technology in a very, very long time.

Identification is the act of finding out whether someone (or something) is really the person (or thing) he (it) claims to be. It is about knowing whether the person claiming to be Martin Kuppinger is really Martin Kuppinger or in fact someone else.

Authentication, in contrast, is a proof that you have a proof, such as a key, a password, a passport, or whatever. The quality of authentication depends on the one hand on the quality of identification (used to obtain the proof) and on the other hand on aspects such as protection against forgery and the strength of the authentication mechanism itself.

Identification is, basically, the challenge in the enrollment process of an authenticator. There are various ways of doing it. People might be identified by their DNA or fingerprints – which works as long as you know whom the DNA or fingerprint belongs to. But even then, you might not have the real name of that person. People might be identified by showing their ID cards or passports – which works well unless they use forged ID cards or passports. People might be identified by linking profiles of social networks together – which doesn’t help much, to be honest. They might use fake profiles or they might use fake names in real profiles. There is no easy solution for identification.

In the end, it is about trust: do we trust the identification performed when rolling out authenticators enough to trust those authenticators later on?

Authentication can be performed with a variety of mechanisms. Again, this is about trust: How much do we trust a certain authenticator? However, authentication does not identify you. It proves that you know the username and password; that you possess a token; or that someone has access to your fingerprints. Some approaches are more trustworthy; others are less trustworthy.

So why don’t DPLs such as Blockchain solve the challenge of identification and authentication? For identification, this is obvious. They might provide a better proof that an ID is linked to various social media profiles (such as with Onename), but they don’t solve the underlying identification challenge.

DPLs also don’t solve the authentication issue. If you have such an ID, it either must be unlocked in some way (e.g. by password, in the worst case) or bound to something (e.g. a device ID). That is the same challenge as we have today.

DPLs can help improve trust, e.g. by proving that the same social media profiles are still linked. They can support non-repudiation, which is an essential element. The trust level will increase with a growing number of parties participating in a DPL. But DPLs can’t solve the underlying challenges of identification and authentication. Simply said, technology will never know exactly who someone is.



Cyber Security: Why Machine Learning is Not Enough

Dec 09, 2015 by Martin Kuppinger

Currently, there is a lot of talk about new analytical approaches in the field of cyber security. Anomaly detection and behavioral analytics are some of the overarching trends along with RTSI (Real Time Security Intelligence), which combines advanced analytical approaches with established concepts such as SIEM (Security Information and Event Management).

Behind all these changes and other new concepts, we find a number of buzzwords such as pattern-matching algorithms, predictive analytics, or machine learning. Aside from the fact that such terms frequently aren’t used correctly and precisely, some of the concepts have limitations by design, e.g. machine learning.

Machine learning implies that the “machine” (a piece of software) is able to “learn”. In effect this means that the machine is able to improve its results over time by analyzing the effect of previous actions and then adjusting its future actions accordingly.

One of the challenges with cyber security is the fact that there are continuously new attack vectors. Some of them are just variants of established patterns; some of them are entirely new. In an ideal world, a system is able to recognize unknown vectors. Machine learning per se doesn’t: the concept is based on learning from things that have gone wrong in the past.

This is different from anomaly detection, which identifies unknown or changing patterns. Here, something new is identified as an anomaly.
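
A small, hypothetical example makes the distinction tangible: an unsupervised anomaly detector can flag a pattern it has never seen before, without any labelled attack data. The features and values below are made up purely for illustration:

```python
# Made-up example: an unsupervised anomaly detector flags a pattern it has
# never seen, without any labelled attack data. Features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per login event: [failed attempts before success, seconds to log in]
normal_logins = np.array([[0, 8.0], [1, 12.0], [0, 6.5], [0, 9.2], [1, 11.0],
                          [0, 7.8], [0, 10.1], [1, 9.5], [0, 8.7], [0, 7.2]])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

new_events = np.array([[0, 9.0],     # looks like the usual behaviour
                       [6, 0.3]])    # many failures, then a near-instant login
print(detector.predict(new_events))  # 1 = normal, -1 = anomaly
```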

Interestingly, some of the technologies whose marketing talks about “machine learning” in fact do a lot more than ex-post-facto machine learning. Frequently, it is not a matter of technology but of the wrong use of buzzwords in marketing. In any case, customers should be careful about buzzwords: ask the vendor what is really meant by them. And ask yourself whether the information provided by the vendor really is valid and solves your challenge.

