
KuppingerCole Blog

The need for an "integrated identity" within hybrid cloud infrastructures

Jul 02, 2015 by Matthias Reinwarth

Yes, you might have heard it in many places: "Cloud is the new normal". And this is surely true for many modern organisations, especially start-ups or companies doing all or parts of their native business within the cloud. But for many other organisations this new normal is only one half of normal.

A lot of enterprises currently going through the process of digital transformation are maintaining their own infrastructure on premises and are looking into extending their business into the cloud. This might be done for various reasons, for example to create new infrastructure more easily and scale it rapidly, or to move costly infrastructure that is not mission-critical and therefore does not need to run within the traditional organisational perimeter.

For many organisations it is simply not an option to move completely to the cloud, for various good reasons, including the protection of intellectual property within the traditional infrastructure or the necessity to maintain legacy infrastructure which is business-critical. For this type of enterprise – typically large, with a long IT history, and often operating in highly regulated sectors – the future infrastructure paradigm has to be the hybrid cloud, at least for the near to medium term.

Cloud service providers are required to offer standardized technological approaches for this type of customer. A seamless, strategic approach to extending the existing on-premises infrastructure into the cloud is an important prerequisite for these customers. This is true for the underlying network connectivity, and it is especially true for the administration, operation and security aspects of modern IT infrastructures.

For every company that already has a well-defined IAM/IAG infrastructure and the relevant maintenance and governance processes in place, it is essential that Identity Management for and within the cloud is well integrated into the existing processes. Many successful corporate IAM systems build upon the fact that enterprise-internal data silos have been broken up and integrated into an overall Identity and Access Management system. For the maintenance of the newly designed cloud infrastructure it obviously does not make any sense to create a new silo of identity information for the cloud. Maintaining technical and business accounts for cloud usage is, in the end, a traditional Identity Management task. Designing the appropriate entitlement structure and assigning the appropriate access rights to the right people within the cloud, while adhering to best practices like the least-privilege principle, is, in the end, a traditional Access Management task. Defining, implementing and enforcing appropriate processes to control and govern the access rights assigned to identities across a hybrid infrastructure are, in the end, traditional access governance and access intelligence tasks.

Providers of traditional, on-premises IAM infrastructures and cloud service providers alike have to support this class of customer organisations in fulfilling their hybrid security and hence their IAM/IAG needs. CSPs like Amazon Web Services embrace this hybrid market by providing overall strategies for hybrid cloud infrastructures, including a suitable identity, access and security approach. The implementation of a concept for an "integrated identity" across all platforms, be they cloud or on premises, is therefore a fundamental requirement. Leveraging mechanisms like inbound and outbound federation, deploying open standards like SAML 2.0, using the APIs that provide integrative access to the AWS IAM/IAG functionality, and integrating existing policies into the AWS IAM policies implemented as JSON files are important steps towards this "integrated identity". On the access intelligence and access governance side, the AWS CloudTrail component can provide detailed logs, down to the level of individual API calls per user, for the existing cloud infrastructure. Such extensive logs can then be evaluated by an existing Access Intelligence or Real-Time Security Intelligence (RTSI) solution, or by deploying AWS mechanisms such as Lambda.
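To make this a bit more concrete, here is a minimal sketch of two of the building blocks mentioned above – a least-privilege IAM policy expressed as a JSON document, and a per-user extraction of API-call events from CloudTrail. It is only an illustration: the policy name, bucket and user are made up, and it assumes boto3 with suitable AWS credentials is available.

```python
import json
import boto3

# Illustrative least-privilege policy, expressed in the JSON document format
# used by AWS IAM (names and resources are examples only).
read_only_reports_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-reports",
                     "arn:aws:s3:::example-reports/*"]
    }]
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleReadOnlyReports",               # illustrative name
    PolicyDocument=json.dumps(read_only_reports_policy)
)

# Access intelligence side: pull the per-user API-call trail from CloudTrail,
# ready to be fed into an existing access governance or RTSI solution.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
    MaxResults=50
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```

In a real deployment the policy document would be derived from the existing entitlement model rather than written ad hoc, and the CloudTrail output would be shipped continuously into the governance tooling rather than queried interactively.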

It is obvious that these are "only" building blocks for solutions, not a fully designed solution architecture. But they bring us one step closer to the design and implementation of an appropriate solution for each individual enterprise. Covering all relevant aspects of security and IT GRC inside and outside the cloud will be one key challenge in the deployment of cloud infrastructures for this type of organisation.

Hybrid architectures might not be the final target architecture for some organisations, but for the coming years they will form an important deployment scenario for many large organisations. Vendors and implementation partners alike have to make sure that easily deployable, standardised mechanisms are in place to extend an existing on-premises IAM seamlessly into the cloud, providing the required levels of security and governance. And since we are talking about standards and integration: this will have to work seamlessly for other, probably upcoming types of architecture as well, e.g. for those where the step towards cloud-based IAM systems such as Azure Active Directory has already been taken.



From Hybrid Cloud to Standard IT?

Jun 18, 2015 by Mike Small

I have recently heard from a number of cloud service providers (CSPs) about their support for a "hybrid" cloud. What is the hybrid cloud, and why is it important? What enterprise customers are looking for is a "Standard IT" that would allow them to deploy their applications flexibly wherever is best. The Hybrid Cloud concept goes some way towards this.

There is still some confusion about the terminology that surrounds cloud computing, so let us go back to basics. The generally accepted definition of cloud terminology is found in NIST SP 800-145. According to this there are three service models and four deployment models. The service models are IaaS, PaaS and SaaS. The four deployment models for cloud computing are: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud. So "Hybrid" relates to the way cloud services are deployed. The NIST definition of the Hybrid Cloud is:

“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”

However, sometimes "Hybrid" is used to describe a cloud strategy – meaning that the organization using the cloud will use cloud services for some kinds of application but not for others. This is a perfectly reasonable strategy, but not quite in line with the above definition, so I refer to it as a Hybrid Cloud Strategy.

In fact this leads us on to the reality for most enterprises: the cloud is just another way of obtaining some of their IT services. Cloud services may be the ideal solution for development because of the speed with which they can be obtained. They may be good for customer interaction services because of their scalability. They may be the best way to perform data analytics needing the occasional burst of very high performance computing. Hence, to the enterprise, the cloud becomes yet another complexity added to their already complex IT environment.

So the CSPs have recognised that, in order to tempt enterprises to use their cloud services, they need to recognise this complexity challenge that enterprises face and provide help to solve it. So the "Hybrid" cloud that will be attractive to enterprises needs to:

* Enable the customer to easily migrate some parts of their workload and data to a cloud service, since some data may be required to remain on premises for compliance or audit reasons.

* Orchestrate the end-to-end processing, which may involve on-premises systems as well as services from other cloud providers.

* Allow the customer to assure end-to-end security and compliance for their workload.

When you look at these requirements it becomes clear that standards are going to be a key component to allow this degree of flexibility and interoperability. The standards needed go beyond the support for Hypervisors, Operating Systems, Databases and middleware to include the deployment, management and security of workloads in a common way across on-premises and cloud deployments as well as between cloud services from different vendors.

There is no clear winner among the standards yet – although OpenStack has wide support, including from IBM, HP and Rackspace – but one of the challenges is that vendors offer versions of it with their own proprietary additions. Other important vendors, including AWS, Microsoft and VMWare, have their own proprietary offerings that they would like customers to adopt. So the game is not over yet, but the industry should recognize that the real requirement is for a "Standard IT" that can easily be deployed in whatever way is most appropriate at any given time.



Why Cybersecurity and Politics Just Don’t Mix Well

Jun 12, 2015 by Alexei Balaganski

With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.

Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of macabre and comedy is so potent that it will make every security expert cringe. Let's just have a look at a few of the most recent examples.

After the notorious hack of Sony Pictures Entertainment last November, which was supposedly carried out by a group of hackers demanding that Sony not release a comedy movie about a plot to assassinate Kim Jong-Un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was strongly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.

Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known case of a cyber-attack more or less reliably attributed to a state agency so far is Stuxnet, which after several years of research was found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an insider job with no relation to North Korea at all. Fortunately, cool heads in the US military have prevailed, but the thought that the next such attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…

Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity ‘solutions’ the US and UK governments have come up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding government capabilities of spying on their own citizens.

Unfortunately, just like any other technology plan devised by politicians, it won't just fail to work, it will have disastrous consequences for society as a whole: ruining people's privacy, making every company's IT infrastructure more vulnerable to hacking attacks (exploiting the same government-mandated backdoors), blocking a significant part of academic research, not to mention completely destroying businesses like security software vendors or cloud service providers. Sadly, even in Germany, the country where privacy is considered an almost sacred right, the government is engaged in similar activities.

Speaking of Germany, the latest, somewhat more lighthearted example of politicians' inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany's Federal Office for Information Security has come up with a great idea. Citing concerns about mysterious Russian hackers still lurking in the network, it has been announced that the existing infrastructure, including over 20,000 computers, has to be completely replaced. Leaving aside the obvious question – are the same people that designed the old network really able to come up with a more secure one this time? – one still cannot but wonder whether the millions needed for such an upgrade could be better spent elsewhere. In fact, my first thought after reading the news was of President Erdogan's new palace in Turkey. Apparently, he just had to move into a new 1,150-room presidential palace simply because the old one was infested by cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.

Still, any security expert cannot but continue asking more specific questions. Was there an adequate incident and breach response strategy in place? Has there been a training program for user security awareness? Were the most modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget for hiring security experts with adequate qualifications for running such a critical environment?

Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer to most of these questions would be "no". German government agencies are also known for being quite frugal with regard to IT salaries, so the best experts inevitably go elsewhere.

Another question I cannot help thinking about: what if the hackers utilized one of the zero-day exploits that the German intelligence agency BND is known to have purchased for its own covert operations? That would be a perfect example of "karmic justice".

Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings from past events can also provide a good introduction into relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!



Who will become the Google, Facebook or Apple of Life Management Platforms?

Jun 09, 2015 by Dave Kearns

A Life Management Platform (LMP) allows individuals to access all relevant information from their daily life and manage its lifecycle, in particular data that is sensitive and typically paper-bound today, like bank account information, insurance information, health information, or the key number of their car.

Three years ago, at EIC 2012, one of the major topics was Life Management Platforms (LMPs), which was described as “a concept which goes well beyond the limited reach of most of today’s Personal Data Stores and Personal Clouds. It will fundamentally affect the way individuals share personal data and thus will greatly influence social networks, CRM (Customer Relationship Management), eGovernment, and many other areas.”

In talking about LMPs, Martin Kuppinger wrote: “Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. A key concept is “informed pull” which allows consuming information from other parties, neither violating the interest of the individual for protecting his information nor the interest of the related party/parties.” (Advisory Note: Life Management Platforms: Control and Privacy for Personal Data - 70608)

It’s taken longer than we thought, but the fundamental principle that a person should have direct control of the information about themselves is finally taking hold. In particular, the work of the User Managed Access (UMA) group through the Kantara Initiative should be noted.

Fueled by the rise of cloud services (especially personal, public cloud services) and the explosive growth of the Internet of Things (IoT), all of which leads to the concept we've identified as the Internet of Everything and Everyone (IoEE), Life Management Platforms – although not known by that name – are beginning to take their first, hesitant baby steps with everyone from the tech guru to the person in the street. The "platform" tends to be a smart mobile device with a myriad of applications and services (known as "apps"), but the bottom line is that the data, the information, is, at least nominally, under the control of the person using that platform. And the real platform is the cloud-based services, fed and fueled by public, standard Application Programming Interfaces (APIs), which provide the data for that mobile device everyone is using.

Social media, too, has had an effect. Using Facebook login, for example, to access other services, people are learning to look closely at what those services are requesting ("your timeline, list of friends, birthday") and, especially, what the service won't do ("the service will not post on your behalf"). There's still work to be done there, as the conditions are not negotiable yet but must be either accepted or rejected – but more flexible protocols will emerge to cover that. There's also, of course, the fact that Facebook itself "spies" on your activity. Slowly, grudgingly, that is changing – but we're not there yet. The next step is for enterprises to begin to provide the necessary tools that will enable the casual user to more completely control their data – and the release of their data to others – while protecting their privacy. Google (via Android), Apple (via iOS) and even Microsoft (through Windows Mobile) are all in a position to become the first mover in this area – but only if they're ready to fundamentally change their business models or complement them with an alternative approach. Indeed, some have taken tentative steps in that direction, while others seem to be headed in the opposite direction. Google and Facebook (and Microsoft, via Bing) do attempt to monetize your data. Apple tends to tell you what you want, then not allow you to change it.

But there are suggestions that users may be willing to pay for more control over their information – either in cash, or by licensing its re-use under strict guidelines. So who will step up? We shouldn't ignore Facebook, of course, but – without a mobile operating system – they are at a disadvantage compared to the other three. And maybe, lurking in the wings, there's an as yet undiscovered (or overlooked – yes, there are some interesting approaches) vendor ready to jump in and seize the market. After all, that's what Google did (surprising Yahoo!) and Facebook did (supplanting MySpace), so there is precedent for a well-designed (i.e., using Privacy by Design principles) start-up to sweep this market. Someone will, we're convinced of that. And just as soon as we've identified the key player, we'll let you know so you can be prepared.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Life Management Platforms: Players, Technologies, Standards

Jun 09, 2015 by Alexei Balaganski

When KuppingerCole outlined the concept of Life Management Platforms several years ago, the prospect of numerous completely new business models based on user-centric management of personal data may have seemed a bit too far-fetched to some. Although the very idea of customers being in control of their digital lives had been actively promoted for years by the efforts of ProjectVRM, and although even back then the public demand for privacy was already strong, the interest in the topic was still largely academic.

Quite a lot has changed over these years. The explosive growth of mobile devices and cloud services has significantly altered the way businesses communicate with their partners and customers. Edward Snowden's revelations have made a profound impression on the perceived importance of privacy. User empowerment is finally no longer an academic concept. The European Identity and Cloud Conference 2015 featured a whole track devoted to user-managed identity and access, which provided an overview of recent developments as well as notable players in this field.

Qiy Foundation, one of the veteran players (in 2012, we recognized them as the first real implementation of the LMP concept), presented their newest developments and business partnerships. They were joined by Meeco, a new project centered around social channels and IoT devices, which won this year's European Identity and Cloud Award.

Industry giants such as Microsoft and IBM presented their latest research in the field of user-managed identity as well. Both companies are doing extensive research on technologies implementing the minimal disclosure principle that is fundamental to the Life Management Platform concept. Both Microsoft's U-Prove and IBM's Identity Mixer projects aim to give users cryptographically certified, yet open and easy-to-use means of disclosing their personal information to online service providers in a controlled and privacy-enhancing manner. Both implement a superset of traditional Public Key Infrastructure functionality, but instead of having a single cryptographic public key, users can have an independent pseudonymized key for each transaction, which makes tracking impossible, yet still allows verification of any subset of personal information the user may choose to share with a service provider.
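The actual cryptography behind U-Prove and Identity Mixer is far more sophisticated, but the core idea of unlinkable, per-transaction identifiers can be sketched with a deliberately simplified example. This is not the real protocol – it merely shows how one long-term secret can yield a different pseudonym for every relying party and transaction, so that two services cannot correlate the same user:

```python
import hashlib
import hmac
import secrets

# The user's single long-term master secret (held only by the user).
master_secret = secrets.token_bytes(32)

def pseudonym_for(relying_party: str, transaction_id: str) -> str:
    """Derive an identifier specific to one party and one transaction,
    so different relying parties see unrelated identifiers for the same user."""
    message = f"{relying_party}:{transaction_id}".encode()
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()

# The same user appears under unrelated identifiers to different services:
print(pseudonym_for("insurance.example", "tx-001"))
print(pseudonym_for("retailer.example", "tx-002"))
```

What this toy version cannot show is the other half of the real schemes: the ability to prove, under such a pseudonym, that attributes certified by an issuer (age, residency, policy number) are genuine, without revealing anything else.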

Qiy Foundation, having the advantage of a very early start, already provides their own design and reference implementation of the whole stack of protocols and legal frameworks for an entire LMP ecosystem. Their biggest problem – and in fact the biggest obstacle to future development in this whole area – is the lack of interoperability with other projects. However, as the LMP track and workshop at EIC 2015 have shown, all parties working in this area are clearly aware of this challenge and are closely following each other's developments.

In this regard, the role of the Kantara Initiative cannot be overestimated. Not only has this organization been developing UMA, an important protocol for user-centric access management, privacy and consent, it is also running the Trust Framework Provider program, which ensures that various trust frameworks around the world are aligned with government regulations and with each other. Still, looking at the success of the FIDO Alliance in the field of strong authentication, we cannot but hope to see, in the near future, some kind of body uniting the major players in the LMP field, driven by the shared vision and understanding that interoperability is the most critical factor for future development.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



The business case for user empowerment

Jun 09, 2015 by Martin Kuppinger

At the end of the day, every good idea stands or falls with the business model. If there is no working business model, the best idea will fail. Some ideas reappear at a later time and succeed then. Take tablets: I used a Windows tablet back in the days of Windows XP, way before the Apple iPad arrived. But it obviously was too early for widespread adoption (and yes, it was a different concept from the iPad, but one that is quite popular again these days).

So, when talking about user empowerment, the first question must be: is there a business case? I believe there is, more than ever before. When talking about user empowerment, we are talking about enabling users to control their data. When looking at the fundamental concept we outlined back in 2012 as Life Management Platforms (there is an updated version available, dating from late 2013), this includes the ability to share data with other parties in a controlled way. It is furthermore built on the idea of having a centralized repository for personal information – at least logically centralized; physically it might be distributed.

Such a centralized data store simplifies the management of personal information, from scanned contracts to health data collected via one of these popular activity-tracking wristbands. Furthermore, users can manage their preferences, policies, etc. in a single location.

Thus, it seems to make a lot of sense for instance for health insurance companies to support the concept of Life Management Platforms.

However, there might be a counterargument: the health insurance company wants to get a full grip on the data. But is this really in conflict with supporting user empowerment and concepts such as Life Management Platforms? Honestly, I believe that there is a better business case for health insurance companies in supporting user empowerment. Why?

  1. They get rid of the discussion about what should happen to, for example, the data collected with an activity tracker once a customer moves to another health insurance company – the user can simply revoke access to that information (OK, the health insurer will still have to take care of its own copies, but that is easier to solve, particularly with more advanced approaches; see the small sketch after this list).
  2. However, the customer might still allow access to a pseudonymised version of that data – he (or she) might even do so without being a customer at all, which would then allow health insurance companies to gain access to more statistical information, allowing them to better shape rates and contracts. There might even be a centralized statistical service for the industry, collecting data across all health insurance companies.
  3. Anyway, the most compelling argument from my perspective is another one: it is quite simple to connect to a Life Management Platform. Supporting a variety of activity trackers and convincing customers that they must rely on a specific one isn't the best approach. Just connecting to a service that provides health data in a standardized way is simpler and cheaper. And customers can use the activity tracker they want or already have – if they want to share the data and benefit from better rates.
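As a small illustration of the first point – revocable, user-controlled sharing – here is a toy sketch of what a consent registry inside a Life Management Platform could look like. It is purely illustrative: names, attributes and the data model are invented, and a real platform would of course add authentication, auditing and durable storage.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Toy model of a user-controlled grant store (illustrative only)."""
    grants: dict = field(default_factory=dict)   # (subject, party) -> allowed attributes

    def grant(self, subject: str, party: str, attributes: list) -> None:
        self.grants[(subject, party)] = set(attributes)

    def revoke(self, subject: str, party: str) -> None:
        self.grants.pop((subject, party), None)

    def read(self, subject: str, party: str, attribute: str, data: dict):
        if attribute in self.grants.get((subject, party), set()):
            return data[attribute]
        raise PermissionError(f"{party} has no consent to read {attribute}")

registry = ConsentRegistry()
tracker_data = {"steps_per_day": 11534, "resting_heart_rate": 58}

registry.grant("jane", "health-insurer", ["steps_per_day"])
print(registry.read("jane", "health-insurer", "steps_per_day", tracker_data))  # allowed

registry.revoke("jane", "health-insurer")        # e.g. the customer switches insurer
try:
    registry.read("jane", "health-insurer", "steps_per_day", tracker_data)
except PermissionError as err:
    print(err)                                    # access is now denied
```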

User empowerment does not stand in stark contrast to the business model of most organizations. It is only in conflict with the business models of companies such as Facebook, Google, etc. However, in many cases organizations such as retailers, insurance companies, etc. do not really benefit from relying on the data these companies collect – they pay for it, and they might even pay twice, by unwillingly contributing to data collection whose results are then sold to the competition.

For most organizations, supporting user empowerment means simplified access to information and less friction from privacy discussions. Yes, users can revoke access – but companies can also build far better relationships with customers and thus minimize that risk. There are compelling business cases today. And, in contrast to 2012, the world appears to be ready for solutions that foster user empowerment.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



EMC to acquire Virtustream

May 27, 2015 by Mike Small

On May 26th EMC announced that it is to acquire the privately held company Virtustream. What does this mean and what are the implications?

Virtustream is both a software vendor and a cloud service provider (CSP). Its software offering includes a cloud management platform, xStream; an infrastructure assessment product, Advisor; and risk and compliance management software, ViewTrust. It also offers Infrastructure as a Service (IaaS) with datacentres in the US and Europe. KuppingerCole identified Virtustream as a "hidden gem" in our report Leadership Compass: Infrastructure as a Service - 70959.

Virtustream has used the combination of these products to target Fortune 500 companies and help them along their journey to the cloud. Legacy applications often have very specific needs that are difficult to reproduce in the vanilla cloud, and risk and compliance issues are the top concerns when migrating systems of record to the cloud.

In addition, the Virtustream technology works with VMWare to provide an extra degree of resource optimization through their Micro Virtual Machine (µVM) approach. This approach uses smaller units of allocation for both memory and processor, which removes artificial sizing boundaries, makes it easier to track the resources consumed, and results in fewer wasted resources.

The xStream cloud management software enables the management of hybrid clouds through a "single pane of glass" management console using open, published APIs. It is claimed to provide enterprise-grade security with integrity built upon capabilities in the processors. Virtustream was the first CSP to announce support for NIST draft report IR 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation. This allows the user to control the geolocation of their data held in the cloud.

EMC already provides their Federation Enterprise Hybrid Cloud Solution – an on-premises private cloud offering that provides a stepping stone to public cloud services. EMC also recently entered the cloud service market with an IaaS service, vCloud Air, based on VMWare. Since many organizations already run their IT on premises using VMWare, the intention was to make it possible to migrate these workloads to the cloud without change. An assessment of vCloud Air is also included in our Leadership Compass Report on Infrastructure as a Service – 70959.

The early focus of CSPs was on DevOps, but the market for enterprise-grade cloud solutions is a growth area as large organizations look to save costs through datacentre consolidation and "cloud sourcing" of IT services. However, success in this market needs the right combination of consultancy services, assurance and trust. Virtustream seems to have met with some success in attracting US organizations to their service. The challenge for EMC is to clearly differentiate between the different cloud offerings they now have and to compete with the existing strong players in this market. As well as the usual challenges of integrating itself into the EMC group, Virtustream may also find it difficult to focus on both providing cloud services and developing software.



Consent – Context – Consequence

May 21, 2015 by Martin Kuppinger

Consent and context: they are about to change the way we do IT. This is not only about security, where context is already of growing relevance. It is about the way we have to construct most applications and services, particularly the ones dealing with consumer-related data and PII in the broadest sense. Consent and context have consequences. Applications must be constructed in such a way that these consequences can be acted upon.

Imagine the EU comes up with tighter privacy regulations in the near future. Imagine you are a service provider or organization dealing with customers in various locations. Imagine your customers being more willing to share data – to consent to sharing – when they remain in control of that data. Imagine that what telcos must already do in at least some EU countries – handing over customer data to other telcos and "forgetting" large parts of that data rapidly – becomes mandatory for other industries and countries.

There are many different scenarios where regulatory changes or changing expectations of customers mandate changes in applications. Consent (and regulations) increasingly control application behavior.

On the other hand, there is context. Mitigating risks is tightly connected to understanding the user context and acting accordingly. The days of black and white security are past. Depending on the context, an authenticated user might be authorized to do more or less.

Simply put: consent and context have – indeed must have – consequences for application behavior. Thus, application design (and this includes cloud services) must take consent and context into account. Consent is about following the principles of Privacy by Design. An application designed for privacy can be opened up if the users or regulations allow. This is quite easy when done right – far easier than, for example, adapting an application to tightening privacy regulations. Context is about risk-based authentication and authorization or, in a broader view, APAM (Adaptive, Policy-based Access Management). Again, if an application is designed for adaptiveness, it can easily react to changing requirements. An application with static security is hard to change.
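As a sketch of what "designed for adaptiveness" can mean in code – the context signals, weights and thresholds below are purely illustrative, not a prescribed APAM policy model:

```python
from dataclasses import dataclass

@dataclass
class Context:
    device_managed: bool      # corporate-managed device?
    network_trusted: bool     # known corporate network?
    geo_velocity_ok: bool     # no "impossible travel" detected
    strong_auth: bool         # second factor presented?

def risk_score(ctx: Context) -> int:
    """Aggregate a simple, illustrative risk score from context signals."""
    score = 0
    if not ctx.device_managed:
        score += 30
    if not ctx.network_trusted:
        score += 20
    if not ctx.geo_velocity_ok:
        score += 40
    if not ctx.strong_auth:
        score += 30
    return score

def decide(action: str, ctx: Context) -> str:
    """Policy: the riskier the context, the less the authenticated user may do."""
    score = risk_score(ctx)
    if score >= 70:
        return "deny"
    if score >= 30 and action in {"export_report", "change_payment_details"}:
        return "step_up_authentication"
    return "allow"

print(decide("view_dashboard", Context(True, True, True, True)))            # allow
print(decide("change_payment_details", Context(False, True, True, False)))  # step_up_authentication
```

The point is not the particular scoring scheme but the structure: decisions are taken by a policy function over context, so tightening or loosening the rules is a policy change rather than a rewrite of the application.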

Understanding consent, context, and consequences can save organizations – software companies, cloud service providers, and any organization developing its own software – a lot of money. And it's not only about cost savings, but about agility: flexible software makes the business more agile and resilient to change and shortens time-to-market.



Venom, or the Return of the Virtualized Living Dead

May 21, 2015 by Matthias Reinwarth

The more elderly amongst us might remember a family of portable, magnetic-disk-based storage media with typical capacities ranging from 320 KB to 1.44 MB: the floppy disk. These were introduced in the early 1970s and went into decline in the late 1990s, so today's generation of digital natives has most probably never seen this type of media in the wild.

Would you ever have thought it possible, in 2015, that your virtual machines, your VM environment, your network and thus potentially your complete IT infrastructure might be threatened by a vulnerable floppy disk controller? Or even worse: by a virtualized floppy disk controller? No? Or that the VM you are running at your trusted provider of virtualization solutions might have been in danger, for the last 11 years, of being attacked by an admin of a VM running on the same infrastructure?

But this is exactly what was uncovered this week with the publication of a vulnerability called Venom, CVE-2015-3456 (Venom actually being an acronym for "Virtualized Environment Neglected Operations Manipulation"). The vulnerability has been identified, diligently documented, and explained by Jason Geffner of CrowdStrike.

Affected virtualization platforms include Xen, VirtualBox and QEMU, and it is the original open-source QEMU virtual floppy disk controller code, re-used in several virtualization environments, that has been identified as the alleged origin of the vulnerability.

As a floppy disk driver is still typically included in a VM configuration by default and the issue lies within the hypervisor code, almost any installation of the identified platforms is expected to be affected, no matter which underlying host operating system has been chosen. Although no exploits had been documented prior to the publication, this should be expected to change soon.

The immediately required steps are obvious:

  • If you are hosting a virtualization platform for yourself or your organization, make sure that you're running a version that is not affected, or otherwise apply the most recent patches. A patch for your host OS and virtualization platform should already be available. And do it now.
  • In case you are running one or more virtual machines at providers using one of the affected platforms, make sure that appropriate measures have been taken to mitigate this vulnerability. And do it now!

More importantly, this vulnerability again puts a spotlight on the reuse of open source software within other products, especially commercial products or those widely used in commercial environments. Very much like the Heartbleed bug or Shellshock, this vulnerability once more proves that simply relying on the assumed quality of open source code cannot be considered appropriate. This vulnerability has been out in the wild for more than 11 years now.

Open source software comes with the great opportunity of allowing code inspection and verification. But just because code is open does not mean that it is secure – not unless somebody actually takes a look (or more than one).

Improving application and code security has to be on the agenda right now. This is true for both commercial and open source software. Appropriate code analysis tools and services for achieving this are available. Intelligent mechanisms for static and dynamic code vulnerability analysis have to be integrated effectively into all relevant software development cycles. This is not a trending topic, but it should be. The responsibility for achieving this is certainly a commercial matter, but it is also a political one and one that has to be discussed in the various OSS communities. Venom might not be as disruptive as Heartbleed, but the next Heartbleed is out there, and we should try to get at least some of them fixed before they are exploited.

And while we’re at it, why not change the default for including floppy disks in new VMs from “yes” to “no”, just for a start…



100%, 80% or 0% security? Make the right choice!

May 19, 2015 by Martin Kuppinger

Recently, I have had a number of conversations with end-user organizations, covering a variety of Information Security topics but all having the same theme: there is a need for certain security approaches, such as strong authentication on mobile devices or secure information sharing, but the project has been stopped due to security concerns – the strong authentication approach is not as secure as the one currently implemented for desktop systems, some information would need to be stored in the cloud, etc.

That’s right, IT Security people stopped Information Security projects due to security concerns.

The result: there is still 0% security, because nothing has been done yet.

There is the argument that insecure is insecure: either something is perfectly secure or it is insecure. However, following that path, everything is insecure. There are always ways to break security if someone just invests sufficient criminal energy.

It is time to move away from our traditional black-and-white approach to security. It is not about being secure or insecure, but, rather, about risk mitigation. Does a technology help in mitigating risk? Is it the best way to achieve that target? Is it a good economic (or mandatory) approach?

When thinking in terms of risk, 80% security is obviously better than 0% security. 100% might seem even better, but it can also be worse, because it's costly, cumbersome to use, etc.

It is time to stop IT security people from inhibiting improvements in security and risk mitigation by setting unrealistic security baselines. Start thinking in terms of risk. Then 80% security now, at fair cost, is commonly better than 0% now or 100% at some point in the future.

Again: there never, ever will be 100% security. We might achieve 99% or 98% (depending on the scale we use), but cost grows exponentially; as we push security towards 100%, the cost tends towards infinity.
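To picture this with a deliberately simple model (an illustration of the argument, not a measured cost curve): if s denotes the security level achieved, the cost of reaching it behaves roughly like

```latex
C(s) \approx \frac{c}{1 - s}, \qquad 0 \le s < 1, \qquad \lim_{s \to 1} C(s) = \infty
```

In this model, going from 98% to 99% already doubles the cost (c/0.02 versus c/0.01), and 100% is unreachable at any finite budget – which is exactly why risk-based trade-offs, not absolute security, have to drive the decision.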

