
Blog posts by Mike Small

Why Governance Matters to IT Security

Dec 02, 2015 by Mike Small

MetricStream, a US company that supplies Governance, Risk and Compliance applications, held their GRC Summit in London on November 11th and 12th.  Governance is important to organizations because of the increasing burden of regulations and laws upon their operations.  It is specifically relevant to IT security because these regulations touch upon the data held in the IT systems.  It is also highly relevant because of the wide range of IT service delivery models in use today.

Organizations using IT services provided by a third party (for example a cloud service provider) no longer have control over the details of how that service is delivered.  This control has been delegated to the service provider.  However, the organization remains responsible for ensuring that the data is processed and held in a compliant way.  This is the challenge that governance can address, and it is why governance of IT service provision is becoming so important.

The distinction between governance and management is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way.  Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against agreed objectives.  Governance is distinct from management in that management plans, builds, runs and monitors activities in alignment with the direction set by the governance body to achieve the objectives.  This is illustrated for cloud services in the figure below.

Governance provides an approach to IT security that can be applied consistently across the many different IT service delivery models.  By focussing on the business objectives and monitoring outcomes, it decouples the activities involved in providing the service from those concerned with its consumption.  Most large organizations have a complex mix of IT services provided in different ways: on premise managed internally, on premise managed by a third party, hosted services and cloud services.  Governance provides a way for organizations to ensure that IT security and compliance can be directed, measured and compared across this range of delivery models in a consistent way.

Since this specification and measurement process can involve large amounts of data from a wide variety of sources, it helps to use a common governance framework (such as COBIT 5) and a technology platform such as the MetricStream GRC Platform.  This platform provides centralized storage of and access to risk and compliance data, together with a set of applications that allow this data to be gathered from a wide variety of sources and the results shared through a consistent user interface available on different devices.

The need for this common platform and integrated approach was described at the event by Isabel Smith, Director of Corporate Internal Audit at Johnson & Johnson.  Ms Smith explained that an integrated approach is particularly important because Johnson & Johnson has more than 265 operating companies located in 60 countries around the world, with more than 125,000 employees.  These operating companies have a wide degree of autonomy to allow them to meet local needs.  However, the global organization must comply with regulations ranging from financial ones, such as Sarbanes-Oxley, to those relating to health care and therapeutic products.  Using the common platform enabled Johnson & Johnson to achieve a number of benefits, including: getting people across the organization to use a common language around compliance and risk, streamlining and standardizing policies and controls, and obtaining an integrated view of control test results.

In conclusion, organizations need to take a governance-led approach to IT security across the heterogeneous IT service delivery models in use today.  Many of these are outside the direct control of the customer organization, and their use places control of the service and infrastructure in the hands of a third party.  A governance-based approach allows trust in the service to be assured through a combination of internal processes, standards and independent assessments.  Adopting a common governance framework and technology platform are important enablers for this.


AWS Security and Compliance Update

Nov 23, 2015 by Mike Small

Security is a common concern of organizations adopting cloud services and so it was interesting to hear from end users at the AWS Summit in London on November 17th how some organizations have addressed these concerns.

Financial services is a highly regulated industry with a strong focus on information security.  At the event Allan Brearley, Head of Transformation Services at Tesco Bank, described the challenges they faced exploiting cloud services to innovate and reduce cost, while ensuring security and compliance.  The approach that Tesco Bank took, which is the one recommended in KuppingerCole Advisory Note: Selecting your Cloud Provider, is to identify and engage with the key stakeholders.  According to Mr Brearley, it is important to adopt a culture of satisfying all of the stakeholders’ needs all of the time.

In the UK the government has a cloud first strategy.  Government agencies using cloud services must follow the Cloud Security Principles, first issued by the UK Communications-Electronics Security Group (CESG) in 2014.  These describe the need to take a risk-based approach to ensure suitability for purpose.  Rob Hart of the UK DVSA (Driver & Vehicle Standards Agency), which is responsible for road safety in the UK, described the DVSA’s journey to the adoption of AWS cloud services.  Mr Hart explained that the information being migrated to the cloud was classified according to UK government guidelines as “OFFICIAL”, which is equivalent to commercially sensitive or Personally Identifiable Information.  The key to success, according to Mr Hart, was to involve the Information Security Architects from the very beginning.  This was helped by these architects being in the same office as the DVSA cloud migration team.

AWS has always been very open that the responsibility for security is shared between AWS and the customer.  AWS publish their “Shared Responsibility Model” which distinguishes between the aspects of security that AWS are responsible for, and those for which the customer is responsible. 

Over the past months AWS has made several important announcements around the security and compliance aspects of their services.  There are too many to cover here, so I have chosen three around compliance and three around security.  First, the announcements around compliance include:

  • ISO/IEC 27018:2014 – AWS has published a certificate of compliance with this ISO standard which provides a code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.

  • UK CESG Cloud Security Principles.  In April 2015 AWS published a whitepaper to assist organisations using AWS for United Kingdom (UK) OFFICIAL classified workloads in alignment with CESG Cloud Security Principles.

  • Security by Design – In October 2015 AWS published a whitepaper describing a four-phase approach for security and compliance at scale across multiple industries.  This points to the resources available to AWS customers to implement security in the AWS environment, and describes how to validate that controls are operating.

Several new security services were also announced at AWS re:Invent in October.  The functionality provided by these services is not unique; however, it is tightly integrated with AWS services and infrastructure.  Therefore these services provide extra benefits to a customer that is prepared to accept the risk of added lock-in.  Three of these are:

  • Amazon Inspector – this service, which is in preview, scans applications running on EC2 for a wide range of known vulnerabilities. It includes a knowledge base of rules mapped to common security compliance standards (e.g. PCI DSS) as well as up to date known vulnerabilities.

  • AWS WAF – this Web Application Firewall can detect suspicious network traffic.  It helps to protect web applications from attack by blocking common web exploits like SQL injection and cross-site scripting.

  • S2N Open Source implementation of TLS – this is a replacement created by AWS for the commonly used OpenSSL (which contained the “Heartbleed” vulnerability).  S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code.  This code has been contributed to Open Source and is available from the S2N GitHub Repository.

AWS has taken serious steps to help customers using its cloud services to do so in a secure manner and to assure that they remain compliant with laws and industry regulations.  The customer experiences presented at the event confirm that AWS’s claims around security and compliance are supported in real life.  KuppingerCole recommends that customers using AWS services should make full use of the security and compliance functions and services provided by AWS.


Building a Cyber Defence Centre: IBM’s rules for success

Nov 13, 2015 by Mike Small

According to GCHQ, the number of cyber-attacks threatening UK national security has doubled in the past 12 months. How can organizations protect themselves against this growing threat, especially when statistics show that most data breaches are only discovered some time after the attack took place? One important approach is to create a Cyber Defence Centre to implement and co-ordinate the activities needed to protect against, detect and respond to cyber-attacks.

The Cyber Defence Centre has evolved from the SOC (Security Operations Centre). It supports the processes for enterprise security monitoring, defence, detection and response to cyber based threats. It exploits Real Time Security Intelligence (RTSI) to detect these threats in real time or near real time, to enable action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that there is a real threat.

A Cyber Defence Centre is not cheap or easy to implement so most organizations need help with this from an organization with real experience in this area. At a recent briefing IBM described how they have evolved a set of best practice rules based on their analysis of over 300 SOCs. These best practices include:

The first and most important of these rules is to understand the business perspective of what is at risk. It has often been the case that the SOC would focus on arcane technical issues rather than the business risk. The key objective of the Cyber Defence Centre is to protect the organization’s business critical assets. It is vital that what is business-critical is defined by the organization’s business leaders rather than the IT security group.

Many SOCs have evolved from NOCs (Network Operation Centres) – however the NOC is not a good model for cyber-defence. The NOC is organized to detect, manage and remediate what are mostly technical failures or natural disasters rather than targeted attacks. Its objective is to improve service uptime and to restore service promptly after a failure. On the other hand, the Cyber Defence Centre has to deal with the evolving tactics, tools and techniques of intelligent attackers. Its objective is to detect these attacks while at the same time protecting the assets and capturing evidence. The Cyber Defence Centre should assume that the organizational network has already been breached. It should include processes to proactively seek attacks in progress rather than passively wait for an alarm to be raised.

The Cyber Defence Centre must adopt a systematized and industrialized operating model. An approach that depends upon individual skills is neither predictable nor scalable. The rules and processes should be designed using the same practices as for software, with proper versioning and change control. The response to a class of problem needs to be worked out together with the rules on how to detect it; when the problem occurs is not a good time to figure out what to do. Measurement is critical – you can only manage what you can measure, and measurement allows you to demonstrate the changing levels of threats and the effectiveness of the cyber defence.

Finally, as explained by Martin Kuppinger in his blog: Your future Security Operations Center (SOC): Not only run by yourself, it is not necessary or even practical to operate all of the cyber defence activities yourself. Enabling this sharing of activities needs a clear model of how the Cyber Defence Centre will be operated. This should cover the organization and the processes as well as the technologies employed. It is essential for deciding what to retain internally and for defining what is outsourced in an effective manner. Once again, an organization will benefit from help to define and build this operational model.

At the current state of the art for Cyber Defence, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and the complexity of the analysis that is required to distinguish real threats. This up-to-date knowledge needs to be delivered as part of the Cyber Defence Centre solution.

KuppingerCole Advisory Note: Real Time Security Intelligence provides an in-depth look at this subject.


Real Time Security Intelligence (RTSI)

Nov 03, 2015 by Mike Small

Organizations depend upon their IT systems and the information that these provide in order to operate and grow. However, this information and the infrastructure upon which it depends are under attack. Statistics show that most data breaches are detected by agents outside of the organization rather than by internal security tools. Real Time Security Intelligence (RTSI) seeks to remedy this.

Unfortunately, many organizations fail to take simple measures to protect against known weaknesses in infrastructure and applications. However, even those organizations that have taken these measures are subject to attack. The preferred technique of attackers is increasingly one of stealth; the attacker wants to gain access to the target organization’s systems and data without being noticed. The more time the attacker has for undetected access, the greater the opportunity to steal data or cause damage.

Traditional perimeter security devices like firewalls, IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems) are widely deployed. These tools are effective at removing certain kinds of weaknesses. They also generate alerts when suspicious events occur; however, the volume of events is such that it is almost impossible to investigate each as it occurs. Whilst these devices remain an essential part of the defence, for the agile business using cloud services, with mobile users and connecting directly to customers and partners, there is no perimeter and they are not sufficient.

SIEM (Security Information and Event Management) was promoted as a solution to these problems. However, in reality SIEM is a set of tools that can be configured and used to analyse event data after the fact and to produce reports for auditing and compliance purposes. While it is a core security technology, it has not been successful at providing actionable security intelligence in real time.

This has led to the emergence of a new technology: Real Time Security Intelligence (RTSI). This is intended to detect threats in real time or near real time, to enable action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that there is a real threat.
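To make this concrete, here is a minimal sketch in Python of the kind of reduction involved, where events are baselined per source and only large deviations from normal are raised as alarms. The data, the names and the simple z-score test are illustrative assumptions on my part, not a description of any particular product’s method.

    # Illustrative sketch: reduce a flood of security events to a handful of
    # high-confidence alarms by flagging sources whose current event rate
    # deviates strongly from their own historical baseline.
    from collections import Counter
    from statistics import mean, stdev

    def actionable_alarms(current_counts, histories, threshold=3.0):
        """Return sources whose current event count exceeds their baseline
        by more than `threshold` standard deviations (a z-score test)."""
        alarms = []
        for source, count in current_counts.items():
            history = histories.get(source, [])
            if len(history) < 2:
                continue  # not enough data to calibrate "normal" activity
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (count - mu) / sigma > threshold:
                alarms.append((source, count, round(mu, 1)))
        return alarms

    # Hypothetical per-hour failed-login counts collected by a SIEM.
    histories = {"10.0.0.5": [12, 9, 11, 10], "10.0.0.9": [100, 95, 102, 98]}
    current = Counter({"10.0.0.5": 250, "10.0.0.9": 101})

    # Thousands of raw events collapse to a single actionable alarm.
    print(actionable_alarms(current, histories))  # [('10.0.0.5', 250, 10.5)]

The point of the sketch is the calibration problem described below: the quality of the alarms depends entirely on how well "normal" has been established for each source.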

At the current state of the art for RTSI, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and the complexity of the analysis that is required to distinguish real threats. This up-to-date knowledge needs to be delivered as part of the RTSI solution.

The volume of threats to IT systems, their potential impact and the difficulty of detecting them are the reasons why real time security intelligence has become important. However, RTSI technology is at an early stage and the problem of calibrating normal activity still requires considerable skill. It is important to look for a solution that can easily build on the knowledge and experience of the IT security community, vendors and service providers. End user organizations should always opt for solutions that include managed services and pre-configured analytics, not just tools.

KuppingerCole Advisory Note: Real Time Security Intelligence - 71033 provides an in depth look at this subject.


And all for the want of a nail

Oct 26, 2015 by Mike Small

On Friday morning (October 23rd) I was preparing for my lecture on software vulnerabilities to the final year degree students at the University of Salford when I heard the news of the TalkTalk data breach.

Now this is not about that breach in particular – it is important to wait until the detailed investigation is complete before drawing conclusions.  However, that breach provided me with an example of the high level of responsibility now borne by the CISO.  Using the story as an example, I asked the students how they would like to explain to the press and 4 million customers that their organization had suffered a data breach – especially if it was, in the words of the old proverb, “all for the want of a nail”.

So what does this proverb mean in this context?  Well the evidence from the many data breach surveys is that the majority of breaches occur because of vulnerabilities that could easily have been avoided.  In my lecture I cover many of these: in particular the OWASP Top Ten project and the CWE/SANS 25 most dangerous software errors.  Both of these identify SQL Injection as a highly dangerous but easily avoidable vulnerability.

So what is SQL Injection?  When a web-based application allows users of the web interface to perform a query using a text field, it is vital that the application checks the user’s input to that field.

The need for this check can be explained using an example – imagine that the field allows the user to input the brand name of the products they wish to see.  If the application simply includes the text that the user inputs directly into the SQL query, there is a danger.  It allows a hacker to input text which is not a brand name but is actually a fragment of SQL that would always be logically true.  In this case the SQL query would return every record in the database.
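To illustrate, here is a hypothetical sketch in Python (using the built-in sqlite3 module) of a query built by unchecked string concatenation; the table and the attacker’s input are invented for the example.

    import sqlite3

    # A hypothetical products table standing in for the real database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (brand TEXT, name TEXT)")
    conn.execute("INSERT INTO products VALUES ('Acme', 'Widget'), ('Other', 'Secret')")

    # Attacker-supplied "brand name" typed into the web form's text field.
    user_input = "Acme' OR '1'='1"

    # Vulnerable: the user's text is pasted directly into the SQL statement.
    query = "SELECT * FROM products WHERE brand = '" + user_input + "'"

    # The WHERE clause is now: brand = 'Acme' OR '1'='1' -- always true,
    # so every record in the table is returned, not just the Acme products.
    print(conn.execute(query).fetchall())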

Encrypting the database does not help with SQL Injection because the data must have already been decrypted, in the expectation that the system is being used in a legitimate way, in order to perform the query and to provide the results to the application.

The programming effort needed to avoid this kind of vulnerability is very low.  All that is usually needed is for the application to scan the content for certain character patterns.  Furthermore, there is a wide range of tools available that will scan code and exercise the application to detect this as well as other vulnerabilities.  So this check is the equivalent of the nail in the old proverb.
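As a concrete illustration of how cheap the fix is, here is a sketch of the parameterized-query approach that such tools and coding standards commonly recommend, reusing the hypothetical table from the example above; the database driver then treats the input strictly as data, never as SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (brand TEXT, name TEXT)")
    conn.execute("INSERT INTO products VALUES ('Acme', 'Widget'), ('Other', 'Secret')")

    # The '?' placeholder makes the driver treat the input purely as a value,
    # so the quote characters in the injected text have no SQL meaning.
    attack = "Acme' OR '1'='1"
    print(conn.execute("SELECT * FROM products WHERE brand = ?", (attack,)).fetchall())
    # [] -- the injected string matches no real brand

    legitimate = "Acme"
    print(conn.execute("SELECT * FROM products WHERE brand = ?", (legitimate,)).fetchall())
    # [('Acme', 'Widget')]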

The consequences of a data breach extend well beyond the organization holding the data.  If an organization loses its own money that organization and its shareholders bear the consequences.  However if the personal details of its customers fall into the wrong hands they will be the ones to suffer.  When a family’s payment card is refused in the supermarket on a Friday evening or their life savings are stolen from their bank account this is a personal tragedy not just a business risk.

So the CISO is responsible not only for the security of the organization but also for the stewardship of the data that the organization holds about its customers, partners and suppliers. Taking the simple steps needed to avoid well-known vulnerabilities is the equivalent of the nail in the proverb.  Failing to take these can lead to much wider consequences.  It will be difficult for a CISO to explain to everyone touched by a data breach why the organization’s stewardship of their data was lacking for the want of a nail.



Getting the Cloud under Control

Oct 06, 2015 by Mike Small

Many organizations are concerned about the use of cloud services; the challenge is to securely enable the use of these services without negating the benefits that they bring. To meet this challenge it is essential to move from IT Management to IT Governance.

Cloud services are outside the direct control of the customer’s organization, and their use places control of the service and infrastructure in the hands of the Cloud Service Provider (CSP). The customer cannot ensure the service and the security it provides – the customer can only assure the service through a governance process. A governance based approach allows trust in the CSP to be assured through a combination of internal processes, standards and independent assessments.

Governance is distinct from management in that management plans, builds, runs and monitors the IT service in alignment with the direction set by the governance body to achieve the objectives. This distinction is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way. Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against the agreed objectives.

The starting point for a governance based approach is to define the organizational objectives for using cloud services; everything else follows from these. Then set the constraints on the use of cloud services in line with the organization’s objectives and risk appetite. There are risks involved with all IT service delivery models; assessing these risks in a common way is fundamental to understanding the additional risk (if any) involved in the use of a cloud service. Finally there are many concrete steps that an organization can take to manage the risks associated with their use of cloud services. These include:

  • Common governance approach – the cloud is only one way in which IT services are delivered in most organizations. Adopt a common approach to governance and risk management that covers all forms of IT service delivery.
  • Discover Cloud Use – find out what cloud services are actually being used by the organization. There is now a growing market in tools to help with this. Assess the services that you discover are being used against the organization’s objectives and risk appetite.
  • Govern Cloud Access – govern access to cloud services with the same rigour as if they were on premise. There should be no need for you to use a separate IAM system for this – identity federation standards like SAML 2.0 are well defined and the service should support these. The service should also support the authentication methods and granular access controls that your organization requires, and monitor individuals’ use of the services.
  • Identify who is responsible for each risk relating to the cloud service – the CSP or your organization. Make sure that you take care of your responsibilities and assure that the CSP meets their obligations.
  • Require Independent certification – an important way to assure that a cloud service provides what it claims is through independent certification. Demand that the CSP provides independent certification and attestations for the aspects of the service that matter to your organization.
  • Use standards – standards provide the best way of avoiding technical lock-in to a proprietary service. Understand what standards are relevant and require the service to support these standards.
  • Encrypt your data – there are many ways in which data can be leaked or lost from a cloud service. The safest way to protect your data against this risk is to encrypt it, and to make sure that you retain control over the encryption keys (a minimal sketch follows this list).
  • Read the Contract – make sure you read and understand the contract. Most cloud service contracts are offered by the CSP on a take it or leave it basis. Make sure that what is offered is acceptable to your organization.
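As a minimal sketch of the encryption point above: encrypt data on the client before it is sent to the cloud service, and keep the key in your own key store. This example assumes the third-party Python 'cryptography' package is installed; the names are illustrative.

    from cryptography.fernet import Fernet  # assumes: pip install cryptography

    # Generate the key locally and keep it in your own key store --
    # retaining control of the key is the point of the exercise.
    key = Fernet.generate_key()
    f = Fernet(key)

    record = b"customer data destined for a cloud service"
    ciphertext = f.encrypt(record)  # only this leaves your organization

    # Data leaked or lost from the CSP is useless without the local key.
    assert f.decrypt(ciphertext) == record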

KuppingerCole has extensive experience of guiding organizations through their adoption of cloud services as well as many published research notes. Remember that the cloud is only one way of obtaining an IT service – have a common governance process for all. If a cloud service meets your organization’s need then the simple motto is “to trust your Cloud Provider but verify everything they claim”.

This article originally appeared in the KuppingerCole Analysts' View newsletter.


Windows 10: How to Ensure a Secure and Private Experience

Aug 13, 2015 by Mike Small

Together with many others I received an offer from Microsoft to upgrade my Windows 7 desktop and Windows 8.1 laptop to Windows 10. Here is my initial reaction to successfully performing this upgrade with a specific focus on the areas of privacy and security.

As always when considering security, the first and most important step is to understand what your requirements are. In my case I have several computers, and I mainly use them with Microsoft Office, to use the internet for research, and to store personal photos. My main requirements are for consistency and synchronization across these systems, together with security and reliability. The critical dimensions that I considered are privacy, confidentiality, integrity and availability. Let’s start with availability:


  1. Make sure you back up your files before you start the upgrade! My files were preserved without problems but it is better to be safe than sorry. It is also a good idea to understand how you could roll back if there is a catastrophic failure during the upgrade. One really big improvement over Windows 8 is the ability to restore files from a Windows 7 backup.
  2. Check that your computer is compatible with the upgrade. The Microsoft upgrade tool checks your computer for compatibility, and some manufacturers provide information on which systems they have tested. The Dell support site informed me that my new laptop was tested but my old desktop wasn’t. However, both upgraded without problems, though I did need to re-install some software for my HP printer.
  3. Consider whether you want new features as soon as they are available (with the risk that they may cause problems). The default setting for updates is for these to be automatically installed. You can change this through the advanced setting menu by checking the box to defer upgrades. You will still receive security fixes but new features will be delayed.

  4. Windows 10 has a number of recovery options – you can roll back to your previous OS for up to 30 days after the upgrade as well as performing a complete reset. 


Integrity:

  1. Windows 10 automatically includes Windows Defender for protection – make sure this is activated. If you prefer another anti-malware product you will need to install this yourself.
  2. If you already use OneDrive then you will notice some changes. Previous versions of the OneDrive App supported a placeholder function that allowed File Explorer to display files that were held online but not sync’d onto your PC. This is no longer available; any directories that are not sync’d are not visible through File Explorer. I experienced sync problems with files that were previously held online only. I was able to resolve this using the OneDrive Settings menu – first uncheck the folder(s) and save the settings. The folders and files are then erased on your device (scary!). Then repeat the process, but this time check the folders for sync in the menu. When you save these settings the files in the folders are re-synced from the cloud.


Confidentiality:

  1. The user accounts are copied from your previous OS – if these were all local accounts then they remain so. If you have a Microsoft account then you can link this with one of these local accounts. Doing this allows you to use a PIN instead of a password to log in.
  2. If you are using Office 365 you will already have a Microsoft Account; you can also set up a free account which provides some free OneDrive space. However, if you use the Microsoft account it is a good idea to understand and manage your privacy settings.
  3. The files in OneDrive are all held in the Microsoft cloud, and you need to accept the risk that this poses, bearing in mind that most breaches result from weak user credentials.
  4. If you are using BitLocker to encrypt your files then the encryption key will also be held on your OneDrive unless you opt out. 


Privacy:

  1. You should review the privacy settings from the Express setup and decide what to change.

    A future blog will provide more detailed advice on what these settings mean and how best to set things up. My short advice is to go through these settings carefully and choose which Apps you are happy to allow to access the various functions. In particular I would disable the App Connector since this gives access to unknown apps. I would also not allow Apps to access my name, picture and other info – but then I’m just paranoid.
  2. You also need to consider the privacy setting for the new Edge browser. These are to be found under “Advanced Settings”. Consider whether you really need Flash enabled since this has been a frequent target for attacks. Also consider enabling the “Do not Track Requests Button”.

  3. If you decide to use Cortana – this may involve setting the region and language and downloading a language pack – make sure you check out the privacy agreement.

My personal experience with this upgrade has been very positive. The upgrades went smoothly, and the performance – especially the boot-up time for my old desktop – is much better than with Windows 7. The settings are now much more understandable and accessible, but you need to take the time to review the defaults to achieve your objectives for privacy and confidentiality. KuppingerCole plan a series of future blogs that will give more detailed guidance on how to do this.


Security and Operational Technology / Smart Manufacturing

Jul 07, 2015 by Mike Small

Industry 4.0 is the German government’s strategy to promote the computerization of the manufacturing industry. This strategy foresees that industrial production in the future will be based on highly flexible mass production processes that allow rich customization of products. This future will also include the extensive integration of customers and business partners to provide business and value-added processes. It will link production with high-quality services to create so-called “hybrid products”.

At the same time, in the US, the Smart Manufacturing Leadership Coalition is working on their vision for “Smart Manufacturing”. In 2013 the UK’s Institute for Advanced Manufacturing, which is part of the University of Nottingham, received a grant of £4.6M for a study on Technologies for Future Smart Factories.

This vision depends upon the manufacturing machinery and tools containing embedded computer systems that will communicate with each other inside the enterprise, and with partners and suppliers across the internet. This computerization and communication will enable optimization within the organizations, as well as improving the complete value adding chain in near real time through the use of intelligent monitoring and autonomous decision making processes. This is expected to lead to the development of completely new business models as well as exploiting the considerable potential for optimization in the fields of production and logistics.

However there are risks, and organizations adopting this technology need to be aware of and manage these risks. Compromising the manufacturing processes could have far reaching consequences. These consequences include the creation of flawed or dangerous end products as well as disruption of the supply chain. Even when manufacturing processes based on computerized machinery are physically isolated they can still be compromised through maladministration, inappropriate changes and infected media. Connecting these machines to the internet will only increase the potential threats and the risks involved.

Here are some key points to securely exploiting this vision:

  • Take a Holistic Approach: the need for security is no longer confined to the IT systems – the business systems of record – but needs to extend to cover everywhere that data is created, transmitted or exploited. Take a holistic approach and avoid creating another silo.
  • Take a Risk-based Approach: the security technology and controls that need to be built should be determined by balancing risks against rewards, based on the business requirements, the assets at risk, the needs for compliance and the organizational risk appetite. This approach should seek to remove identifiable vulnerabilities and put in place appropriate controls to manage the risks.
  • Trusted Devices: This is the most immediate concern since many devices that are being deployed today are likely to be in use, and hence at risk, for long periods into the future. These devices must be designed and manufactured to be trustworthy. They need an appropriate level of physical protection as well as logical protection against illicit access and administration. It is highly likely that these devices will become a target for cyber criminals who will seek to exploit any weaknesses through malware. Make sure that they contain protection that can be updated to accommodate evolving threats.
  • Trusted Data: the organization needs to be able to trust the data coming from these devices. It must be possible to confirm the device from which the data originated, and that this data has not been tampered with or intercepted (a minimal sketch of one approach follows this list). There is existing low-power secure technology and standards that have been developed for mobile communications and banking, and these should be appropriately adopted or adapted to secure the devices.
  • Identity and Access Management – to be able to trust the devices and the data they provide means being able to trust their identities and control access. There are a number of technical challenges in this area; some solutions have been developed for specific kinds of device, however there is no general panacea. Hence it is likely that more device specific solutions will emerge, and this will add to the general complexity of the management challenges.
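To make the Trusted Data point concrete, here is a minimal sketch of one way a device could sign each reading so that its origin and integrity can be verified on receipt. The shared HMAC key and the field names are illustrative assumptions; as noted above, real deployments would typically keep keys in secure hardware and may use asymmetric techniques instead.

    import hashlib
    import hmac
    import json

    DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical per-device secret

    def sign_reading(device_id, payload):
        """The device serializes its reading and attaches an HMAC tag."""
        message = json.dumps({"id": device_id, "data": payload}, sort_keys=True).encode()
        tag = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
        return message, tag

    def verify_reading(message, tag):
        """The receiver recomputes the tag to confirm origin and integrity."""
        expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)  # constant-time comparison

    message, tag = sign_reading("press-07", {"temp_c": 81.4})
    assert verify_reading(message, tag)              # genuine reading accepted
    assert not verify_reading(message + b"x", tag)   # tampered reading rejected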

More information on this subject can be found in Advisory Note: Security and the Internet of Everything and Everyone - 71152 - KuppingerCole


From Hybrid Cloud to Standard IT?

Jun 18, 2015 by Mike Small

I have recently heard from a number of cloud service providers (CSP) telling me about their support for a “hybrid” cloud. What is the hybrid cloud and why is it important? What enterprise customers are looking for is a “Standard IT” that would allow them to deploy their applications flexibly wherever is best. The Hybrid Cloud concept goes some way towards this.

There is still some confusion about the terminology that surrounds cloud computing, so let us go back to basics. The generally accepted definition of cloud terminology is in NIST SP-800-145. According to this there are three service models and four deployment models. The service models are IaaS, PaaS and SaaS. The four deployment models for cloud computing are: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud. So “Hybrid” relates to the way cloud services are deployed. The NIST definition of the Hybrid Cloud is:

“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”

However sometimes Hybrid is used to describe a cloud strategy – meaning that the organization using the cloud will use cloud services for some kinds of application but not for others. This is a perfectly reasonable strategy but not quite in line with the above definition. So I refer to this as a Hybrid Cloud Strategy.

In fact this leads us on to the reality for most enterprises: the cloud is just another way of obtaining some of their IT services. Cloud services may be the ideal solution for development because of the speed with which they can be obtained. They may be good for customer interaction services because of their scalability. They may be the best way to perform data analytics needing the occasional burst of very high performance computing. Hence, to the enterprise, the cloud becomes another added complexity in their already complex IT environment.

So the CSPs have recognised that, in order to tempt enterprises to use their cloud services, they need to recognise this complexity challenge that enterprises face and provide help to solve it. So the “Hybrid” cloud that will be attractive to enterprises needs to:

  • Enable the customer to easily migrate some parts of their workload and data to a cloud service, since some data may be required to remain on premise for compliance or audit reasons.
  • Orchestrate the end to end processing, which may involve on premise components as well as services from other cloud providers.
  • Allow the customer to assure the end to end security and compliance of their workload.

When you look at these requirements it becomes clear that standards are going to be a key component to allow this degree of flexibility and interoperability. The standards needed go beyond the support for Hypervisors, Operating Systems, Databases and middleware to include the deployment, management and security of workloads in a common way across on premise and cloud deployments, as well as between cloud services from different vendors.

There is no clear winner in the standards yet – although OpenStack has wide support, including from IBM, HP and Rackspace – but one of the challenges is that vendors offer versions of this with their own proprietary additions. Other important vendors, including AWS, Microsoft and VMWare, have their own proprietary offerings that they would like customers to adopt. So the game is not over yet, but the industry should recognize that the real requirement is for a “Standard IT” that can easily be deployed in whatever way is most appropriate at any given time.


EMC to acquire Virtustream

May 27, 2015 by Mike Small

On May 26th EMC announced that it is to acquire the privately held company Virtustream. What does this mean and what are the implications?

Virtustream is both a software vendor and a cloud service provider (CSP). Its software offering includes a cloud management platform xStream, an infrastructure assessment product Advisor, and the risk and compliance management software, ViewTrust. It also offers Infrastructure as a Service (IaaS) with datacentres in the US and Europe. KuppingerCole identified Virtustream as a “hidden gem” in our report: Leadership Compass: Infrastructure as a Service - 70959

The combination of these products has been used by Virtustream to target the Fortune 500 companies and help them along their journey to the cloud. Legacy applications often have very specific needs that are difficult to reproduce in the vanilla cloud, and risk and compliance issues are the top concerns when migrating systems of record to the cloud.

In addition, the Virtustream technology works with VMWare to provide an extra degree of resource optimization through their Micro Virtual Machine (µVM) approach. This approach uses smaller units of allocation for both memory and processor, which removes artificial sizing boundaries, makes it easier to track the resources consumed, and reduces wasted resources.

The xStream cloud management software enables the management of hybrid clouds through a “single pane of glass” management console using open published APIs. It is claimed to provide enterprise grade security with integrity built upon capabilities in the underlying processors. Virtustream was the first CSP to announce support for NIST draft report IR 7904 Trusted Geolocation in the Cloud: Proof of Concept Implementation. This allows the user to control the geolocation of their data held in the cloud.

EMC already provides their Federation Enterprise Hybrid Cloud Solution – an on premise private cloud offering that provides a stepping stone to public cloud services. EMC also recently entered the cloud service market with vCloud Air, an IaaS service based on VMWare. Since many organizations already use VMWare to run their IT on premise, this was intended to make it possible to migrate these workloads to the cloud without change. An assessment of vCloud Air is also included in our Leadership Compass Report on Infrastructure as a Service – 70959.

The early focus by CSPs was on DevOps, but the market for enterprise grade cloud solutions is a growth area as large organizations look to save costs by datacentre consolidation and “cloud sourcing” IT services. However, success in this market needs the right combination of consultancy services, assurance and trust. Virtustream seems to have met with some success in attracting US organizations to their service. The challenge for EMC is to clearly differentiate between the different cloud offerings they now have and to compete with the existing strong players in this market. As well as the usual challenges of integrating itself into the EMC group, Virtustream may also find it difficult to focus on both providing cloud services and developing software.

