
Blog posts by Mike Small

Migrating IT Infrastructure to the Cloud

Mar 10, 2015 by Mike Small

Much has been written about “DevOps”, but there are other ways for organizations to benefit from the cloud. Moving all or part of an existing IT infrastructure and its applications to the cloud could provide capital savings and, in many cases, increase security.

The cloud has provided an enormous opportunity for organizations to create new markets, to experiment and develop new applications without the need for upfront investment in hardware, and to create disposable applications for marketing campaigns. This approach is generally known as DevOps: the application is developed and deployed into operation in an iterative manner, made possible by an easily expandable cloud infrastructure.

While DevOps has produced some remarkable results, it doesn’t help with the organization’s existing IT infrastructure. There are many reasons why an organization could benefit from moving some of its existing IT systems to the cloud. Cost is one, but there are others, including the need to constantly update hardware and to maintain a data centre. Many small organizations are limited to operating in premises that are not suitable as a data centre; for example, in offices over a shopping mall. Although the organization may be wholly dependent upon its IT systems, it may have no control over sprinkler systems, power, telecommunications, or even guaranteed 24x7 access to the building. It may be at risk of theft as well as fire and other incidents outside of its control. These are all factors which are well taken care of by cloud service providers (CSPs) hosted in Tier III data centres.

However, moving existing IT systems and applications to the cloud is not so simple. These legacy applications may depend upon very specific characteristics of the existing infrastructure, such as IP address ranges or a particular technology stack, which may be difficult to reproduce in standard cloud environments. It is also important for customers to understand the sensitivity of the systems and data that they are moving to the cloud and the risks to which these may be exposed. Performing a cloud readiness risk assessment is an essential prerequisite for an organization planning to use cloud services. Many of the issues around this relate to regulation and compliance and are described in KuppingerCole Analysts' View on Compliance Risks for Multinationals.

It was therefore interesting to hear of a US-based CSP, dinCloud, that is focusing on this market. dinCloud first brought a hosted virtual desktop to the market. They have now expanded their offering to include servers, applications and IT infrastructure. dinCloud claims that its “Business Provisioning” service can help organizations to quickly and easily migrate all or part of their existing infrastructure to the cloud.

This is a laudable aim; dinCloud claims some successes in the US and intends to expand worldwide. However, some of the challenges that it will face in Europe are the same as those currently faced by all US-based CSPs – a lack of trust. Some of this has arisen through the Snowden revelations, and the ongoing court case in which Microsoft is being required to hand over emails held in Ireland to the US authorities is fanning these flames. On top of this, the EU privacy regulations, which are already strict, face being strengthened; and in some countries certain kinds of data must remain within the country. These challenges are discussed in Martin Kuppinger’s blog Can EU customers rely on US Cloud Providers?

This is an interesting initiative but to succeed in Europe dinCloud will need to win the trust of their potential customers. This will mean expanding their datacentre footprint into the EU/EEA and providing independent evidence of their security and compliance. When using a cloud service a cloud customer has to trust the CSP; independent certification, balanced contracts taking specifics of local regulations and requirements into account, and independent risk assessments are the best way of allowing the customer to verify that trust.


Organization, Security and Compliance for the IoT

Mar 03, 2015 by Mike Small

The Internet of Things (IoT) provides opportunities for organizations to get closer to their customers and to provide products and services that are more closely aligned to their needs. It provides the potential to enhance the quality of life for individuals through better access to information and more control over their environment. It makes possible more efficient use of infrastructure by more precise control based on detailed and up-to-date information. It will change the way goods are manufactured by integrating manufacturing machinery, customers and partners, allowing greater product customization as well as optimizing costs, processes and logistics.

However, the IoT comes with risks. The US Federal Trade Commission recently published a report of a workshop it held on this subject. This report, which is limited in its scope to IoT devices sold or used by consumers, identifies three major risks: enabling unauthorised access and misuse of personal information, facilitating attacks on other systems, and creating risks to personal safety. In KuppingerCole’s view the wider risks are summarized in the following figure:

Organizations adopting this technology need to be aware of and manage these risks. As with most new technologies there is often a belief that there is a need to create a new organizational structure. In fact it is more important to ensure that the existing organization understands and addresses the potential risks as well as the potential rewards.

Organizations should take a well governed approach to the IoT by clearly defining the business objectives for its use and by setting constraints. The IoT technology used should be built to be trustworthy and should be used in a way that is compliant with privacy laws and regulations. Finally the organization should be able to audit and assure the organization’s use of the IoT.

The benefits from the IoT come from the vast amount of data that can be collected, analysed and exploited. Hence the challenges of Big Data governance, security and management are inextricably linked with the IoT. The data needs to be trustworthy and it should be possible to confirm both its source and integrity. The infrastructure used for the acquisition, storage and analysis of this data needs to be secured; yet the IoT is being built using many existing protocols and technologies that are weak and vulnerable.

The devices which form part of the IoT must be designed, manufactured, installed and configured to be trustworthy. The security built into these devices for the risks identified today needs to be extensible to be proof against future threats, since many of these devices will have lives measured in decades. There are existing low-power secure technologies and standards that have been developed for mobile communications and banking, and these should be appropriately adopted, adapted and improved to secure the devices.

Trust in the devices is based on trust in their identities and so these identities need to be properly managed. There are a number of challenges relating to this area but there is no general solution.

Organizations exploiting data from the IoT should do this in a way that complies with laws and regulations. For personal information, particular care should be given to aspects such as ensuring informed consent, data minimisation and information stewardship. There is a specific challenge to ensure that users understand and accept that ownership of the device does not imply complete “ownership” of the data. It is important that the lifecycle of data from the IoT is properly managed, from creation or acquisition to disposal. An organization should have a clear policy which identifies which data needs to be kept, why it needs to be kept and for how long. There should also be a clear policy for the deletion of data that is not retained for compliance or regulatory reasons.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.


Where is my Workload?

Jan 15, 2015 by Mike Small

One of the major challenges that faces organizations using a cloud or hosting service is to know where their data is held and processed. This may be to ensure that they remain in compliance with laws and regulations, or simply because they mistrust certain geo-political regions. The location of this data may be defined in the contract with the CSP (Cloud Service Provider), but how can the organization using the service be sure that the contract is being met? This question has led many organizations to be reluctant to use cloud services.

Using the cloud is not the only reason for this concern – my colleague Martin Kuppinger has previously blogged on this subject. Once information is outside of the system it is out of control and potentially lost somewhere in an information heaven or hell.

One approach to this problem is to encrypt the data so that if it moves outside of your control it is protected against unauthorized access. This can be straightforward encryption for structured application data, or structured encryption using private and public keys, as in some RMS systems, for unstructured data like documents. However, as soon as the data is decrypted the risk re-emerges. One approach to this could be to make use of “sticky access policies”.
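The “sticky access policy” idea can be sketched as policy metadata that travels with the data and must be satisfied before any access or decryption is granted. The following is a purely illustrative model; the names and the policy shape are invented for the sketch:

```python
# Illustrative model of a "sticky access policy": the policy travels with
# the data and is evaluated before access is granted. Names and the policy
# structure are invented for this sketch, not from any real RMS product.

def access_allowed(blob, requester):
    """Evaluate the policy attached to the data against the requester."""
    policy = blob["policy"]
    return (requester["region"] in policy["allowed_regions"]
            and requester["role"] in policy["allowed_roles"])

blob = {
    "data": b"<ciphertext>",  # the protected payload stays encrypted
    "policy": {"allowed_regions": {"EU"}, "allowed_roles": {"auditor"}},
}

print(access_allowed(blob, {"region": "EU", "role": "auditor"}))  # True
print(access_allowed(blob, {"region": "US", "role": "auditor"}))  # False
```

Because the policy is bound to the data rather than to the system holding it, the same check can in principle be enforced wherever the data travels.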

However, while these approaches may protect against leakage, they don’t let you ensure that your data is being processed in a trusted environment. What is needed is a way to control where your workload is run in a secure and trusted way. This control needs to be achieved in a way that doesn’t add extra security concerns – for example, allowing you to control where your data is must not allow an attacker to find your data more easily.

Two years ago NIST published a draft report IR 7904 Trusted Geolocation in the Cloud: Proof of Concept Implementation. The report describes the challenges that this poses and sets out a proposed approach that meets these challenges and which could be implemented as a proof of concept.   The US based cloud service provider Virtustream recently announced that its service now supports this capability. They state “This capability allows our customers to specify what data centre locations that their data can be hosted at and what data centres cannot host their data. This is programmatically managed with our xStream cloud orchestration application.”

The NIST document describes three stages that are needed in the implementation of this approach:

  1. Platform Attestation and Safer Hypervisor Launch. This ensures that the cloud workloads are run on trusted server platforms. To achieve this you need to:
    1. Configure a cloud server platform as being trusted.
    2. Before each hypervisor launch, verify (measure) the trustworthiness of the cloud server platform.
    3. During hypervisor execution, periodically audit the trustworthiness of the cloud server platform.
  2. Trust-Based Homogeneous Secure Migration. This stage allows cloud workloads to be migrated among homogeneous trusted server platforms within a cloud.
    1. Deploy workloads only to cloud servers with trusted platforms.
    2. Migrate workloads on trusted platforms to homogeneous cloud servers on trusted platforms; prohibit migration of workloads between trusted and untrusted servers.
  3. Trust-Based and Geolocation-Based Homogeneous Secure Migration. This stage allows cloud workloads to be migrated among homogeneous trusted server platforms within a cloud, taking into consideration geolocation restrictions.
    1. Have trusted geolocation information for each trusted platform instance.
    2. Provide configuration management and policy enforcement mechanisms for trusted platforms that include enforcement of geolocation restrictions.
    3. During hypervisor execution, periodically audit the geolocation of the cloud server platform against geolocation policy restrictions.
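The migration rules in stages 2 and 3 can be sketched as a simple policy check. The names and data shapes below are illustrative only; they are not taken from the NIST proof-of-concept implementation:

```python
# Sketch of the trust- and geolocation-based migration policy described in
# NIST IR 7904 (stages 2 and 3). Class and field names are invented.

class Server:
    def __init__(self, name, trusted, geolocation):
        self.name = name
        self.trusted = trusted          # result of platform attestation
        self.geolocation = geolocation  # attested location, e.g. "DE"

def may_migrate(source, target, allowed_locations):
    """A workload may move only between trusted platforms (stage 2)
    and only to a geolocation permitted by policy (stage 3)."""
    if not (source.trusted and target.trusted):
        return False  # never migrate between trusted and untrusted servers
    return target.geolocation in allowed_locations

src = Server("host-a", trusted=True, geolocation="DE")
ok = Server("host-b", trusted=True, geolocation="NL")
bad = Server("host-c", trusted=True, geolocation="US")

policy = {"DE", "NL"}  # e.g. an EU-only restriction
print(may_migrate(src, ok, policy))   # True
print(may_migrate(src, bad, policy))  # False
```

In the NIST approach the `trusted` and `geolocation` values would come from hardware-rooted attestation rather than being asserted in software, which is what makes the check verifiable.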
This is an interesting initiative by Virtustream and, since it is implemented through their xStream software which is used by other CSPs, it is to be hoped that this kind of functionality will be more widely offered. When using a cloud service a cloud customer has to trust the CSP. KuppingerCole’s advice is trust but verify.  This approach has the potential to allow verification by the customer.


A Haven of Trust in the Cloud?

Nov 11, 2014 by Mike Small

In September a survey was published in Dynamic CISO showing that “72% of Businesses Don’t Trust Cloud Vendors to Obey Data Protection Laws and Regulations”.  Given this lack of trust by their customers, what can cloud service vendors do?

When an organization stores data on its own computers, it believes that it can control who can access that data. This belief may be misplaced, given the number of reports of data breaches from on-premise systems, but most organizations trust themselves more than they trust others.  When the organization stores data in the cloud, it has to trust the cloud provider, the cloud provider’s operations staff and the legal authorities with jurisdiction over the cloud provider’s computers. This creates many serious concerns about moving applications and data to the cloud, especially in Europe and in particular in geographies like Germany where there are very strong data protection laws.

One approach is to build your own cloud, where you have physical control over the technology but can exploit some of the flexibility that a cloud service provides. This is the approach being promoted by Microsoft.  In October Microsoft, in conjunction with Dell, announced their “Cloud Platform System”.  This is effectively a way for an organization to deploy Dell servers running the Microsoft Azure software stack on premise.  Using this platform, an organization can build and deploy on-premise applications that are Azure cloud ready.  At the same time it can see for itself what goes on “under the hood”.  Then, when the organization has built enough trust, or when it needs more capacity, it can easily extend the existing workload into the cloud.  This approach is not unique to Microsoft – other cloud vendors also offer products that can be deployed on premise where there are specific needs.

In the longer term Microsoft researchers are working to create what is being described as a “Haven in the Cloud”.  This was described in a paper at the 11th USENIX Symposium on Operating Systems Design and Implementation.  In this paper, Baumann and his colleagues offer a concept they call “shielded execution,” which protects the confidentiality and the integrity of a program, as well as the associated data from the platform on which it runs—the cloud operator’s operating system, administrative software, and firmware. They claim to have shown for the first time that it is possible to store data and perform computation in the cloud with equivalent trust to local computing.

The Haven prototype uses the hardware protection proposed in Intel’s Software Guard Extensions (SGX)—a set of CPU instructions that can be used by applications to isolate code and data securely, enabling protected memory and execution. It addresses the challenges of executing unmodified legacy binaries and protecting them from a malicious host.  It is based on “Drawbridge” another piece of Microsoft research that is a new kind of virtual-machine container.

The question of trust in cloud services remains an important inhibitor to their adoption. It is good to see that vendors are taking these concerns seriously and working to provide solutions.  Technology is an important component of the solution, but it is not, in itself, sufficient.  In general, computers do not breach data by themselves; human interactions play an important part.  The need for cloud services to support better information stewardship, and for cloud service providers to create an information stewardship culture, is also critical to creating trust in their services.  From the perspective of the cloud service customer, my advice is always trust but verify.


CESG Draft Cloud Security Principles and Guidelines

Sep 27, 2014 by Mike Small

UK CESG, the definitive voice on the technical aspects of Information Security in UK Government, has published draft versions of guidance for “public sector organizations who are considering using cloud services for handling OFFICIAL information”. (Note that the guidelines are still at a draft stage (BETA) and the CESG is requesting comments.)  There are already many standards that exist or are being developed around the security of cloud services (see: Executive View: Cloud Standards Cross Reference – 71124), so why is this interesting?

Firstly, there is an implied prerequisite that the information being held or processed has been classified as OFFICIAL. KuppingerCole advice is very clear: the first step to cloud security is to understand the risk by considering the business impact of loss or compromise of data.  CESG publishes a clear definition for OFFICIAL, which is the lowest level of classification and covers “ALL routine public sector business, operations and services”.  To translate this into business terms, the guidelines are meant for cloud services handling day-to-day operational services and data.

Secondly, the guidelines are simple, clear and concise, and simple is more likely to be successful than complex. There are 14 principles that apply to any organization using cloud services.  The principles are summarized as follows:

  1. Protect data in transit
  2. Protect stored data against tampering, loss, damage or seizure. This includes consideration of legal jurisdiction as well as sanitization of deleted data.
  3. A cloud consumer’s service and data should be protected against the actions of others.
  4. The CSP (service provider) should have and implement a security governance framework.
  5. The CSP should have processes and procedures to ensure the operational security of the service.
  6. CSP staff should be security screened and trained in the security aspects of their role.
  7. Services should be designed and developed in a way that identifies and mitigates security threats.
  8. The service supply chain should support the principles.
  9. Service consumers should be provided with secure management tools for the service.
  10. Access to the service should be limited to authenticated and authorized individuals.
  11. External interfaces should be protected.
  12. CSP administration processes should be designed to mitigate risk of privilege abuse.
  13. Consumers of the service should be provided with the audit records they need to monitor their access and the data.
  14. Consumers have responsibilities to ensure the security of the service and their data.
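As an illustrative sketch, a cloud consumer might track its assessment of a provider against these principles with a simple checklist. This structure is invented for the example and is not part of the CESG guidance:

```python
# Hypothetical checklist for tracking a provider assessment against the
# 14 CESG principles; not part of the CESG guidance itself. The principle
# list is abbreviated here, using the summaries from the article.

PRINCIPLES = {
    1: "Protect data in transit",
    2: "Protect stored data",
    10: "Access limited to authenticated and authorized individuals",
    11: "External interfaces protected",
}

def unmet(assessment):
    """Return the principle numbers not yet satisfied (or not assessed)."""
    return sorted(n for n in PRINCIPLES if not assessment.get(n, False))

# principle 10 failed the review; principle 11 has not been assessed yet
assessment = {1: True, 2: True, 10: False}
print(unmet(assessment))  # [10, 11]
```

Treating an unassessed principle the same as a failed one keeps the gap list conservative, which suits a risk-based review.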
Thirdly, there is detailed implementation advice for each of these principles.  As well as providing technical details for each principle, the guidance describes six ways in which the customer can obtain assurance.  These assurance approaches can be used in combination to increase confidence.  The approaches are:
  1. Service provider assertions – this relies upon the honesty, accuracy and completeness of the information from the service provider.
  2. Contractual commitment by the service provider.
  3. Review by an independent third party to confirm the service provider’s assertions.
  4. Independent testing to demonstrate that controls are correctly implemented and objectives are met in practice. Ideally this and 3 above should be carried out to a recognised standard. (Note that there are specific UK government standards here but for most commercial organizations these standards would include ISO/IEC 27001, SOC attestations to AICPA SSAE No. 16/ ISAE No. 3402 and the emerging CSA Open Certification Framework)
  5. Assurance in the service design - A qualified security architect is involved in the design or review of the service architecture.
  6. Independent assurance in the components of a service (such as the products, services, and individuals which a service uses).
These guidelines provide a useful addition to the advice that is available around the security of cloud services.  They provide a set of simple principles that are easy to understand.  These principles are backed up with detailed technical advice on their implementation and assurance.  Finally they take a risk based approach where the consumer needs to classify the data and services in terms of their business impact.

KuppingerCole has helped major European organizations to successfully understand and manage the real risks associated with cloud computing. We offer research and services to help cloud service providers, cloud security tool vendors, and end user organizations.  To learn more about how we can help your organization, just contact sales@kuppingercole.com.


Microsoft OneDrive file sync problems

Sep 01, 2014 by Mike Small

A number of users of Microsoft’s OneDrive cloud storage system have reported problems on the Microsoft community relating to synchronizing files between devices. So far, I have not seen an official response from Microsoft. This can be very disconcerting, so here are some suggestions for affected users. These worked for me, but I can offer no cast-iron guarantees.

What is the problem? It appears that files created on one device are synced to another device in a corrupt state. This only seems to affect Microsoft Office files (Word, Excel, PowerPoint etc.) that have been created or updated since around August 27th. It does not appear to affect other types of files, such as .pdf, .jpg and .zip. When the user tries to access the corrupt file, they get a message of the form “We’re sorry, we can’t open the <file> because we found a problem with its contents”.

This problem does not affect every device, but it can be very disconcerting when it happens to you! The good news is that the data appears to be correct on the OneDrive cloud and – if you are careful – you can retrieve it.

Have I got the problem? Here is a simple test that will allow you to see if you have the problem on your device:

  1. Create a simple Microsoft Office file and save it on the local files store of the device. Do not save it on the OneDrive system.
  2. Log onto OneDrive https://onedrive.live.com/ using a browser and upload the file to a folder on your OneDrive.
  3. Check the synced copy of the file downloaded by the OneDrive App onto your device. If the synced file is corrupted, you have the problem!
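The manual test above can be automated by comparing a hash of the original file with the synced copy. A minimal sketch; the file paths and names are invented for the demo:

```python
# Sketch: detect a corrupted synced copy by comparing SHA-256 hashes of
# the original file and the copy. Paths below are invented for the demo.
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Hash a file incrementally so large files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_corrupted(original, synced):
    """True if the synced copy no longer matches the original."""
    return sha256_of(original) != sha256_of(synced)

# demo with two throwaway files standing in for local and synced copies
with tempfile.TemporaryDirectory() as d:
    local = Path(d) / "report.docx"
    synced = Path(d) / "report_synced.docx"
    local.write_bytes(b"original contents")
    synced.write_bytes(b"corrupt\x00contents")
    print(is_corrupted(local, local))   # False
    print(is_corrupted(local, synced))  # True
```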
What can I do? Do not panic – the data seems to be OK on the OneDrive cloud. Here is how I was able to get the data back onto my device:
  1. Log onto OneDrive https://onedrive.live.com/ using a browser and download the file to your device - replace the corrupt copy
  2. Do NOT delete the corrupt file on your device - this will send the corrupt version to the recycle bin. It will also cause the deletion of the good version on other devices.
  3. It is always a good idea to run a complete malware scan on your devices. If you have not done so recently, now is a very good time. I did that but no threats were detected.
  4. Several people, including me, have followed the advice on how to troubleshoot sync problems published by Microsoft – but this did not work for them or for me.
  5. I did a complete factory reset on my Surface RT – this did not help. Many other people have tried this also to no avail.
Is there a work around? I have not yet seen a formal response from Microsoft, so here are some things that all worked for me:
  1. Accept the problem and whenever you find a corrupt file perform a manual download as described above.
  2. Protect your Office files using a password – this caused the files to be encrypted and it appears that password protected files are not corrupted.  In any case KuppingerCole recommends that information held in cloud storage should be encrypted.
  3. Use WinZip to zip files that are being changed. It seems that .zip files are not being corrupted.
  4. Use some other cloud storage system or a USB to share these files.
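Workaround 3 can be scripted. A minimal sketch using Python’s standard zipfile module; the paths and file names are invented for the example:

```python
# Sketch of workaround 3: store a file inside a .zip before it lands in
# the synced folder, since .zip files did not appear to be affected.
# Paths and names below are invented for the example.
import os
import tempfile
import zipfile
from pathlib import Path

def zip_for_sync(src_path, dest_dir):
    """Write src_path into a .zip in dest_dir and return the zip's path."""
    zip_path = os.path.join(dest_dir, Path(src_path).name + ".zip")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_path, arcname=Path(src_path).name)
    return zip_path

# demo: zip a stand-in document and check the archive round-trips
with tempfile.TemporaryDirectory() as d:
    doc = Path(d) / "report.docx"
    doc.write_bytes(b"document contents")
    archive = zip_for_sync(str(doc), d)
    with zipfile.ZipFile(archive) as zf:
        print(zf.read("report.docx") == doc.read_bytes())  # True
```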
This example illustrates some of the downsides of using a cloud service. Cloud services are very convenient when they work, but when they don’t work you may have very little control over the process to fix the problem. You are completely in the hands of the CSP (Cloud Service Provider). If you are using a service for business, access to the data you are entrusting to the CSP may be critical to your business operations. One contributor to the Microsoft support community described how, being unable to work, he was earning no pay – a graphic illustration of the problem.

KuppingerCole can offer research, advice and services relating to securely using the cloud. In London on October 7th KuppingerCole will hold a Leadership Seminar on Risk and Reward from the Cloud and the Internet of Things. Attend this seminar to find out how to manage these kinds of problems for your organization.

Update September 3rd, 2014

An update – the program manager at OneDrive (Arcadiy K) responded to the Microsoft Community and apologized.

“We’ve found the cause of the issue and we believe we have made sure that no new files will be affected. Any files you sync moving forward should work fine and you should no longer encounter the corruption issues described in this thread. Please let us know if you find otherwise.”

I have tested this on my Windows RT 8.1 device and can confirm that on this device it is fixed. Interestingly, there have been no Microsoft updates (or any other changes except a virus signature update) to my device.

Microsoft has just announced the rollup update for August 2014. Included among the fixed issues is: “August 2014 OneDrive reliability update for Windows RT 8.1 and Windows 8.1”.

Some folk are still having problems trying to clean up the mess from the previous errors. I would advise reading the thread on the support forum for suggestions on how to recover from these.


Cloud Provider Assurance

Aug 05, 2014 by Mike Small

Using the cloud involves an element of trust between the consumer and the provider of a cloud service; however, it is vital to verify that this trust is well founded. Assurance is the process that provides this verification. This article summarizes the steps a cloud customer needs to take to assure that a cloud service provides what is needed and what was agreed.

The first step towards assuring a cloud service is to understand the business requirements for it. The needs for cost, compliance and security follow directly from these requirements. There is no absolute assurance level for a cloud service – it needs to be just as secure, compliant and cost effective as dictated by the business needs – no more and no less.

The needs for security and compliance depend upon the kind of data and applications being moved into the cloud. It is important to classify this data and any applications in terms of their sensitivity and regulatory requirements. This helps the procurement process by setting many of the major parameters for the cloud service, as well as the needs for monitoring and assurance. Look at Advisory Note: From Data Leakage Prevention (DLP) to Information Stewardship – 70587.

Use a standard process for selecting cloud services that is fast, simple, reliable, standardized, risk-oriented and comprehensive. Without this, there will be a temptation for lines of business to acquire cloud services directly without fully considering the needs for security, compliance and assurance. For more information on this aspect see Advisory Note: Selecting your cloud provider - 70742.

Take care to manage the contract with the cloud service provider. An article on negotiating cloud contracts from Queen Mary University of London provides a comprehensive list of the concerns of organizations adopting the cloud and a detailed analysis of cloud contract terms. According to this article, many of the contracts studied provided very limited liability, inappropriate SLAs (Service Level Agreements), and a risk of contractual lock in. See also - Advisory Note: Avoiding Lock-in and Availability Risks in the Cloud - 70171.

Look for compliance with standards; a cloud service may have significant proprietary content and this can also make the costs of changing provider high. Executive View: Cloud Standards Cross Reference – 71124 provides advice on this.

You can outsource the processing, but you can’t outsource responsibility – make sure that you understand how responsibilities are divided between your organization and the CSP. For example, under EU Data Protection laws, the cloud provider is usually the "data processor" and the cloud customer is the "data controller". Remember that the "data controller" can be held responsible for breaches of privacy by a "data processor".

Independent certification is the best way to verify the claims made by a CSP. Certification of the service to ISO/IEC 27001 is a mandatory requirement. However, it is important to verify that what is certified is relevant to your needs. For a complete description of how to assure cloud services in your organization see Advisory Note: Cloud Provider Assurance - 70586.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


EU Guidelines for Service Level Agreements for Cloud Computing

Jul 03, 2014 by Mike Small

In a press release on June 26th, the European Commission announced the publication of new guidelines to “help EU businesses use the Cloud”.  These guidelines have been developed by a Cloud Select Industry Group as part of the Commission’s European Cloud Strategy to increase trust in these services.  The guidelines cover SLAs (Service Level Agreements) for cloud services.  In KuppingerCole’s opinion these guidelines are a good start, but they are not a complete answer to the concerns of individuals and businesses choosing to use cloud services.

Cloud services are important as they provide a way for individuals and businesses to access IT applications and infrastructure in a flexible way and without the need for large up front capital investment.   This makes it possible for new businesses to minimize the risk of testing new products and for existing businesses to reduce the cost of running core IT services.  It allows individuals to access a range of IT services for free or at minimal cost.

The cost model for cloud services is based on two pillars: the service is standardized and offered to the customer on a take-it-or-leave-it basis, and the cloud service provider can exploit the cost savings that accrue from the massive scale of their service.  In the case of services offered to individuals there is a third pillar: the cloud service provider can exploit or sell information gathered about the individual users in exchange for providing the service.

Since the service offered is not usually open to negotiation, it is important that its definition is clear enough to enable the potential customer to make a real comparison between services offered by different providers.  This definition should also be transparent about how the service provider handles and uses data stored in, or collected by, the service.  This is especially important because many kinds of data are subject to laws and regulations, and the customer needs to be able to verify that the data for which they are responsible is being handled appropriately.  In addition, the individual user of a service needs to understand how data collected about them will be used.

These new guidelines specify what a cloud SLA should cover, but not what the service level should be.  They provide a detailed vocabulary with definitions of the various terms used in SLAs, and they suggest relevant SLOs (Service Level Objectives) for each aspect of the service.  SLOs are provided for the following major areas of a cloud service:

  • The performance of the service, including: availability, response, capacity, capability, support and reversibility. This last aspect covers the processes involved when the service is terminated. It is important because a key concern is the return of the customer’s data when the service ends, together with guarantees about the erasure of that data.
  • The security of the service including: its reliability, authentication and authorization, cryptography, incident management, logging and monitoring, auditing and verification, vulnerability management and service governance.
  • Data management, including: data classification, data mirroring, backup and restore, data lifecycle and data portability. The data lifecycle includes an SLO for “data deletion type”: this should specify the quality of the data deletion, ranging from weak to strong sanitization (as specified in NIST SP 800-88), where the data cannot easily be recovered.
  • Personal data protection: this focuses on the cases where the cloud service provider acts as a “data processor” for the customer, who is the “data controller”. It covers codes of conduct and certification mechanisms; data minimisation; use, retention and disclosure; openness, transparency and notice; accountability; geographic location; and intervenability.
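One benefit of a common SLO vocabulary is that offerings can be compared field by field. The sketch below is purely illustrative: the field names are my own shorthand for a few of the areas above, not the guidelines’ official terms.

```python
from dataclasses import dataclass

@dataclass
class CloudSLA:
    """A small, illustrative subset of the SLO areas described above."""
    provider: str
    availability_pct: float   # performance: availability SLO
    data_deletion: str        # data management: "weak" or "strong" sanitization
    data_location: str        # personal data protection: geographic location
    log_retention_days: int   # security: logging and monitoring

def compare(a: CloudSLA, b: CloudSLA) -> list[str]:
    """Return the SLO fields on which two offerings differ."""
    return [f for f in ("availability_pct", "data_deletion",
                        "data_location", "log_retention_days")
            if getattr(a, f) != getattr(b, f)]

sla_a = CloudSLA("Provider A", 99.95, "strong", "EU", 365)
sla_b = CloudSLA("Provider B", 99.90, "weak", "US", 90)
print(compare(sla_a, sla_b))
```

With a shared vocabulary, this kind of side-by-side comparison becomes possible; without it, each provider’s terms must be interpreted individually.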
These guidelines are a good start but not a complete answer to the concerns of individuals and businesses choosing to use cloud services. They provide a common set of areas that a cloud SLA should cover and a common set of terms that can be used. However, the definition of the objectives in a standard, measurable way still falls short; it leaves too much “wriggle room” for the cloud provider. More detailed advice on what to measure in cloud contracts, and how to measure it, is given in ENISA’s Procure Secure.
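As an example of the kind of concrete measurement this advice points toward, here is a minimal sketch (my own, not taken from either document) of checking a measured availability figure against an SLO target:

```python
def measured_availability(total_minutes: int, downtime_minutes: int) -> float:
    """Availability over a measurement period, as a percentage."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# A 30-day month with 90 minutes of recorded downtime:
month = 30 * 24 * 60
availability = measured_availability(month, 90)
print(f"{availability:.3f}%")    # → 99.792%
print(availability >= 99.9)      # does it meet a 99.9% target? → False
```

The point is that “availability” only becomes enforceable once the measurement period, what counts as downtime, and the target are all pinned down in the contract.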

It is good that the guidelines distinguish between the legal contractual aspects and the technical service definition. However, the SLOs cover areas of data privacy where there is an essential overlap, because of the legal obligations on the cloud customer when they use the cloud service to process data subject to regulations or laws. Section 6.4 covers the contentious area of disclosure of personal data to law enforcement authorities and suggests the objectives should include the number of disclosures made over a period of time as well as the number notified. This will not be sufficient to allay the significant concerns of European organizations using non-EU-based cloud service providers.

KuppingerCole has helped major European organizations to successfully understand and manage the real risks associated with cloud computing. We offer research and services to help cloud service providers, cloud security tool vendors, and end-user organizations. To learn more about how we can help your organization, contact sales@kuppingercole.com.


AWS: Great Security but can you Trust a US Owned Cloud Service?

May 30, 2014 by Mike Small

Cloud computing provides an unparalleled opportunity for new businesses to emerge and for existing businesses to reduce costs and improve the services they offer their customers. However, the revelations of Snowden and the continuing disclosure of state-sponsored interception and hacking undermine confidence in cloud service providers. In this environment CSPs need to go the extra mile to prove that their services are trustworthy.

In general there are two kinds of customers adopting cloud computing. The first is the so-called “born on the cloud” customer: new businesses that depend upon IT but have no wish to make large capital investments in it. The second is organizations that already use IT in house and are creating new IT applications in the cloud or moving existing ones to it.

These two kinds of customers have different sets of risks to manage. For the born-on-the-cloud business the biggest risk is whether or not it will take off; conventional IT security risks are important but not crucial (although this may prove to be a mistake in the long run). Organizations moving to the cloud, however, may already have invested heavily in IT to ensure information security, for compliance, or to protect intellectual property, and for these organizations cloud security and governance are critical concerns. From the announcements it appears that AWS is now working to attract enterprise customers that are moving to the cloud.

At their event in London on April 28th, 2014, AWS produced an impressive list of customers that included start-ups, enterprises and public sector organizations. What was new was the list of enterprises moving their IT entirely to the cloud; these included an Australian bank and a German hotel chain. To attract and keep these kinds of customers AWS needs to demonstrate the functionality, security and governance of its offering, as well as a competitive price.

AWS claims a high level of IT security and governance for its cloud services, and these claims are backed by independent certification. AWS security principles and processes are described in a white paper. In June 2013, KuppingerCole published an Executive View on this: Amazon Web Services – Security and Assurance – 70779. Many existing AWS features are of particular interest to enterprises, including:

  • The ability to use a dedicated network connection from the enterprise to AWS using standard 802.1q VLANs.
  • A Virtual Private Cloud - a logically isolated section of the AWS Cloud for the enterprise’s AWS resources.
  • Control of access to the enterprise’s AWS resources based on the enterprise Active Directory, using Active Directory Federation Services (ADFS).
  • Data encryption using Amazon Cloud HSM – which allows the enterprise to retain control over the encryption keys.
  • Control of the geography in which the enterprise data is held and processed.
Since then AWS has added AWS CloudTrail, a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the call, the source IP address of the caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail you can get a history of AWS API calls for your account, including calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). This API call history enables security analysis, resource change tracking, and compliance auditing.
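To make this concrete, the sketch below parses a trimmed log entry of the kind CloudTrail delivers. The field names follow CloudTrail’s published log format, but the record itself and its values are invented for illustration.

```python
import json

# A trimmed, invented CloudTrail record: who called which API, when,
# and from where.
record = json.loads("""
{
  "eventTime": "2014-05-30T10:15:00Z",
  "eventName": "RunInstances",
  "sourceIPAddress": "192.0.2.1",
  "userIdentity": {"type": "IAMUser", "userName": "alice"},
  "requestParameters": {"instanceType": "m3.medium"}
}
""")

# Extract the fields described above: caller identity, API call,
# time, and source IP address.
summary = (record["userIdentity"]["userName"],
           record["eventName"],
           record["eventTime"],
           record["sourceIPAddress"])
print(summary)
```

A compliance team could run this kind of extraction over the delivered log files to answer the basic audit question of who did what, and when.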

An organization adopting the cloud needs to balance the risks against the rewards. Information security and compliance are the main risks holding enterprises back from cloud adoption. AWS claims a high level of security, and these claims are backed by independent audits; however, there is still the problem of trust. Snowden’s revelations of the extent to which the NSA was intercepting communications have made many organizations wary of US-based cloud services. The US government’s unwillingness to permit organizations to publish sufficient data relating to Foreign Intelligence Surveillance Act (FISA) orders added to these concerns. (However, in January 2014 the Obama administration reached a deal allowing the disclosure of more information on the customer data companies are compelled to share with the US government, albeit with some delay.)

The extent to which nation states are eavesdropping on or hacking into commercial enterprises (US justice department charges Chinese with hacking) has added to this concern.

While this may seem unfair on AWS, many European enterprises are choosing not to put business-critical applications or confidential data into US-managed cloud services. Addressing these concerns will be difficult. AWS CTO Werner Vogels was recently featured in an article in the Guardian newspaper, in which he writes: “Another core value is putting data protection, ownership, and control, in the hands of cloud users. It is essential that customers own and control their data at all times.” KuppingerCole agrees with this sentiment, but cloud service providers will need to go the extra mile to prove that their services, their employees and their infrastructure cannot be suborned by national interests or national agencies.


IBM’s Software Defined Environment

Apr 08, 2014 by Mike Small

In IBM’s view the kinds of IT applications that organizations create are changing from internal-facing to external-facing systems. IBM calls these “systems of record” and “systems of engagement” respectively. Systems of record are the traditional applications that ensure the internal aspects of the business run smoothly and the organization is financially well governed. Systems of engagement exploit the new wave of technology being used by customers and partners, in the form of social and mobile computing. In IBM’s opinion a new approach to IT is needed to cater for this change, which IBM calls the SDE (Software Defined Environment).

According to IBM, these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns. They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different from that for systems of record. The development methodology is incremental and highly responsive to user feedback. Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization. Exploitation of these applications requires emerging technologies, such as Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud. IBM also offers something it calls the SDE (Software Defined Environment). IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability. Once these business rules are in place, resources are orchestrated by patterns: best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach, abstracted and virtualized IT infrastructure resources are managed by software via API invocations. Applications automatically define their infrastructure requirements, configuration and service level expectations. The developer, the people deploying the service and the IT service provider are all taken into account by the SDE.

This is achieved by building the IBM SDE on software and standards from the OpenStack Foundation, of which IBM is a member. IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software, including drivers for IBM storage devices, PowerVM, KVM and IBM network devices. IBM has also included some “added value” functionality: management API additions, scheduler enhancements, management console GUI additions, and a simplified install. Since the IBM SmartCloud offerings are also based on OpenStack, this makes cloud bursting into the IBM SmartCloud (or any other cloud based on OpenStack) easier, except where there is a dependency on the added-value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for the placement of workloads. The supported policies make it possible to place workloads in a wide variety of ways, including: packing workloads onto the fewest physical servers or spreading them across several; load balancing and memory balancing; and keeping workloads physically close or physically separate.
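As a toy illustration of the difference between “pack” and “spread” policies (my own sketch, not IBM’s scheduler), consider placing workloads of a given size onto hosts of fixed capacity:

```python
def place(workloads, hosts, capacity, policy="pack"):
    """Assign each workload (a size) to a host, according to policy.

    "pack": first-fit onto the fewest hosts; "spread": always pick the
    least-loaded host. Returns {host: [workload sizes]}.
    """
    load = {h: 0 for h in hosts}
    placement = {h: [] for h in hosts}
    for w in workloads:
        if policy == "pack":
            # first host with enough remaining capacity
            host = next(h for h in hosts if load[h] + w <= capacity)
        else:  # "spread"
            host = min(hosts, key=lambda h: load[h])
        load[host] += w
        placement[host].append(w)
    return placement

hosts = ["h1", "h2", "h3"]
print(place([2, 2, 2], hosts, capacity=8, policy="pack"))    # all on h1
print(place([2, 2, 2], hosts, capacity=8, policy="spread"))  # one per host
```

Packing favours consolidation (fewer machines powered on); spreading favours resilience and headroom. The real scheduler adds further policies, such as physical affinity and anti-affinity, on top of this basic choice.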

IBM sees organizations moving to SDEs incrementally rather than in a big-bang approach. The stages it sees are virtualization, elastic data scaling, elastic transaction scaling, policy-based optimization and, finally, application-aware infrastructure.

In KuppingerCole’s opinion SDCI (Software Defined Computing Infrastructure) is the next big thing; Martin Kuppinger wrote about this at the end of 2013. IBM’s SDE fits into this model and has the potential to allow end-user organizations to make better use of their existing IT infrastructure and to provide greater flexibility to meet changing business needs. It is good that IBM’s SDE is based on standards; however, there is still a risk of lock-in, since the standards in this area are incomplete and still emerging. My colleague Rob Newby has also written about the changes that are needed for organizations to successfully adopt SDCI. In addition, it will require a significant measure of technical expertise to implement in full.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating Cloud Standards Jungle at EIC May 12th to 16th in Munich.


Author info

Mike Small
Fellow Analyst
© 2003-2015 KuppingerCole