Blog posts by Mike Small
Oracle and Salesforce.com CEOs, Larry Ellison and Marc Benioff, hosted a briefing call on June 27th to announce a partnership between these two highly successful companies. What does this partnership cover and what does it mean?
Salesforce.com is built on Oracle technology and so is very dependent upon Oracle. Marc Benioff confirmed that Salesforce has confidence in the latest releases from Oracle, including the Oracle 12c database, Oracle Linux and Oracle Exadata. Larry Ellison announced that this partnership will ensure out-of-the-box integration of Salesforce.com CRM with Oracle HCM and Oracle financial applications. However, there will be no cross-selling of each other’s products – each company’s sales force will continue to sell only its own products.
So what are the benefits for customers? It is difficult to quantify them, but qualitatively the integrations will be more reliable, more secure and better performing than ad hoc ones. At the moment organizations have to build their own integrations, which are costly to create and costly to maintain. This partnership should remove these costs and hence is good for the large number of organizations that are using, or will decide to use, Salesforce.com CRM together with the other Oracle applications.
It looks like Oracle has conceded that organizations which have adopted Salesforce.com CRM will not be persuaded to migrate to Oracle CRM. At the same time Oracle is assured of a significant continuing revenue stream for its products from Salesforce.com.
Salesforce.com has stated that its aim is to help organizations get closer to their customers, partners and associates, and that it is transforming itself from a CRM provider into a platform provider. So it would appear that it is not interested in competing with the other Oracle applications.
This is where it becomes interesting – on the call it was made clear that the partnership does not cover the platform. Both CEOs described the importance of a platform that will support the explosion of data from networked devices. However, both companies have their own evolving platforms to address that problem. So here the competition continues.
On June 24th, IBM announced that it is to acquire SoftLayer Technologies Inc. and at the same time announced the formation of a new Cloud Services division. Following the close of this acquisition the new division will combine SoftLayer with IBM SmartCloud into a global platform.
So what is special about SoftLayer, why is this important and what does it mean?
SoftLayer Technologies Inc., which was founded in 2005, has over 100,000 devices under management, which makes it one of the largest privately held computing infrastructure providers in the world. SoftLayer has redefined the delivery of cloud computing through its IMS (Infrastructure Management System). Most cloud providers deliver their compute services using virtualization, which keeps the customer one step removed from the underlying hardware. The CloudLayer IMS, which is around 3 million lines of code, makes it possible to offer raw hardware on demand as a pay-per-use cloud service. This has been very attractive for certain kinds of applications, for example gaming, that require very intensive compute or I/O performance. In effect this system makes it possible to offer anything now available in a data centre as a cloud service.
IBM is already one of the world’s leading cloud service providers, with cloud revenue expected to reach $7 billion annually by the end of 2015. The acquisition is intended to strengthen IBM’s existing SmartCloud portfolio by providing a broader range of choices to enterprises moving their workloads to the cloud, and to better meet the needs of those organizations that were founded on or in the cloud.
This acquisition extends the IBM SmartCloud beyond server and storage virtualization by making “bare metal” computing hardware available as a cloud service. It extends the way in which IBM will be able to deliver the kinds of services that it always delivered through its GTS organization. It confirms that the cloud is now a mainstream way of delivering IT services.
A word of caution concerns the proprietary nature of SoftLayer’s home-grown IMS. IBM has announced that it intends to expand SoftLayer’s cloud offerings to be OpenStack compliant, consistent with its SmartCloud portfolio and historic commitment to open standards. KuppingerCole believes that organizations using the cloud need to take care to avoid lock-in, and standards provide an important way to ensure this. It is important that IBM delivers on this commitment.
A recent report commissioned by CA Technologies Inc. looks at the growth of the use of cloud services and the evolving attitudes to their security. This report shows some interesting findings. For instance, Europe is catching up with the US, with “38% of the European respondents using cloud for two to three years”, whereas “55% of the companies in the US have been in the cloud for three or more years”.
This finding is confirmed by the recent announcement by salesforce.com that it has signed an agreement to establish a European data centre in the UK in 2014. According to Marc Benioff, Chairman and CEO of salesforce.com, “Europe was salesforce.com’s fastest growing region in our fiscal year 2013, delivering constant currency revenue growth of 38%”. The same press release includes a forecast from IDC “that Europe’s public cloud software market will grow three times faster than other IT segments, at a CAGR of 30% to reach €23.9 billion by 2017”.
One of the reasons for opening the European data centre, given by salesforce.com at their Cloudforce event in London on May 2nd, was to answer the security concerns of EU governments and organizations relating to the location of their data. While security concerns remain a key issue for organizations adopting the cloud, the CA Technologies Inc. report discusses the “Security Contradiction”. According to this report, “Ninety-eight percent of enterprises surveyed reported that the cloud met or exceeded their expectations for security”. At the KuppingerCole European Identity & Cloud Conference held in Munich May 14-17, 2013, one session given by a UK organization described how it had moved to the cloud primarily for security reasons.
So – according to these reports – it would seem that the cloud is blossoming in Europe and that customers believe that cloud providers are making good on their promises around security. However our advice remains “Trust but Verify” – using the cloud inherently involves an element of trust between the organization using the cloud service and the CSP (Cloud Service Provider). This trust must not be unconditional and it is vital to ensure that the trust can be verified. Organizations still need a fast, reliable and risk-based approach to selecting cloud services, as described in our Advisory Note: Selecting your cloud provider - 70742.
The past couple of weeks must have been an anxious time for the customers of the outsourcing service run by 2e2, which went into administration on January 29th. This impacted a range of organizations including hospitals. The good news today is that the Daisy Group plc. has been appointed to manage the 2e2 Data Centre business. Organizations are now almost totally dependent on their IT services to operate. It is tempting to think that outsourcing the service absolves you of any responsibility. This is not the case; an organization using a cloud service is still responsible for the continuity of its business. The lesson to be learned from this is that while organizations may hope for the best, they need to plan for the worst!
A previous example of the need for business continuity planning occurred some years ago. On March 29th, 2004 a fire in tunnels under the city of Manchester had a major impact on telecommunications in the North of England. Emergency services were hit and mobile phone services disrupted; it was estimated that 130,000 phone lines were affected. It was not until April 5th of that year that services were back to normal.
Most organizations depend heavily upon the public telephone network, and this network is normally one of the most reliable services – so how did they cope with this disruption? The organizations that had an up-to-date and tested disaster recovery plan (mostly the large ones) were able to continue their operations. The small organizations without a plan were badly hit.
Smaller organizations, ones that are not able to afford their own highly resilient data centres, should benefit the most from the resilience offered by the larger cloud service providers. However, as the example above illustrates, small organizations tend not to have a business continuity plan. In addition not all large organizations have included cloud services in their plan.
Organizations need to determine the business needs for the continuity of any services and data moved to the cloud. They should have policies, processes and procedures in place to ensure that the business requirements for business continuity are met. These policies and procedures involve not only the CSP, but also the customer as well as intermediate infrastructure such as telecommunications and power supplies. They should form part of a complete business continuity plan. Such a plan is part of the operations of what KuppingerCole defines as the “IT Management and Security” layer within the IT organization, which is described in the KuppingerCole Scenario: Understanding IT Service and Security Management – 70173.
Here are some points that need to be considered. For a more detailed view see KuppingerCole Advisory Note: Avoiding Lock-in and Availability Risks in the Cloud - 70171
End to End Infrastructure: Use of the Cloud depends upon the infrastructure being available from end to end. Not only do the equipment and services at the CSP have to be operational, but the network and the customer equipment also need to be available and working. Therefore the Cloud customer, as well as the CSP, needs to ensure the availability of components under their control as well as having appropriate contingency plans.
Service and Data Availability: the data or the service may become unavailable for many reasons. These include misconfigurations and bugs as well as hardware failures; in addition, data may be corrupted or erased. The CSP may offer several approaches to minimize the risk of data becoming unavailable. However – if timely access to the data is important – ensure that you understand the promised time to recovery. In some circumstances the Cloud customer may need to perform a backup themselves to ensure the required level of business continuity.
Theft or Seizure: The equipment that is used to provide the Cloud service may be stolen or seized by law enforcement because of the activities of co-tenants. These can both lead to a loss of availability of the Cloud service.
Supplier Failure: The cloud service may become unavailable due to the failure of the CSP or of one of their providers. The CSP may go out of business for many reasons, ranging from withdrawal from the market through to financial bankruptcy. The CSP may also outsource some of the services that it depends upon, and its own supply chain could fail with the failure of one of these providers. Whatever the reasons, the impact of this failure on the cloud customer could be very high.
Power Loss and Natural Disasters: The cloud service depends upon the availability of power for systems, as well as air-conditioning and other ancillary services for the data centre. An example of this was the lightning strike in Dublin that caused Amazon and Microsoft cloud services to go offline in 2011.
For more details on best practices for cloud computing, attend the European Identity & Cloud Conference held in Munich during May 2013. This will feature a one-day workshop on Cloud Provider Assurance. This workshop uses real-life scenarios to lead the participants through the steps necessary to assure that cloud services meet their organization’s business requirements.
KuppingerCole research confirms that “security, privacy and compliance issues are the major inhibitors preventing organizations from moving to a private cloud.” Our report on Cloud Provider Assurance provides information in depth on how to manage these issues. Here is a summary of our top ten tips on negotiating and assuring cloud services.
- Consistent IT governance is critical: The cloud is just an alternative way of obtaining IT services and, for most organizations, it will be only one component of the overall complex IT service infrastructure. IT Governance provides a way to manage, secure, integrate, orchestrate and assure services from diverse sources in a consistent and effective way.
- Adopt best practices that are relevant to your organization from one or more of the frameworks or industry standards that are available. These represent the combined knowledge and experience of the best brains in the industry. However – be selective – not everything will apply to your organization. Whatever standards you choose – select a CSP (Cloud Service Provider) that conforms to these standards.
- Understand the business requirements for the cloud service – security, privacy and compliance needs follow directly from these. There is no absolute level of assurance for a cloud service – it needs to be as secure, compliant and cost effective as dictated by the business needs – no more and no less.
- Implement a standard process for selecting cloud services: This should enable fast, simple, reliable, standardized, risk-oriented selection of cloud service providers. Without this there will be a temptation for lines of business to acquire cloud services directly without fully considering the needs for assurance.
- Manage Cloud Contracts – beware of CSP standard terms and conditions and consider carefully when to accept them. If the CSP standard contract satisfies the business needs – that is fine. If not, accept nothing less than you would from your in-house IT! If the CSP won’t negotiate, try going via an integrator.
- Classify data and applications in terms of their business impact, the sensitivity of the data and regulatory requirements. This helps the procurement process by setting many of the major parameters for the cloud service and the needs for monitoring and assurance in advance.
- Division of responsibilities: when adopting a cloud service make sure you understand what your responsibilities are as well as those of the CSP. For example, in most cases under European law, the organization using a cloud service is the “data controller” and remains responsible for personal data held in the cloud.
- Independent Certification of CSP: Look for regular independent certification that the service parameters which are relevant to your business need are being met. Typically external audits are only performed once or twice per annum and so whilst they are important they only provide snapshots of the service.
- Continuous Assurance: To provide continuous assurance of the cloud service, require the CSP to provide regular access to monitoring data that allows you to monitor performance against the service parameters.
- Trust but Verify - Using the cloud inherently involves an element of trust between the organization using the cloud service and CSP. However - this trust must not be unconditional and it is vital to ensure that the trust can be verified.
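The last two tips can be sketched in code. This is a minimal illustration of "trust but verify": checking CSP-reported monitoring data against the agreed service parameters. The metric names, thresholds and report format here are hypothetical illustrations, not any real provider's API.

```python
# Hypothetical SLA thresholds agreed with the CSP in the contract.
SLA_THRESHOLDS = {
    "availability_pct": 99.9,  # minimum agreed monthly availability
    "recovery_time_h": 4.0,    # maximum promised time to recovery (hours)
}

def verify_sla(reported: dict) -> list:
    """Return a list of SLA breaches found in the CSP's reported metrics."""
    breaches = []
    # Missing metrics are treated as breaches: absence of evidence is not assurance.
    if reported.get("availability_pct", 0.0) < SLA_THRESHOLDS["availability_pct"]:
        breaches.append("availability below agreed level")
    if reported.get("recovery_time_h", float("inf")) > SLA_THRESHOLDS["recovery_time_h"]:
        breaches.append("recovery time exceeds agreed maximum")
    return breaches

# Example: verify one monthly report from the provider.
monthly_report = {"availability_pct": 99.95, "recovery_time_h": 6.5}
print(verify_sla(monthly_report))  # ['recovery time exceeds agreed maximum']
```

The point is not the code itself but the discipline: the checks run on data the customer receives regularly, rather than relying on an annual audit snapshot.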
Was 2012 a big year for IT security breaches?
I don’t have quantitative information on exactly how many data breaches there were during 2012. However, during this period, there were many prosecutions, enforcement notices and monetary penalties issued by the ICO (UK Information Commissioner's Office). These included a record monetary penalty of £325,000 for a hospital in the UK where discs containing patient data were sold on the internet, a penalty of £150,000 for Greater Manchester Police where an officer lost a memory stick with unencrypted information relating to more than 1,000 people linked to serious crimes, and a penalty of £120,000 issued to a council where sensitive information about a child protection legal case was emailed to the wrong person. There have also been a number of cases of hacktivism and a worrying trend towards ransomware – an example being where extortionists encrypted patient data belonging to an Australian hospital and demanded $5,000 to restore access.
Does this mean that the IT security industry is losing the battle against the hackers?
In terms of IT security technology there is a continuing arms race. As new kinds of security are developed, the criminals find alternative tools, tactics and procedures to overcome them. This challenge needs to be considered against a wider scope than technology alone. As long as criminals can make money at what they consider to be an acceptable level of risk, they will continue. The challenges include the lack of consistent laws and enforcement across the globe and the ability of criminals to process and bank their ill-gotten gains. As an example of this, Sophos was able to trace the gang behind the “Koobface” malware, but there was no chance of being able to prosecute them in the UK.
What are the biggest IT security threats facing companies in 2013?
The single biggest threat is getting the owners and holders of information to recognize its value and their responsibilities. What is needed is a much greater degree of “information stewardship” to take appropriate care of information – to treat it like money. The examples from the ICO show that there are still too many organizations that fail to take adequate care of the information they hold. In addition, cyber criminals often seem to be better at recognising the value of information than its owners. The cyber criminals are evolving their tools, techniques and processes to focus their attacks on the highest value targets. So organizations need to guard against and prepare for these kinds of events. This means a change of culture as well as applying the best technology.
The KuppingerCole advisory note: From Data Leakage Prevention (DLP) to Information Stewardship – 70587 provides more details on this subject. This subject will also be covered at the European Identity & Cloud Conference held in Munich during May 2013.
Adopting cloud computing means moving from “hands on” management of IT services within the organization to “hands off” IT management using governance, service level agreements and contracts. This approach sits uneasily with many IT people whose education, training and experience are in the delivery of services rather than negotiation and governance. Nevertheless the IT department is an important player in ensuring that an organization gets what it needs from the cloud. IT Service and Security Management are key components of the KuppingerCole IT paradigm which identifies the important elements necessary to successfully adopting and assuring cloud services.
An interesting article on negotiating cloud contracts was recently published in the Stanford Technology Law Review. This article provides a comprehensive list of the concerns of organizations adopting the cloud and a detailed analysis of cloud contract terms. This article suggests that: “a multiplicity of approaches are emerging, rather than a de facto ‘cloud’ model, with market participants developing a range of cloud services with different contractual terms, priced at different levels, and embracing standards and certifications that aid legal certainty and compliance”.
According to this paper the most negotiated terms are:
- “provider liability,
- service level agreements,
- data protection and security,
- termination rights,
- unilateral amendments to service features,
- and intellectual property rights”
KuppingerCole research confirms that “Cloud security issues (84.4%) and Cloud privacy and compliance issues (84.9%) are the major inhibitors preventing organizations from moving to a private Cloud.” Our report on Cloud Provider Assurance also provides information on how to assure the technical elements of cloud services that underlie the concerns mentioned in the Stanford paper. In summary – it is important to follow the old Russian maxim, which was often quoted by President Ronald Reagan: “trust but verify”. Using the cloud inherently involves an element of trust between the consumer and the provider of the cloud service. However – this trust is not unconditional and it is essential to ensure that the trust can be verified.
The Stanford paper highlights the risk that end users within an organization will bypass internal governance and procurement processes and procure cloud services directly. It describes this as the “click through” trap. The KuppingerCole model for cloud service management emphasizes the need for a quick and user friendly process for requesting cloud based services and assuring that they meet the needs and the risk appetite of the organization. This process should be set up ahead of time in collaboration between all of the stakeholders including governance, risk, legal and procurement.
This process should:
- Identify the business requirements for the cloud based solution.
- Determine the security and governance needs based on these business requirements. Some applications will be more business critical than others.
- Develop scenarios to understand the security threats and weaknesses. Use these to determine the response to these risks in terms of requirements for controls and questions to be answered. Considering these risks may lead to the conclusion that the risk of moving to the Cloud is too high.
- Make clear which party (customer/provider) is responsible for all important aspects.
- Specify what measures are needed to confirm that the required service is being delivered and make sure that these are measured and action is taken.
Cloud computing provides organizations with an alternative way of obtaining IT services. However many organizations are reluctant to adopt the Cloud because of concerns over information security and loss of control. This presentation covers assurance approaches to managing the Cloud, including the CSA Cloud Controls Matrix, SSAE 16/ISAE 3402, BITS Shared Assessments and ISO 27001.
There is an old joke that circulated amongst IT professionals during the 1980s – it goes as follows. A man goes up to an ATM, puts his card in the machine and requests some cash. The machine accepts his card and PIN but doesn’t give out any cash. He goes into the bank and tells a cashier what has happened. The cashier replies – “that’s strange, because we just had brand new software installed this morning”. This joke is probably not funny if you bank with RBS in the UK.
I normally write about IT security issues – so why is this entry about managing change? Well – security is about confidentiality, integrity and AVAILABILITY. Good IT security ensures that you have access to the information that you are entitled to whenever and wherever you need it. One of the most frequent causes of non-availability is poorly managed change. In the world of software – a change is often a change for the worse.
The older the software system the more difficult it is to patch and most of the retail banking systems are very old. The people that originally wrote it may be long gone; the change you are applying is probably on top of many previous changes. You did your best and it looks like it should work but unfortunately you didn’t fully understand the complex interactions that now exist within the program. So you test it, and your test contains all the expected cases plus all the previously detected bugs that have been fixed. However these tests don’t include every possible case and so when it goes live – whoops the impossible happens and the system crashes. If you are lucky this unlikely event only causes minor damage. If you are unlucky – as seems to have been the case with the RBS systems – this unlikely event causes major damage. It becomes the nightmare of IT security: a low probability, high impact event.
Now you have to recover from the problem. Can you roll back the software to the last working version? Are you able to restart or re-run the failed transactions? How can you make sure that you don’t repeat the successfully processed transactions? You need to have planned for all of these contingencies BEFORE you applied the change. You need to have tested your plan BEFORE you applied the change.
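The contingency questions above can be sketched as a simple pre-change gate that refuses to proceed unless each contingency has been planned and tested beforehand. This is a minimal sketch: the contingency names are illustrative, not any real change-management tool's vocabulary.

```python
# Contingencies that must be planned and tested BEFORE the change is applied
# (hypothetical names, mirroring the questions in the text).
REQUIRED_CONTINGENCIES = {
    "rollback_to_last_working_version",
    "restart_or_rerun_failed_transactions",
    "no_repeat_of_processed_transactions",
}

def change_approved(tested: set) -> bool:
    """Approve the change only if every required contingency has been tested."""
    return REQUIRED_CONTINGENCIES.issubset(tested)

# A plan that only covers rollback is not enough to proceed.
print(change_approved({"rollback_to_last_working_version"}))  # False
```

The design point is that the gate is evaluated before the change, not after the failure, which is exactly when the RBS-style nightmare scenario has to be prevented.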
Now it may well be that RBS did all that it could and should have done – only a detailed investigation will reveal whether there were avoidable shortcomings. Nevertheless RBS’s experience should be a reminder to all of us in the IT industry to be careful about managing change to IT systems. It shows the need for IT professionals to really understand the impact they have on the business.
The fundamental role of IT within an organization is simple to describe: it must provide the IT services that the business requires in the way the business wants them – nothing more, nothing less. Unfortunately, many corporate IT departments tend to concentrate more on technology than on the needs of the business, and changing this is a major paradigm shift for many IT professionals. To explain this business-led approach to managing IT services, KuppingerCole has written a research note "The Future of IT Organizations".
If you were asked to think of an IT security firm, perhaps IBM would not be top of the list. However IBM has a significant set of products in this market and it manages the security of its customers’ outsourced and cloud systems, as well as that of its very large internal IT operations. Following the acquisition of Q1 Labs late last year, IBM is reorganizing to bring together all the security products under one division. Large companies are forever reorganizing, so why does this change matter? In short, it is important because it reflects the increasing level of cyber risk and the recognition of this risk within the boardrooms of the organizations that are customers of IBM.
Over the past 12 months there have been a number of widely reported cyber-attacks on large organizations, intended to steal information of significant value or to cause commercial damage. The organizations affected include Sony, whose PlayStation Network was targeted and the details of 77 million users compromised, and RSA, which offered to replace its SecurID tokens following a compromise of information relating to those tokens; and according to the Verizon 2012 Data Breach Investigations Report there has been a huge rise in politically motivated attacks. Even the head of MI5, the UK’s internal security agency, has said it is working to counter “astonishing” levels of cyber-attacks on UK industry. The trend identified in the Verizon report is a large increase in data breaches stemming from external agents. So is this a watershed for boardrooms to take an interest in cyber-security?
According to a study conducted using double-blind interviews by the IBM Centre for Applied Insights with 138 security leaders, “while many security organizations remain in crisis response mode, some security leaders have moved to take a more proactive position, taking steps to reduce future risk.” The study found that:
- Business leaders are increasingly concerned with IT security issues.
- Budgets are expected to increase.
- Attention is shifting towards risk management.
- External threats are the primary security challenge.
- Mobile security is a major focus.
The study also classified security leaders into three types:
- Influencers: those who have business influence and authority, and who rank themselves highly in maturity and preparedness.
- Protectors: those who recognize the importance of information security, but who lack the measurement insight and budget authority needed.
- Responders: those who do not have the resources or business influence to drive significant change.
So what about security products? Well, IBM has chosen to focus on the higher levels of IT security management rather than low-level threat protection. The rationale behind this is that threats to organizations are both targeted and persistent. If a threat is blocked in one way, the attacker will continue to look for other approaches that bypass the block. Therefore behavioural analysis of what is happening around and inside the organization’s network and systems is a better indicator of an attack in progress, and this often provides the security intelligence needed to counter these threats.
The other area that IBM has focussed on is mobile security. The increasing trend towards BYOD and the proliferation of tablets and other end user devices that can be connected to the corporate network has increased the risks of data loss. Although people value their smartphones, they are not careful with them. (According to a study by Plaxo, 19% of people reported that they had dropped their smartphone down a toilet!) When the device is lost, the data it contains is often more valuable than the device itself. In KuppingerCole’s opinion, BYOD brings many challenges and the key to mobile security is to start from a data-centric position rather than a device-centric one: understand what data you have and then make sure that you protect it properly. IBM say that their strategy in this area comes from “following the data” – if so, that is good news.
So – in summary – the risk of cyber-threats to organizations is increasing, and it is clear that IT security professionals need to do a better job of explaining these risks in business terms. KuppingerCole’s view is that IT organizations have to adapt to become much more business aware or they will fail. This includes, but is not limited to, security challenges. It is good to see IBM providing a lead in this area.