Blog posts by Mike Small
Adopting Cloud computing can save money, but you need to avoid the seven deadly sins.
The Cloud provides an increasingly popular way of procuring IT services that offers many benefits, including increased flexibility as well as reduced cost. It extends the spectrum of IT service delivery models beyond managed and hosted services to a form that is packaged and commoditized. However, many organizations are sleepwalking into the Cloud. Moving to the Cloud may outsource the provision of the IT service, but it does not outsource the customer's responsibilities. There are issues that may be forgotten or ignored when adopting Cloud computing.
In medieval times the Christian church created the concept of the seven deadly vices to explain the human weaknesses that lead to sins. These are: wrath, greed, sloth, pride, lust, envy and gluttony, sometimes known as the seven deadly sins. Of these vices, one above all can lead to problems with Cloud computing. The deadly vice of Cloud computing is sloth, which leads to inattention to details like:
- Not knowing you are using the Cloud: it is easy to buy a Cloud service using a credit card – your organization may be using the Cloud without you knowing it. When you buy a Cloud service that way it is likely that you have agreed to the terms and conditions set by the provider, and these may not be appropriate for your needs. You should ensure that there is a proper process for obtaining a Cloud service and that this is followed.
- Not assuring legal and regulatory compliance: many organizations have invested heavily to ensure that their internal IT systems comply with the legal and regulatory requirements for their type of business. You need to check that, if you move these systems into the Cloud, you will not lose this compliance.
- Not knowing what data is in the cloud: one of the key legal requirements for many organizations is compliance with data privacy laws. These mandate where personally identifiable data can be held and how it must be processed. If you don't know what data you are moving to the Cloud you could be in trouble. This problem has become more acute because of the explosion in the amount of unstructured data like spreadsheets, presentations and documents. It is essential that you identify and classify data you are moving to the Cloud to manage risks and ensure compliance.
- Not managing identity and access to the cloud: controlling who can access what is even more important when data and applications are accessed via the Internet. Managing identity and access remains the responsibility of the customer when the data and application are moved to the Cloud. The best way to achieve this is through identity federation based on standards like SAML and WS-Federation (implemented by products such as ADFS).
- Not managing business continuity and the cloud: organizations adopting the Cloud need to determine the business needs for continuity of any services and/or data being moved to the Cloud. To support this they should have policies, processes and procedures in place to ensure that these business requirements are met. These involve not only the Cloud Service Provider, but also the customer, as well as intermediate infrastructure such as telecommunications and power supplies.
- Becoming locked in to one provider: it is often claimed that the Cloud provides flexibility, but how easy is it to change Cloud Service Provider? There are a number of factors that can make changing provider difficult. There may be contractual costs incurred on termination of the service contract. The ownership of the data held in the Cloud may not be clear, and return of the data on termination of contract may be costly or slow. When data is returned it may not be in a form that can easily be used or migrated. Cloud services (built using Cloud Platforms, PaaS in particular) may be based on a proprietary architecture and interfaces, making it very difficult to migrate to another provider.
- Not managing your Cloud provider: you need to manage your Cloud provider just like any other outsourced IT service provider. This means defining and agreeing metrics via service level agreements and then making sure that these are achieved. Your customers may wish to perform an audit of the provider, but it may not be practical for the provider to allow every customer to perform their own audit. Certification of providers by a trusted third party is a way to satisfy this need. However, it is important to understand what these service organization controls (SOC) reports cover.
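The identity federation point above deserves a concrete illustration. Below is a minimal, hypothetical sketch of the kind of check a service provider performs on a SAML-style assertion: is the issuer trusted, and is the assertion still valid? Real deployments verify XML signatures with a proper SAML library; the element names here are simplified and the issuer URL is invented.

```python
# Illustrative sketch only: checking the basics of a SAML-style assertion.
# Real federation relies on signed XML and a dedicated SAML library.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.example.com</saml:Issuer>
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:Conditions NotOnOrAfter="2030-01-01T00:00:00Z"/>
</saml:Assertion>"""

def accept_assertion(xml_text, trusted_issuers):
    """Return the asserted user name, or None if the assertion is rejected."""
    root = ET.fromstring(xml_text)
    issuer = root.findtext("saml:Issuer", namespaces=NS)
    name_id = root.findtext("saml:Subject/saml:NameID", namespaces=NS)
    expiry = root.find("saml:Conditions", NS).get("NotOnOrAfter")
    expires = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
    if issuer not in trusted_issuers:
        return None  # assertion from an unknown identity provider
    if datetime.now(timezone.utc) >= expires:
        return None  # assertion has expired
    return name_id

print(accept_assertion(ASSERTION, {"https://idp.example.com"}))  # alice@example.com
```

The key point for the Cloud customer: deciding which issuers to trust, and keeping that list current, remains the customer's job even when the application lives in the Cloud.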
Is your location private? If you have installed an App on a smartphone it is almost certain that your location is being tracked. So should you care? Are you giving away details of your movements too cheaply? Is being able to track where your children are a benefit or a risk? To find the answers to these and other questions, on December 12th I attended “A Fine Balance 2011: Location and Cyber privacy in the digital age” sponsored by the UK Knowledge Transfer Network.
The title of this article is taken from the lyrics of a 1983 song by The Police that was used as the basis of a talk by Richard Hollis, CEO of Orthus and a director of ISACA. In his talk he explained the business value of geo-location information to increase revenue as well as to reduce cost, and the difficulty individuals have in opting out from having their location tracked. He gave a number of examples of the use of location data, including: a US car rental firm that adds an extra charge if the car has exceeded 79mph for a period longer than 2 minutes, and a French company that saved on fraudulent expense claims by tracking employees' locations. He also described how he discovered that his new bank debit card contained an RFID chip, allowing the bank to track his presence. When he enquired with all the major UK banks, he found that he was unable to opt out from this or to find a bank that didn't use this technology. Hollis believes that companies like Google have made billions of dollars from tracking where you went on the internet, and they expect to make more from tracking your physical location. The downside of this data is that it is valuable to criminals; for example, knowing you are not at home is valuable to thieves.
Stewart Room, a partner at Field Fisher Waterhouse LLP, outlined the legal basis for privacy. In Europe, the relevant legal framework is the data protection directive (95/46/EC). This applies where personal data are being processed as a result of the processing of location data. The e-privacy directive (2002/58/EC, as revised by 2009/136/EC) applies to the processing of base station data by public electronic communication services and networks (telecom operators).
Location data is defined in these directives as: “any data processed in an electronic communications network or by an electronic communications service, indicating the geographic position of the terminal equipment of a user of a publicly available electronic communications service”.
Location data is covered by general rules on data protection and can only be processed anonymously or with informed consent. But how informed is the consent that is given? Jonathan Bamford, Head of Strategic Liaison at the ICO, described an end user agreement for the use of an App that was over 10,000 words in length. He also reported that the EU Working Party set up under Article 29 of EU Directive 95/46/EC has published a document on this subject: Opinion 13/2011 on Geolocation services on smart mobile devices. He noted that this document states: “Typically, companies that provide location services and applications based on a combination of base station, GPS and WiFi data are information society services. As such they are explicitly excluded from the e-Privacy directive.” At the end of his presentation the audience was invited to vote on a number of issues, including what approach the ICO should take to deal with this emerging problem.
Prof Jonathan Raper then presented his vision for a location data broker. This would provide a service that would securely store data on the location and movements of individuals. It would then make this information available to other organizations only with the consent of the individuals concerned, and share any monetary value with them. It would also be able to provide confirmation of individuals' whereabouts in the case of disputes.
Chris Atkinson, from the UK Council for Child Internet Safety, then discussed safeguarding children's privacy in social media. She posed the question: “are children vulnerable innocents or tech-savvy natives?” In the UK, 50% of children aged 12-15 own a smartphone, compared to only 27% of adults. In the EU, one in five 9-12 year olds has a profile on Facebook, in spite of there being a requirement to be 13 years or older (due to US child protection laws). Most of these younger children do it with the help of their parents. 52% of 11-18 year olds are aware of geo-location services and 48% like their friends to know where they are.
At the end of this event I had more questions than answers. Geo-location information on individuals seems to be in widespread use. It is, for example, funding the development of Apps; people want the services provided by the Apps but would prefer not to pay for them directly. Online marketing is willing to pay to know where you are, and that is fine if it is done lawfully and transparently. I still worry that this geo-location data could be misused, and personally I prefer not to knowingly provide it.
For more debate on this subject why not attend the European Identity Conference on April 17-20 in Munich.
If you think that China only manufactures socks – read on to learn how Chinese software and European Cloud expertise plans to deliver ERP and CRM to mid-sized enterprises in EMEA.
On November 8th, 2011 the European IT services company ATOS and the Chinese software company UFIDA INTERNATIONAL HOLDINGS, LTD. announced the formation of a joint venture, YUNANO™, which will address the growing Cloud market in Europe and China, targeting mid-sized organizations.
UFIDA is a Chinese software company, registered in Shanghai, which was founded in 1988 and focuses on software for accounting, ERP, and CRM. It has around 12,000 employees and R&D centres in Beijing, Nanjing, Shanghai and Shenzhen. YUNANO, which in transliteration means “cloud & safe”, will be a software company with a focus on delivering ERP and CRM to mid-sized enterprises via Software as a Service. The software will be provided by UFIDA and the SaaS delivery will be provided by ATOS using the ATOS Cloud IaaS and PaaS platforms. The target market will be in EMEA and, since one major concern in this market is the location of data to comply with privacy rules, the ATOS infrastructure is located in Europe. The system integration and any process re-engineering necessary will be delivered through local VARs, which are currently being recruited.
Why is the focus on SaaS and mid-sized enterprises? This area is seen to have a very high growth rate. According to research carried out on behalf of the joint venture, in-house ERP in mid-sized enterprises has a forecast annual growth of only around 3%, while ERP via SaaS is forecast to grow at around 90% annually. This is because of the ease and convenience that the SaaS model delivers to mid-sized enterprises – many of which are subsidiaries of larger organizations and already have to contend with multiple complex systems.
So – does YUNANO intend to replace the likes of SAP and Oracle? The answer given is NO! Their focus is to help mid-sized organizations like L'Oreal that are strongly committed to SAP by providing a simple and easy-to-use solution to their needs without replacing the existing systems.
When will the solution be available? The target is for the CRM and ERP software to be localized in English, the initial VARs to have been recruited, and the first sales wins to have been achieved by the end of 2012. French and German localization should be available in late 2012 or early 2013.
Will this be a success? KuppingerCole will be monitoring this joint venture over the coming months and will provide updates.
Cloud computing provides organisations with an alternative way of obtaining information technology services and offers many benefits, including increased flexibility as well as cost reduction. But many organisations are reluctant to adopt the cloud because of concerns over information security and a loss of control over the way IT services are delivered. These fears have been exacerbated by recent events reported in the press, including outages by Amazon and the three-day loss of Blackberry services from RIM. So what approach can be taken to ensure that the benefits of the cloud outweigh the risks?
To understand the risks involved, it is important to understand that the cloud is not a single model. It covers a wide spectrum of services and delivery models, ranging from in-house virtual servers to software accessed by multiple organizations over the internet. The risks of the cloud depend upon both the service model and the delivery model adopted. When moving to the cloud it is important that the business requirements for the move are understood and that the service selected meets these needs. Taking a good governance approach is the key to safely embracing the benefits that the cloud provides. You must identify the business requirements for the solution. This seems obvious, but many organisations are using the cloud without knowing it.
It is wise to determine the service needs based on the business requirements. Some applications will be more business critical than others. Organisations must also develop scenarios to understand the security threats and weaknesses, and use these to determine the response to these risks in terms of requirements for controls and questions to be answered. Considering these risks may lead to the conclusion that the risk of moving to the cloud is too high. Finally, organisations must understand what the accreditations and audit reports offered by the provider mean and actually cover.
The risks associated with cloud computing depend on both the service model and the delivery model adopted. The common security concerns are ensuring the confidentiality, integrity and availability of the services and data delivered through the external environment. Particular issues that need attention include ensuring compliance and avoiding lock-in. To manage risk, an organisation moving to the cloud should make a risk assessment using one of the several methodologies available. An independent risk assessment of cloud computing was undertaken by the European Network and Information Security Agency (ENISA). It identified 35 risks, which are classified according to their probability and their impact. When the risks important to your organisation have been identified, these lead to the questions you need to ask the provider.
I propose the following questions. How is legal and regulatory compliance assured? Where will my data be geographically located? How securely is my data handled? How is service availability assured? How is identity and access managed? How is my data protected against privileged user abuse? What levels of isolation are supported? How are the systems protected against internet threats? How are activities monitored and logged? What certification does your service have?
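One way to operationalize this is to score each identified risk by probability and impact, in the style of the ENISA assessment, and use the scores to decide which questions to press the provider on first. The sketch below uses invented risk names and 1-5 scores for illustration; they are not taken from the ENISA report.

```python
# Hypothetical sketch: ranking cloud risks by probability x impact to
# prioritise the due-diligence questions put to a provider.
# Names, scores and mappings below are illustrative assumptions.
risks = [
    {"name": "Loss of compliance",    "probability": 3, "impact": 5,
     "question": "How is legal and regulatory compliance assured?"},
    {"name": "Data location unknown", "probability": 4, "impact": 4,
     "question": "Where will my data be geographically located?"},
    {"name": "Privileged user abuse", "probability": 2, "impact": 5,
     "question": "How is my data protected against privileged user abuse?"},
    {"name": "Service outage",        "probability": 3, "impact": 3,
     "question": "How is service availability assured?"},
]

def prioritise(risk_list):
    """Highest probability x impact first: ask that question hardest."""
    return sorted(risk_list,
                  key=lambda r: r["probability"] * r["impact"],
                  reverse=True)

for r in prioritise(risks):
    print(r["probability"] * r["impact"], r["question"])
```

The point is not the arithmetic but the discipline: every question asked of the provider should trace back to an identified, scored risk.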
The service provider may respond to these questions with reports from auditors and certifications. It is important to understand what these reports cover. Note that these reports are based on the statement of the service that the organisation claims to provide – they are not an assessment against best practice. A service organisation may also provide an auditor's report based on established criteria, which cover security, availability, processing integrity, privacy, and confidentiality. A typical auditor's report on a cloud service will simply state which of the five areas the report covers, and it is up to the customer to evaluate whether the principles and criteria are appropriate for their needs.
Cloud computing can reduce costs by providing alternative models for the procurement and delivery of IT services. However, organisations need to consider the risks involved in a move to the cloud. The information security risks depend upon both the service model and the delivery model adopted. The common security concerns of a cloud computing approach are maintaining the confidentiality, integrity and availability of data. The best approach to managing risk is one of good IT governance, covering both cloud and internal IT services.
Originally published at PublicServiceEurope.com
The UK National Identity Card ceased to be a valid legal document on 21 January 2011. What does this mean for e-Government in the UK? In October 2010 Martha Lane Fox – the founder of Lastminute.com and the UK Government's digital champion – delivered a report on delivering government services via the web. As a result of this report the Right Honourable Francis Maude, the minister responsible, launched a study, “Ensuring Trust in Digital Services”, through the Technology Strategy Board. On October 31st, 2011 I attended a series of presentations and demonstrations describing the results of this study.

This is not a new idea, and Professor Brian Collins – who chaired the sessions – reminded us that these issues have been under discussion since the 1990s and that papers published then are still relevant! More recently the Foresight Cyber Trust and Crime project covered areas including identity and authenticity.

Francis Maude introduced the session with the statement that "if a government service can be delivered online it should be delivered online, and achieving this depends upon identity". He confirmed that the UK Government will not deal with the question of citizen online identity through a centralized ID repository or ID card. Rather, it will depend upon private sector services for identity assurance. For this to be successful it depends upon technology and market drivers. Identity federation technology has been around for nearly ten years (SAML 1.0, which grew out of the earlier S2ML specification, was ratified by OASIS in November 2002). The real issue is to identify the commercial drivers that will make this practical. As a result the government has earmarked £10 million to support this strategy, because it is seen as essential to enable the government to save money as well as improve services.

David Rennie – the Cabinet Office lead for the ID Assurance programme – described the challenges.
These include the problems of trust, the registration overhead, fraud, the security arms race and the need to limit the propagation of personal data. The solution must be customer-centric, decentralized and based on standards allowing collaboration. Identity assurance provides a means of improving the customer experience while mitigating the risks of fraud. For it to work it also depends upon governance, certification and dispute resolution, which must be provided by the government.

The challenges of ID assurance extend beyond the identity of the individual. Joan Wood – Director, Online Services & Digital Development at HMRC (the UK tax authority) – described the need to support businesses and business intermediaries as well as individuals. Employers and payroll service providers report data on payments made to individuals as well as their tax deducted at source. Financial services organizations make tax returns on behalf of individuals as well as companies, and all of these use cases need to be accounted for. Currently, on January 31st – the deadline for filing tax returns – the HMRC web site is the 3rd busiest in the world, processing 0.5 million submissions.

Mike Bracken – Executive Director, Government Digital Service – explained the benefits to the government of the programme. In one year the UK Government call centres received 690m telephone calls at a cost of £6.28 per call. It is estimated that 150m of these calls were due to failed online transactions – so improving online services could lead to massive savings. But what's in it for the identity service providers and what liabilities would they incur? Mr Bracken replied that these issues were being discussed with potential organizations.

One of the demonstrations at the event was from EnCoRe – “Ensuring Consent and Revocation”. This demonstration addressed the challenge of securely aggregating personal information while ensuring privacy.
The example was to allow an individual to obtain a parking permit online. This process currently requires an individual to visit local government offices armed with paper information. To obtain the parking permit the person needs to prove they are who they say they are – this involves the DWP, the government department that can confirm the person's NI number (social security number). Then the person's address needs to be confirmed through various sources – usually a recent utility bill or local tax payments. The relationship between the vehicle, the person and the address needs to be confirmed through the Driver and Vehicle Licensing Agency. All of this needs to be achieved in a way that does not reveal the aggregated information about the person, while still allowing a subsequent allegation of fraud to be investigated.

So far so good – but how do recent events like the DigiNotar and RSA hacks impact these aims? Both of these events showed that identity providers are not immune from advanced persistent threats. If one individual has their identity stolen it is a problem for that individual; if an identity provider is compromised it is a major disaster for everyone who relies on it. In the case of DigiNotar, it is believed that the faked certificates were used to intercept the online activities of Iranian citizens. This makes issues like aggregation of personal data pale into insignificance. It has also created an enormous cost for the Dutch government, whose websites relied upon the certificates issued by this authority. In the case of RSA, the cost has been to its reputation and the need to offer a replacement SecurID token to its entire user base.

So being an identity provider has its risks, and if the government relies on third parties it may end up picking up the pieces when things go wrong. This is a very interesting programme, with the first deliveries expected in 2012/2013.
KuppingerCole's opinion is that it is best to architect systems to avoid any single point of failure – and this applies to identity assurance. A good approach is one of versatile authentication, where the authentication demanded depends upon the risk assessed at the time the transaction is requested. So a low-value transaction requested from a known MAC address at its usual IP address is viewed as lower risk than a high-value transaction requested from an abnormal geographical location and an unusual computer. In the latter case the authentication system would require additional verification of the identity of the requestor (preferably from an alternate ID provider). Security must be designed in such a way that it does not depend on a single entity. How this could be solved on the Internet is the real challenge for ID Assurance.
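As a sketch of what versatile authentication means in practice, the hypothetical function below maps a simple risk score, built from transaction value, device recognition and location, to the authentication strength demanded. The weights and thresholds are invented for illustration, not taken from any real product.

```python
# Illustrative sketch of versatile (risk-based) authentication:
# the stronger the assessed risk, the stronger the authentication demanded.
# Weights and thresholds are assumptions made for this example.
def assess_risk(value, known_device, usual_location):
    score = 0
    score += 2 if value > 1000 else 0        # high-value transaction
    score += 2 if not known_device else 0    # unrecognised machine
    score += 2 if not usual_location else 0  # abnormal geography
    return score

def required_authentication(value, known_device, usual_location):
    score = assess_risk(value, known_device, usual_location)
    if score <= 1:
        return "password"                    # low risk: routine login suffices
    if score <= 3:
        return "one-time code"               # medium risk: step-up challenge
    return "second identity provider"        # high risk: independent verification

print(required_authentication(50, True, True))      # prints "password"
print(required_authentication(5000, False, False))  # prints "second identity provider"
```

Note how the highest tier deliberately involves a second, independent identity provider, reflecting the point above that security should not depend on a single entity.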
WHAT HAPPENED? On July 19th, Rupert Murdoch, proprietor of one of the world's largest news organizations, News International, apologized for phone hacking by reporters at the News of the World, and is quoted as saying “this is the humblest day of my life” to a committee of MPs in London. What does this teach us about information governance?
On Sunday July 10th, 2011 the News of the World published its last edition. This paper had been publishing for 168 years and was the top-selling Sunday newspaper in the UK. The closure came following revelations of how the newspaper had allegedly obtained personal information using illegal methods such as phone hacking. The News of the World had a long history of exposing corruption in business and politics as well as the personal scandals of celebrities. It had been very effective at finding and revealing many stories of wrongdoing and corruption with a genuine public interest. However, the events leading up to the closure began in 2005, when the News of the World published details of Prince William's health. These details could only have originated from intercepted mobile phone messages, and this led to a police investigation. Two years later, a reporter working for the newspaper and a private investigator were sent to prison for phone hacking. It was reported that the pair were considered to have been acting alone, and the investigation ended.
Over a period of time it emerged that the phones of further prominent people had been hacked. Then there were allegations that the lists of phone numbers included those of victims of crime, including victims of the 7/7 London bombings.
This led to the re-opening of the police enquiry and an enquiry by a committee of MPs.
WHAT IS THE PROBLEM? News International is an organization that employs 53,000 people around the world – so how can one newspaper that made up less than 1% of the organization have led to such trouble? When MP Alan Keen asked: “It became clear from the first couple of questions to you Rupert Murdoch, that you'd been kept in the dark quite a bit on some of these real serious issues, is there more?” Rupert Murdoch replied: “Nobody kept me in the dark, I may have been lax in not asking, but [the News of the World] was such a tiny part of our business.”
So how could it be that Mr Murdoch did not know? Clearly some people believe that the man at the top should take responsibility for everything that happens. But should someone at the top of an organization of this size be expected to monitor every employee, or is there a better way? I believe the answer lies with better governance.
Firstly, it is difficult to understand how anyone could believe that obtaining the information described above can be explained as legal because it is in the public interest. Secondly, the fact that reporters and investigators were able to get hold of some of the information raises the question of how well the information was being cared for by the organizations that held it. So I believe that the problem is one of information governance.
INFORMATION GOVERNANCE Governance sets the framework within which an organization operates; it sets the ethical tone. It sets the policies and the organizational structure needed to ensure the execution of the strategic goals. With good governance, misdeeds are prevented, or at least detected earlier, and processes are in place to ensure proper communication from management to staff and to give the board transparency into what is happening.
Balancing the rights of individual privacy against the need for a free press is not easy. However all organizations need to take care of the information they hold and ensure that they comply with laws and best practice. The best approach for this is one of information governance. Information governance sets the policies, procedures, practices and organizational structures that ensure that information is legally obtained and properly managed. Good governance ensures that there is a consistent approach to risks and compliance across different lines of business and multiple laws and regulations. It can reduce costs by avoiding multiple, ad hoc, approaches to compliance and risk management.
Organizations with good information governance will know what information they hold and will have a process for training staff on how to legally obtain information and to keep this information secure.
CONCLUSION Organisations that collect information on individuals, even the news media, need to make sure that they behave ethically and comply with privacy legislation. Organizations that hold information on individuals need to take care that this information is handled properly and that staff are trained to detect and resist unauthorized attempts to get hold of this information. Basically this is down to good information governance.
Can users do a good job of classifying unstructured data? Tim Upton, president of Titus told the attendees at NISC in St Andrews Scotland that he believes they can. He cited figures that indicate most data breaches are due to mistakes rather than deliberate misuse or theft.
It should be noted that Titus provides software that allows users to do just that when they create an e-mail, document, presentation or similar kind of file. When they create the object the software will prompt them to classify it according to a predefined set of categories. These categories can match a multi-level security approach (unclassified, confidential, secret, top secret) beloved by the military, or a more informal classification like “Do not e-mail, blog or tweet”.
This software has been very popular with the public sector in the UK. This sector has become very sensitive to losing personal data following a widely publicized data loss by the UK tax authority, HMRC. In 2007 HMRC suffered what was then the largest single data loss, when the personal data of 25 million people was lost in the post. This has since been overtaken by Sony, which is reported to have had the personal data of around 100 million users compromised.
Would the Titus product have prevented these losses? Based on how these losses occurred it doesn’t seem very likely – but this doesn't mean that it is a bad idea. The first step on the road to protecting data is to classify it; you can’t protect what you don’t know you have. Unstructured data is a real problem; while organizations will often take trouble to classify application data, vast volumes of unstructured data circulate around the organization and some of that inevitably leaks out.
So – classification is the first step, but it is not the end of the road. Once you have classified data you need to put controls in place to prevent the data from being processed or shared inappropriately. Without these controls the classification is worthless. The Titus product adds the user’s classification into the metadata associated with the file and this can be used by other software such as DLP and network appliances to control movement of the file.
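As an illustration of this idea (not of the Titus product itself), the sketch below attaches a user-chosen label to a document's metadata and shows a downstream enforcement point consulting that label before allowing the file to move. The label names and structures are invented for this example.

```python
# Hypothetical sketch: a user-assigned classification label travels with the
# file as metadata, and a mail gateway enforces it. Labels are illustrative.
LABELS = {"unclassified", "internal", "do-not-email"}

def classify(document, label):
    """Record the user's chosen classification in the document metadata."""
    if label not in LABELS:
        raise ValueError(f"unknown classification: {label}")
    document["metadata"]["classification"] = label
    return document

def mail_gateway_allows(document):
    # Without an enforcement point like this, the label is worthless.
    return document["metadata"].get("classification") != "do-not-email"

doc = classify({"name": "forecast.xlsx", "metadata": {}}, "do-not-email")
print(mail_gateway_allows(doc))  # prints False: the gateway blocks the attachment
```

This mirrors the division of labour described above: the user supplies the classification at creation time, and DLP software or network appliances read the metadata to control movement of the file.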
At another presentation Stewart Room, a partner with the legal firm Field Fisher Waterhouse, told the attendees how the legal framework around information security in the UK is changing. This change is being driven by public interest in the tide of data breaches being reported in the press. The new framework will oblige an organization to deliver reasonable security wherever it holds information, with reasonable defined against accepted standards. There will be a preference for transparency, i.e. if there is a data breach you should come clean. There will be more severe legal sanctions and penalties for breaches. Worryingly, Room predicts that in future we can expect lawyers set up specifically to litigate on behalf of people whose data has been breached, extracting damages from the organization responsible using the personal-injury claims practice model (no win, no fee).
So what is reasonable security? Room explained that normally government and the courts look to the professions to define reasonable practice. Most professions are represented by a small number of bodies; however in the UK there are over one hundred bodies involved with IT security. This will make it difficult to set the definition of reasonable and the risk is that a definition will be imposed on IT security practitioners.
Symantec recently announced their Endpoint Management Strategy and Release 7.1 of the Altiris product.
Managing the software patch level and software licenses on desktops, laptops, and mobile devices is a significant workload for organizations. This work is essential to protect the devices, the information that they contain and to comply with licensing and other matters. However it does not, in itself, add organizational value.
This kind of management is technically very challenging and needs sophisticated tools to meet these challenges. According to Jan Trnka, DHL's Global Altiris Architect, End User Services, DHL has around 100,000 laptops and desktops that need to be managed, and these are located in every country where DHL provides delivery services. That is virtually every country in the world! Many of these devices are connected over low-bandwidth networks, so distributing software patches needs to be carefully planned and controlled. According to Mr Trnka, DHL has successfully used Symantec's Altiris Client Management software for this task for some years.
Some organizations are looking to avoid these complexities by moving to virtual workspaces – where the software is delivered to the point of use on demand. In 2008 Symantec acquired AppStream – a company providing technology for optimizing the delivery of applications through the various virtual workspace environments. This does not remove the need for management, but it reduces the need to manage individual endpoints. However, in the IT world new technologies are always added on top of what is already deployed. So, according to Symantec, the management of virtual workspaces needs to be integrated with that of physical ones, and this integration is one of their key visions.
Following this theme – Symantec see the next growth area as being the management of mobile devices. These are becoming ubiquitous, and the line between their personal and enterprise use is often blurred. Symantec used as an example a large US financial institution that is using the Symantec platform and VeriSign components to provide secure banking from these devices – including depositing a cheque based on a photo of the cheque taken by the device! Therefore, Symantec's vision is to evolve their endpoint management and security products to encompass mobile devices as well as servers, desktops and laptops.
What about identity management? The content of a desktop, laptop or mobile device depends heavily upon the identity of the user and the role that they perform. For mobile devices, the service needs to be provisioned when the user joins the organization and de-provisioned when the user leaves. So it is no surprise that the Symantec Service Desk – which has become workflow-centric – is claimed to have been extended to solve problems like user onboarding and offboarding, and the elimination of "ghost accounts" in AD. This looks more like creep into the provisioning product space than real management of digital identities.
Clearly Symantec have a mature product set for managing endpoints and a clear strategy for how this will evolve to meet the challenges of virtualization and mobility. Although Symantec is well known for its desktop security software, notably the Norton range of products, its product range has evolved over time and this evolution is the key to its success. Since 2000 Symantec has acquired VERITAS and Altiris, extending its product range to include storage and endpoint management tools. More recently it added the VeriSign security business: this includes the Secure Sockets Layer Certificate Services, the Public Key Infrastructure Services, the VeriSign Trust Services and the VeriSign Identity Protection (VIP) Authentication Service. Symantec's vision for the management of virtual workspaces and mobile devices represents another step in its continuing evolution. Ultimately, the test of this vision will be in its execution.
In Brussels on March 22nd Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, made a speech on the European Cloud Computing Strategy at the opening of the Microsoft Centre for Cloud Computing and Interoperability. In this she said “...to offer a true utility in a truly competitive digital single market, users must be able to change their cloud provider easily. It must be as fast and easy as changing one’s internet or mobile phone provider has become in many places...” So what are the obstacles to achieving that goal, and how far away from it are we now?
Well – it depends upon which Cloud model you consider and which Cloud service you are using. For an e-mail service, standards like SMTP and MIME make it very easy to switch, or even to use multiple providers at the same time, provided you download your e-mail data. If you hold your e-mails in the Cloud, it is a different story. The same is true of any other Cloud service which holds your data, for example file backup, word processing or accounting.
So here is the rub – connection standards make it easy to connect to any Cloud service. However moving data between Cloud providers is much more difficult. For most practical purposes the only way to move data is to download it to your own computer and then upload to the new provider. This may also involve a lot of work to reformat data into a different standard.
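The reformatting step of that download-then-upload cycle is often the painful part. As a toy illustration (the column names are invented, not any real provider's export format), suppose provider A exports contacts as `Name,Mail` while provider B expects `full_name,email_address`:

```python
import csv
import io

# Toy sketch of the reformatting step when moving data between Cloud
# providers: provider A's export columns are renamed to match what
# provider B's import expects. All field names are hypothetical.
def convert_contacts(export_csv):
    reader = csv.DictReader(io.StringIO(export_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["full_name", "email_address"])
    writer.writeheader()
    for row in reader:
        writer.writerow({"full_name": row["Name"],
                         "email_address": row["Mail"]})
    return out.getvalue()
```

For a simple contact list this is trivial; for years of accounting records or richly formatted documents, the equivalent conversion can be a substantial project in its own right – which is exactly the lock-in the paragraph above describes.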
In the last week Amazon announced their “Cloud Player”, which allows users to play songs across a number of computers and Android smart phones. Music lovers will be able to upload most of their existing music library – including tracks bought through Apple's iTunes – to Amazon, as well as buy new songs for digital playback. This service has raised another concern – who owns the music (i.e. the data) in the Cloud? Amazon said it has sidestepped legal uncertainties about allowing users to upload music from their computer – some of which may have been downloaded illegally – by the service being the equivalent of any other storage device, such as an external hard drive. This means that if you later decide to switch to another service – say Google's – you may need to download everything and then upload it to the new provider!
So – there is a long way to go before it will be possible to switch Cloud provider as quickly and easily as your mobile phone service. The problems include legal issues relating to the ownership of data, and the lack of service agreements that allow users to transfer their data painlessly between Cloud providers when they switch.
Identity Management – Process or Technology?
One line of thinking has been that the major cause of identity theft and data loss is poor process and that strengthening the process is the key approach. Strong processes are indeed required but a strong process can be undermined by a weakness in technology.
The electronic identity of a person depends upon the process for establishing and managing that identity. Even biometrics depends upon the identity of the person being confirmed through a process or paper trail.
However, the mechanism for proving the identity (authentication) needs to be chosen according to the risk. Traditionally this risk was fixed by the circumstances under which the identity was used – for example, to access e-mail internally. A password is cheap but relatively weak, while stronger forms like smart cards are expensive. The RSA SecurID token was a good compromise.
Wrongly assessing the risk, or choosing the wrong technology, undermines the process. The recent closure of the European carbon trading market is an example of what happens when this goes wrong. Most operations at Europe’s 30 registries for greenhouse-gas emissions were suspended on January 19th after a Czech trader reviewing his $9 million account found “nothing was there”. The EU estimates that permits worth as much as 29 million euros ($39 million) may be missing. Was this process or technology?
Now that systems are regularly accessed via the internet, for example by mobile employees or through adoption of the Cloud, a more resilient technology is needed. An emerging solution is “versatile authentication” – where multiple factors, like the location of the request, the time, and the value of the transaction, are taken into account. A versatile approach can be quickly reconfigured to demand further proof of identity in response to a new vulnerability.
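The versatile approach can be sketched as a simple risk score that determines how much proof of identity to demand. The factors, weights and threshold below are invented for illustration – a real product would use many more signals and tunable policy:

```python
# Hypothetical sketch of "versatile" (risk-based) authentication.
# The factors, weights and threshold are illustrative only.
def required_factors(location_known, usual_hours, transaction_value):
    risk = 0
    if not location_known:
        risk += 2          # request from an unfamiliar location
    if not usual_hours:
        risk += 1          # outside the user's normal working hours
    if transaction_value > 10_000:
        risk += 2          # high-value transaction
    if risk >= 3:
        # step up: demand a second proof of identity
        return ["password", "one_time_code"]
    return ["password"]
```

The reconfigurability the text describes is the attraction here: responding to a new vulnerability means adjusting the weights or the threshold, not re-issuing credentials to every user.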
During the 1980s and 1990s the value of sharing information through “groupware” was rated very highly and the need for security was downgraded. The normal access mechanism implemented in most environments is called “Discretionary Access Control” or DAC. Under DAC, if you have legitimate access to some information you have discretion over what you do with it: you can copy it, print it, e-mail it and so on. This makes it easy for someone who has access to steal or misuse information. During the 1970s a stronger form of access control was invented, called “Mandatory Access Control” or MAC. Under MAC, data is tagged so that only people authorized to access it are able to do so, and it is not possible for one person to copy the data and give it to another, unauthorized person. This approach has now been reinvented in the form of Data Loss Prevention and Digital Rights Management technology.
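The difference between the two models can be shown in a few lines. Under MAC the system, not the data owner, enforces a label check; the levels below are an invented three-tier scheme for illustration:

```python
# Minimal sketch of Mandatory Access Control: data carries a label and
# access is decided by a system-wide rule, not by the data's owner.
# The three levels are hypothetical.
LEVELS = {"Public": 0, "Internal": 1, "Confidential": 2}

def may_read(user_clearance, data_label):
    # "No read up": a user may only read data at or below their clearance.
    return LEVELS[user_clearance] >= LEVELS[data_label]
```

Under DAC there is no such check once access is granted: the holder of the data can pass it on at will, which is precisely the leakage path that DLP and Digital Rights Management tools now try to close.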
Many organizations have poor processes for identifying valuable information and poor technology to prevent that information from leaking. See the recent example of a former Goldman Sachs programmer who stole key intellectual property. http://www.bloomberg.com/news/2011-03-16/ex-goldman-programmer-aleynikov-s-conviction-is-upheld-by-trial-judge.html
Abuse of Privilege
The infrastructure upon which cloud computing is built needs to be managed and maintained. To perform these tasks the servers, platforms and applications need powerful administrator accounts. These accounts are used by the Cloud Service provider to perform essential administration, yet they represent a potential risk because they allow powerful actions which include: bypassing normal access controls to read application data and changing or erasing entries in the system log. Managing the identity of these administrators is a critical issue for information security in the Cloud.
Distributed systems technology has an inherent weakness: the privileged accounts. Many organizations do not have processes in place to compensate for this. See the recent example of an administrator who held the City of San Francisco to ransom. http://www.pcworld.com/businesscenter/article/148469/it_admin_locks_up_san_franciscos_network.html
Privilege Management (PxM) technology is an emerging solution to manage this weakness.
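The core PxM pattern is simple to sketch: vault the shared administrator credential, release it only against an approved request, and log every release. The class and names below are illustrative, not any vendor's API:

```python
import datetime

# Toy sketch of a privileged-access (PxM) credential checkout:
# the shared administrator password is vaulted, released only per
# approved request, and every release is audited. Names are invented.
class PrivilegeVault:
    def __init__(self):
        self._secrets = {}
        self.audit_log = []

    def store(self, account, secret):
        self._secrets[account] = secret

    def checkout(self, account, requester, approved_by):
        if approved_by is None:
            raise PermissionError("privileged checkout requires approval")
        self.audit_log.append(
            (datetime.datetime.now(), account, requester, approved_by))
        return self._secrets[account]
```

Even this toy version addresses the two risks described above: the administrator no longer holds a standing, anonymous super-user credential, and the audit trail of who checked out which account lives outside the system the account controls.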
Bottom line - strong process always needs to be backed by good technology. Many of the technologies in use today have significant weaknesses and vendors need to work to remove these.