Blog posts by Mike Small

Classify Your Data [Not Protectively Marked]

Can users do a good job of classifying unstructured data? Tim Upton, president of Titus, told the attendees at NISC in St Andrews, Scotland, that he believes they can. He cited figures indicating that most data breaches are due to mistakes rather than deliberate misuse or theft.

It should be noted that Titus provides software that allows users to do just that when they create an e-mail, document, presentation or other similar kind of file. When the user creates the object, the software prompts them to classify it according to a predefined set of categories. These categories can match the multi-level security approach (unclassified, classified, secret, top secret) beloved by the military, or a more informal classification like “Do not e-mail, blog or tweet”.

This software has been very popular with the public sector in the UK. This sector has become very sensitive to losing personal data following a widely publicized data loss by the UK tax authority HMRC. In 2007 HMRC suffered what was then the largest single data loss, when the personal data of 25 million people was lost in the post. This has now been overtaken by Sony, which is reported to have had the personal data of around 100 million users compromised.

Would the Titus product have prevented these losses? Based on how these losses occurred it doesn't seem very likely – but this doesn't mean that it is a bad idea. The first step on the road to protecting data is to classify it; you can't protect what you don't know you have. Unstructured data is a real problem; while organizations will often take the trouble to classify application data, vast volumes of unstructured data circulate around the organization and some of it inevitably leaks out.

So – classification is the first step, but it is not the end of the road. Once you have classified data you need to put controls in place to prevent the data from being processed or shared inappropriately. Without these controls the classification is worthless. The Titus product adds the user’s classification into the metadata associated with the file and this can be used by other software such as DLP and network appliances to control movement of the file.
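To make this concrete, here is a minimal sketch of how a downstream control might act on such a label. The header name X-Classification and the policy table are illustrative assumptions, not Titus's actual metadata schema:

    # A toy DLP-style check acting on a user-assigned classification label.
    # "X-Classification" and the policy table are illustrative assumptions,
    # not Titus's actual metadata schema.
    from email import message_from_string

    # Actions permitted for each classification level (invented policy)
    POLICY = {
        "unclassified": {"email-external", "email-internal", "print"},
        "classified":   {"email-internal", "print"},
        "secret":       {"email-internal"},
        "top secret":   set(),
    }

    def is_action_allowed(raw_message: str, action: str) -> bool:
        """Allow the action only if the label permits it; unlabelled
        messages are blocked (fail closed)."""
        msg = message_from_string(raw_message)
        label = (msg.get("X-Classification") or "").strip().lower()
        return action in POLICY.get(label, set())

    raw = "X-Classification: Secret\nSubject: Q3 figures\n\nDraft numbers..."
    print(is_action_allowed(raw, "email-external"))  # False - stays internal
    print(is_action_allowed(raw, "email-internal"))  # True

The point is that the user's one-time judgement, captured as metadata, can then be enforced automatically at every gateway the file passes through.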

At another presentation Stewart Room, a partner with the legal firm Field Fisher Waterhouse, told the attendees how the legal framework around information security in the UK is changing. This change is being driven by public interest in the tide of data breaches being reported in the press. The new framework will oblige an organization to deliver reasonable security wherever it holds information, and “reasonable” will be defined against accepted standards. There will be a preference for transparency, i.e. if there is a data breach you should come clean. There will be more severe legal sanctions and penalties for breaches. Worryingly, Room predicts that in the future we can expect lawyers to set up specifically to litigate on behalf of people whose data has been breached, extracting damages from the organization responsible using the personal injury claims practice model (no win, no fee).

So what is reasonable security? Room explained that government and the courts normally look to the professions to define reasonable practice. Most professions are represented by a small number of bodies; however, in the UK there are over one hundred bodies involved with IT security. This will make it difficult to set the definition of reasonable, and the risk is that a definition will be imposed on IT security practitioners.

Symantec Bets on Virtual Workspaces and Mobility

Symantec recently announced their Endpoint Management Strategy and Release 7.1 of the Altiris product. 

Managing the software patch level and software licenses on desktops, laptops, and mobile devices is a significant workload for organizations. This work is essential to protect the devices and the information that they contain, and to comply with licensing and other obligations. However, it does not, in itself, add organizational value.

This kind of management is technically very challenging and needs sophisticated tools. According to Jan Trnka, DHL's Global Altiris Architect, End User Services, DHL has around 100,000 laptops and desktops that need to be managed, and these are located in every country where DHL provides delivery services. That is virtually every country in the world! Many of these devices are connected over low-bandwidth networks, and distributing software patches needs to be carefully planned and controlled. According to Mr Trnka, DHL have successfully used Symantec's Altiris Client Management software for this task for some years.

Some organizations are looking to avoid these complexities by moving to virtual workspaces – where the software is delivered to the point of use on demand. In 2008 Symantec acquired AppStream – a company providing technology for optimizing the delivery of applications through the various virtual workspace environments. This does not avoid the need for management, but it reduces the need to manage the individual endpoints. However, in the IT world new technologies are always added on top of what is already deployed. So, according to Symantec, the management of virtual workspaces needs to be integrated with that of physical ones. This integration is one of their key visions.

Following this theme, Symantec see the next growth area as being the management of mobile devices. These are becoming ubiquitous, and their use as personal and enterprise devices is often blurred. Symantec used as an example a large US financial institution; this is using the Symantec platform and VeriSign components to provide secure banking from these devices, including banking a cheque based on a photo of the cheque taken by the device! Therefore, Symantec's vision is to evolve their endpoint management and security products to encompass mobile devices as well as servers, desktops and laptops.

What about identity management? The content of a desktop, laptop or mobile device depends heavily upon the identity of the user and the role that they perform. For mobile devices, the service for the device needs to be provisioned when the user joins the organization and de-provisioned when the user leaves. So it is no surprise that the Symantec Service Desk – which has become workflow-centric – is claimed to have been extended to solve problems like user onboarding and offboarding and the elimination of "ghost accounts" in AD. This looks more like creep into the provisioning product space than real management of digital identities.
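As an aside, here is a rough sketch of what hunting for such "ghost" accounts involves: querying the directory for enabled accounts that have not logged on for a long time. The server address, credentials, base DN and the 90-day cut-off are all placeholders, and this is not Symantec's workflow – just the underlying directory query:

    # Rough sketch: find enabled AD accounts not logged on for 90+ days.
    # Server, credentials and base DN are placeholders.
    from datetime import datetime, timedelta, timezone
    from ldap3 import ALL, Connection, Server

    def filetime(dt: datetime) -> int:
        """Convert a datetime to Windows FILETIME (100 ns ticks since 1601),
        the format AD uses for lastLogonTimestamp."""
        epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
        return int((dt - epoch).total_seconds() * 10_000_000)

    cutoff = filetime(datetime.now(timezone.utc) - timedelta(days=90))
    # Enabled user accounts (ACCOUNTDISABLE bit clear) whose last logon is
    # older than the cut-off. Accounts that have never logged on lack the
    # attribute and need a separate (!(lastLogonTimestamp=*)) query.
    ldap_filter = (
        "(&(objectCategory=person)(objectClass=user)"
        "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
        f"(lastLogonTimestamp<={cutoff}))"
    )

    conn = Connection(Server("ldap://dc.example.com", get_info=ALL),
                      user="EXAMPLE\\auditor", password="...", auto_bind=True)
    conn.search("dc=example,dc=com", ldap_filter,
                attributes=["sAMAccountName", "lastLogonTimestamp"])
    for entry in conn.entries:
        print(entry.sAMAccountName, entry.lastLogonTimestamp)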

Clearly Symantec have a mature product set for managing endpoints and a clear strategy for how this will evolve to meet the challenges of virtualization and mobility. Although Symantec is well known for its desktop security software, notably the Norton range of products, its product range has evolved over time and this evolution is the key to its success. Since 2000 Symantec has acquired VERITAS and Altiris, extending its product range to include storage and endpoint management tools. More recently it added the VeriSign security business: this includes the Secure Sockets Layer Certificate Services, the Public Key Infrastructure Services, the VeriSign Trust Services and the VeriSign Identity Protection (VIP) Authentication Service. Symantec's vision for the management of virtual workspaces and mobile devices represents another step in its continuing evolution. Ultimately, the test of this vision will be in its execution.

Switching Cloud Provider

In Brussels on March 22nd Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, made a speech on the European cloud computing strategy at the opening of the Microsoft Centre for Cloud Computing and Interoperability. In this she said “...to offer a true utility in a truly competitive digital single market, users must be able to change their cloud provider easily. It must be as fast and easy as changing one’s internet or mobile phone provider has become in many places...” So what are the difficulties in achieving that goal, and how far away from it are we now?

Well – it depends upon which Cloud model you consider and which Cloud service you are using. For an e-mail service, standards like SMTP and MIME make it very easy to switch, or even to use multiple providers at the same time, provided you download your e-mail data. If you hold your e-mails in the Cloud then it is a different story. The same is true of any other Cloud service which holds your data, for example file backup, word processing, accounting and so on.

So here is the rub – connection standards make it easy to connect to any Cloud service, but moving data between Cloud providers is much more difficult. For most practical purposes the only way to move data is to download it to your own computer and then upload it to the new provider. This may also involve a lot of work to reformat the data into a different standard.
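For e-mail at least, the open protocols make the "download" half of this mechanical. A minimal sketch, assuming the old provider offers standard IMAP access (host, account and password are placeholders):

    # Pull messages from the old provider over IMAP into a local mbox file,
    # which most providers and mail clients can import.
    import imaplib
    import mailbox

    archive = mailbox.mbox("exported-mail.mbox")
    imap = imaplib.IMAP4_SSL("imap.old-provider.example")
    imap.login("user@example.com", "app-password")
    imap.select("INBOX", readonly=True)
    status, data = imap.search(None, "ALL")
    for num in data[0].split():
        status, msg_data = imap.fetch(num, "(RFC822)")
        archive.add(msg_data[0][1])   # raw RFC 822 bytes - provider-neutral
    imap.logout()
    archive.flush()
    archive.close()

The upload half has no such standard; each provider's import tools differ, which is exactly the asymmetry described above.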

In the last week Amazon announced their “Cloud Player”, which allows users to play songs across a number of computers and Android smartphones. Music lovers will be able to upload most of their existing music library – including tracks bought through Apple's iTunes – to Amazon, as well as buy new songs for digital playback. This service has raised another concern – who owns the music (i.e. the data) in the Cloud? Amazon says it has sidestepped legal uncertainties about allowing users to upload music from their computers – some of which may have been downloaded illegally – by making the service the equivalent of any other storage device, such as an external hard drive. This means that if you later decide to switch to another service, say from Google, you may need to download everything and then upload it to the new provider!

So – there is a long way to go before it will be possible to switch Cloud provider as quickly and easily as your mobile phone service. The problems include legal issues relating to ownership of data, and the lack of service agreements that allow users to painlessly transfer their data between Cloud providers when they switch.

Identity Management – Process or Technology?

RSA recently announced, in an SEC 8-K filing, a security breach relating to its SecurID authentication technology. This reopens the question of which is the more important factor in identity management – processes or technology?

One line of thinking has been that the major cause of identity theft and data loss is poor process, and that strengthening the process is the key approach. Strong processes are indeed required, but a strong process can be undermined by a weakness in technology.

Authentication

Someone's electronic identity depends upon the process for establishing and managing that identity. Even biometrics depends upon the identity of the person being confirmed through a process or paper trail.

However, the mechanism for proving the identity (authentication) needs to be chosen according to the risk. Traditionally this risk was fixed by the circumstances under which the identity is used – for example, to access e-mail internally. A password is cheap but relatively weak; stronger forms like smart cards are expensive. The RSA SecurID token was a nice compromise.

Wrongly assessing the risk, or choosing the wrong technology, undermines the process. The recent closure of the European carbon trading market is an example of what happens when this goes wrong. Most operations at Europe's 30 registries for greenhouse-gas emissions were suspended on Jan. 19 after a Czech trader reviewing his $9 million account found “nothing was there.” The EU estimates that permits worth as much as 29 million euros ($39 million) may be missing. Was this process or technology?

Now that systems are regularly accessed via the internet, for example by mobile employees or through adoption of the Cloud, a more resilient technology is needed. An emerging solution is “versatile authentication”, where multiple factors – such as the location of the request, the time, and the value of the transaction – are taken into account. A versatile approach can be quickly reconfigured to take account of a new vulnerability by demanding further proof of identity.
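The idea is easy to illustrate. In the toy sketch below each signal contributes to a risk score, and the score decides how much proof of identity to demand; the signals, weights and thresholds are all invented for illustration:

    # Toy risk-based ("versatile") authentication: signals, weights and
    # thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Request:
        country: str    # where the request comes from
        hour: int       # local time of day, 0-23
        amount: float   # value of the transaction

    def risk_score(req: Request, usual_country: str) -> int:
        score = 0
        if req.country != usual_country:
            score += 40                     # unfamiliar location
        if req.hour < 6 or req.hour > 22:
            score += 20                     # unusual time of day
        if req.amount > 1000:
            score += 30                     # high-value transaction
        return score

    def required_authentication(score: int) -> str:
        if score >= 60:
            return "one-time password plus call-back"   # strongest step-up
        if score >= 30:
            return "one-time password"                  # add a second factor
        return "password"                               # low risk

    req = Request(country="CZ", hour=3, amount=5000.0)
    print(required_authentication(risk_score(req, usual_country="UK")))
    # -> one-time password plus call-back

Reconfiguring for a newly discovered vulnerability is then just a matter of adjusting the weights or thresholds, rather than re-issuing tokens to every user.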

Data Leakage

During the 1980s and 1990s the value placed on sharing information through “Groupware” was very high, and the need for security was downgraded. The normal access mechanism implemented in most environments is called “Discretionary Access Control” or DAC. In this, if you have legitimate access to some information, you have discretion over what you do with it. You can copy it, print it, e-mail it and so on. This makes it easy for someone who has access to steal or misuse information. During the 1970s a stronger form of access control was invented, called “Mandatory Access Control” or MAC. In this, data is tagged so that only people authorized to access it are able to, and it is not possible for one person to copy the data to give it to another, unauthorised person. This approach has now been reinvented in the form of Data Loss Prevention and Digital Rights Management technology.
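The difference between the two models fits in a few lines. Here is a minimal sketch of the mandatory, label-based check (the Bell-LaPadula “no read up” rule), with illustrative level names:

    # Every object carries a label and every user a clearance; a read is
    # refused when the label exceeds the clearance, regardless of what the
    # data's owner would like. Level names are illustrative.
    LEVELS = {"unclassified": 0, "classified": 1, "secret": 2, "top secret": 3}

    def may_read(user_clearance: str, object_label: str) -> bool:
        """Bell-LaPadula 'no read up': clearance must dominate the label."""
        return LEVELS[user_clearance] >= LEVELS[object_label]

    print(may_read("classified", "secret"))   # False - label exceeds clearance
    print(may_read("secret", "classified"))   # True

Under DAC the equivalent check would consult an owner-maintained access list, and nothing would stop an authorized reader passing a copy on; under MAC the label travels with the data.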

Many organizations have poor processes for identifying valuable information and poor technology to prevent that information from leaking.  See the recent example of a former Goldman Sachs programmer who stole key intellectual property. http://www.bloomberg.com/news/2011-03-16/ex-goldman-programmer-aleynikov-s-conviction-is-upheld-by-trial-judge.html

Abuse of Privilege

The infrastructure upon which cloud computing is built needs to be managed and maintained. To perform these tasks the servers, platforms and applications need powerful administrator accounts. These accounts are used by the Cloud service provider to perform essential administration, yet they represent a potential risk because they allow powerful actions, including bypassing normal access controls to read application data and changing or erasing entries in the system log. Managing the identity of these administrators is a critical issue for information security in the Cloud.

Distributed systems technology has an inherent weakness – privileged accounts. Many organizations do not have processes in place to compensate for this. See the recent example of an administrator who held the City of San Francisco to ransom. http://www.pcworld.com/businesscenter/article/148469/it_admin_locks_up_san_franciscos_network.html

Privilege Management (PxM) technology is an emerging solution to manage this weakness.
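One common PxM pattern is credential check-out: administrators never hold the shared root password themselves, but lease a fresh one for a limited time, and every lease is audited. A minimal sketch of the idea – all names and the in-memory audit log are invented for illustration:

    # Toy credential check-out: issue a fresh, time-limited credential and
    # record who took it, for which system, and why.
    import secrets
    import time

    AUDIT_LOG = []            # a real vault would use tamper-evident storage
    CHECKOUT_TTL = 3600       # lease expires after one hour

    def check_out_privileged_credential(admin: str, system: str, reason: str):
        """Issue a one-time password and log the checkout."""
        password = secrets.token_urlsafe(16)
        expires = time.time() + CHECKOUT_TTL
        AUDIT_LOG.append({"admin": admin, "system": system,
                          "reason": reason, "time": time.time(),
                          "expires": expires})
        # A real vault would also rotate the account password at check-in.
        return password, expires

    pw, expires = check_out_privileged_credential(
        "jsmith", "billing-db", "apply security patch")

This removes the anonymous shared password and leaves a record of who did what with administrator rights – closing exactly the gap the San Francisco case exposed.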

Bottom line – strong process always needs to be backed by good technology. Many of the technologies in use today have significant weaknesses, and vendors need to work to remove these.

EU Privacy Direction

Last week I had the privilege of attending a seminar at which Peter Hustinx, the European Data Protection Supervisor, outlined the future approach to personal data protection in the European Union. This approach includes a “right to be forgotten” as well as mandatory data breach reporting.

Given that the WikiLeaks website has recently released some 250,000 documents that were supposedly “private” reports by US embassies, you might ask “what does privacy mean?” Well, privacy in this context is more narrowly defined as privacy of personal information.

In the EU, privacy is based on the European Convention on Human Rights; Article 8 of this convention guarantees a right to privacy:

  1. Everyone has the right to respect for his private and family life, his home and his correspondence.
  2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
So the correspondence leaked by WikiLeaks is only covered by this insofar as it reveals information about individuals in a way that they did not consent to.  But the WikiLeaks case is still very relevant because it demonstrates the challenges that are posed by the ability of IT systems to store massive amounts of data and for this data to be widely disseminated through the internet.   The challenge for organizations is to be able to share information for legitimate purposes while preventing that information from leaking out or being used for illegitimate purposes.

Fifteen years ago the EU led the world in the area of privacy legislation; however, rapid technological developments and globalisation have profoundly changed the world around us and brought new challenges for the protection of personal data. To meet these challenges, in early November the EU published a document describing the direction for privacy and data protection. This document contains the following introductory paragraph:

“Today technology allows individuals to share information about their behaviour and preferences easily and make it publicly and globally available on an unprecedented scale.  Social networking sites, with hundreds of millions of members spread across the globe, are perhaps the most obvious, but not the only, example of this phenomenon. ‘Cloud computing’ - i.e., Internet-based computing whereby software, shared resources and information are on remote servers (‘in the cloud’) could also pose challenges to data protection, as it may involve the loss of individuals' control over their potentially sensitive information when they store their data with programs hosted on someone else's hardware. A recent study confirmed that there seems to be a convergence of views – of Data Protection Authorities, business associations and consumers' organisations – that risks to privacy and the protection of personal data associated with online activity are increasing.”

The strategy sets out proposals on how to modernise the EU framework for data protection rules through a series of key goals:

  • Strengthening individuals' rights (Directive 95/46/EC), so that the collection and use of personal data is limited to the minimum necessary; improving the notion of informed consent; considering mandatory breach notification (as under the e-Privacy Directive 2002/58/EC, amended by Directive 2009/136/EC, which applies to telecommunications providers); and providing a “right to be forgotten” when an individual's data is no longer needed or they want it to be deleted.
  • Enhancing the internal market dimension. Data Protection in the EU has a strong internal market dimension, i.e., the need to ensure the free flow of personal data between Member States within the internal market. As a consequence, the Directive’s harmonisation of national data protection laws is not limited to minimal harmonisation but amounts to harmonisation that is generally complete.
  • Revising the data protection rules in the area of police and judicial cooperation in criminal matters.  There is a need to have a ‘comprehensive protection scheme’ and to strengthen the EU's stance in protecting the personal data of the individual in the context of all EU policies, including law enforcement and crime prevention.
  • The global dimension of data protection. Clarifying and simplifying the rules for international data transfers.  Data processing is globalised and calls for the development of universal principles for the protection of individuals with regard to the processing of personal data.
In conclusion, the Commission will propose legislation in 2011 aimed at revising the legal framework for data protection, with the objective of strengthening the EU's stance in protecting the personal data of the individual in the context of all EU policies, including law enforcement and crime prevention, taking into account the specificities of these areas. Non-legislative measures, such as encouraging self-regulation and exploring the feasibility of EU privacy seals, will be pursued in parallel.

So what does all this mean for organizations and individuals? There is no doubt that mandatory data breach notification will focus the minds of organizations on the security of their IT systems. Much has been made of the theft of data by cyber criminals; however, while this is important, misuse of data by insiders is also a significant problem. I would expect to see increased interest in “Data Leak Prevention” technology, which can control the transmission of data based on its content, and in encryption to control access to data which gets “lost”.
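Content-based control is what distinguishes DLP from simple perimeter filtering. A toy sketch of the idea, scanning outbound text for something that looks like a UK National Insurance number; the pattern is a rough illustration, not a production-grade detector:

    # Block transmission of text that appears to contain personal data.
    # The NI-number pattern is a rough illustration only.
    import re

    NINO = re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE)

    def may_transmit(text: str) -> bool:
        """Refuse any message that appears to contain an NI number."""
        return NINO.search(text) is None

    print(may_transmit("Please update my record, NI number AB123456C"))  # False
    print(may_transmit("See you at the next seminar"))                   # True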

From the perspective of individuals, the direction does little to protect people from themselves. The person using a social networking site remains at liberty to give away personal information about themselves – even to their own detriment, as has been illustrated by many recent news stories. They can also send ill-judged messages that are publicly visible using Twitter – which have on occasion led to criminal convictions. Perhaps the “right to be forgotten” could include these classes of data?
