
KuppingerCole Blog

Consent – Context – Consequence

May 21, 2015 by Martin Kuppinger

Consent and context are about to change the way we do IT. This is not only about security, where context is already of growing relevance. It is about the way we have to construct most applications and services, particularly the ones dealing with consumer-related data and PII in the broadest sense. Consent and context have consequences, and applications must be constructed so that they can act on those consequences.

Imagine the EU comes up with tighter privacy regulations in the near future. Imagine you are a service provider or an organization dealing with customers in various locations. Imagine your customers being more willing to share data – to consent to sharing – when they remain in control of that data. Imagine that what telcos must already do in at least some EU countries becomes mandatory for other industries and countries: handing over customer data to other telcos and rapidly “forgetting” large parts of that data.

There are many different scenarios where regulatory changes or changing expectations of customers mandate changes in applications. Consent (and regulations) increasingly control application behavior.

On the other hand, there is context. Mitigating risks is tightly connected to understanding the user context and acting accordingly. The days of black and white security are past. Depending on the context, an authenticated user might be authorized to do more or less.

Simply put: Consent and context have – indeed must have – consequences for application behavior. Thus, application design (and this includes cloud services) must take consent and context into account. Consent is about following the principles of Privacy by Design: an application designed for privacy can be opened up if the users or regulations allow. This is quite easy when done right – far easier than, for example, adapting an application to tightening privacy regulations. Context is about risk-based authentication and authorization or, in a broader view, APAM (Adaptive, Policy-based Access Management). Again, if an application is designed for adaptiveness, it can easily react to changing requirements; an application with static security is hard to change.
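
To make the “adaptiveness” point more concrete, here is a minimal, purely illustrative sketch of a context-aware access decision. The context attributes, risk weights and thresholds are assumptions for the sake of the example, not a reference implementation of APAM or of any particular product:

    # Illustrative sketch of an adaptive, policy-based access decision.
    # All attributes, weights and thresholds are assumptions, not a standard.
    from dataclasses import dataclass

    @dataclass
    class AccessContext:
        authenticated: bool
        device_trusted: bool   # e.g. a managed corporate device
        network: str           # "internal", "vpn" or "public"
        auth_strength: int     # 1 = password, 2 = OTP, 3 = hardware token

    def risk_score(ctx: AccessContext) -> int:
        """Higher score means higher risk; purely illustrative weighting."""
        score = 0
        if not ctx.device_trusted:
            score += 2
        if ctx.network == "public":
            score += 2
        score += max(0, 3 - ctx.auth_strength)
        return score

    def decide(ctx: AccessContext, operation: str) -> str:
        """Return "allow", "step-up" (require stronger authentication) or "deny"."""
        if not ctx.authenticated:
            return "deny"
        risk = risk_score(ctx)
        # Sensitive operations tolerate less risk than ordinary ones.
        limit = 1 if operation == "export_pii" else 3
        if risk <= limit:
            return "allow"
        return "step-up" if risk <= limit + 2 else "deny"

    # Untrusted device, public network, password only, sensitive operation -> "deny"
    print(decide(AccessContext(True, False, "public", 1), "export_pii"))

An application built around such a decision point can tighten or relax its policy as regulations or consent change, without touching the application logic itself.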

Understanding Consent, Context, and Consequences can save organizations – software companies, cloud service providers, and any organization developing its own software – a lot of money. And it’s not only about cost savings, but also about agility: flexible software makes a business more agile and resilient to change and shortens time-to-market.



Venom, or the Return of the Virtualized Living Dead

May 21, 2015 by Matthias Reinwarth

The more elderly amongst us might remember a family of portable, magnetic-disk-based storage media with typical capacities ranging from 320 KB to 1.44 MB: the floppy disk. Introduced in the early 1970s, floppy disks went into decline in the late 1990s, and today’s generation of digital natives has most probably never seen this type of media in the wild.

Would you have thought it possible in 2015 that your virtual machines, your VM environment, your network and thus potentially your complete IT infrastructure might be threatened by a vulnerable floppy disk controller? Or even worse: by a virtualized floppy disk controller? No? Or that the VM you are running at your trusted virtualization provider might have been exposed for the last 11 years to attack by the admin of another VM running on the same infrastructure?

But this is exactly what was uncovered this week with the publication of a vulnerability called Venom, CVE-2015-3456 (Venom is actually an acronym for “Virtualized Environment Neglected Operations Manipulation”). The vulnerability has been identified, diligently documented, and explained by Jason Geffner of CrowdStrike.

Affected virtualization platforms include Xen, VirtualBox and QEMU, but it is the original open source QEMU virtual floppy disk controller code – re-used in several virtualization environments – that has been identified as the origin of the vulnerability.

As a floppy disk controller is still typically included in a VM configuration by default and the issue lies within the hypervisor code, almost any installation of the identified platforms is expected to be affected, no matter which underlying host operating system has been chosen. Although no exploits had been documented prior to the publication, this should be expected to change soon.

The immediately required steps are obvious:

  • If you are hosting a virtualization platform for yourself or your organization, make sure that you’re running a version that is not affected, or otherwise apply the most recent patches. A patch for your host OS and virtualization platform should already be available. And do it now!
  • In case you are running one or more virtual machines at providers using one of the affected platforms, make sure that appropriate measures have been taken for mitigating this vulnerability. And do it now!

More importantly, this vulnerability again puts a spotlight on the reuse of open source software within other products, especially commercial products or those widely used in commercial environments. Very much like the Heartbleed bug or Shellshock, this vulnerability once more proves that simply relying on the assumed quality of open source code is not appropriate. This vulnerability has been out in the wild for more than 11 years now.

Open source software comes with the great opportunity of allowing code inspection and verification. But just because code is open does not mean that code is secure unless somebody actually takes a look (or more).

Improving application and code security has to be on the agenda right now. This is true for both commercial and open source software. Appropriate code analysis tools and services for achieving this are available. Intelligent mechanisms for static and dynamic code vulnerability analysis have to be integrated effectively into any relevant software development cycle. This is not a trending topic, but it should be. The responsibility for achieving this is surely a commercial matter, but it is also a political one and a topic that has to be discussed in the various OSS communities. Venom might not be as disruptive as Heartbleed, but the next Heartbleed is out there, and we should try to get at least some of these flaws fixed before they are exploited.

And while we’re at it, why not change the default for including floppy disks in new VMs from “yes” to “no”, just for a start…
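
For readers who want to check their own environments, here is a minimal sketch – an illustration under assumptions, not a remediation tool – that lists libvirt/QEMU domains which still define a virtual floppy disk or floppy controller. It assumes the libvirt Python bindings are installed and that a local qemu:///system connection is permitted; note that on unpatched hypervisors the vulnerable FDC code may be reachable even without a configured floppy drive, so applying the patches remains the actual fix:

    # Sketch: flag libvirt/QEMU domains that still define a virtual floppy device.
    # Assumes the libvirt Python bindings and local access to qemu:///system.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        cfg = ET.fromstring(dom.XMLDesc(0))
        floppy_disks = cfg.findall("./devices/disk[@device='floppy']")
        floppy_ctrls = cfg.findall("./devices/controller[@type='fdc']")
        if floppy_disks or floppy_ctrls:
            print(dom.name() + ": virtual floppy device or controller present - review this configuration")
    conn.close()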



100%, 80% or 0% security? Make the right choice!

May 19, 2015 by Martin Kuppinger

Recently, I have had a number of conversations with end-user organizations, covering a variety of Information Security topics but all with the same theme: there is a need for certain security approaches, such as strong authentication on mobile devices, secure information sharing, etc. But the project gets stopped due to security concerns: the strong authentication approach is not as secure as the one currently implemented for desktop systems; some information would need to be stored in the cloud; etc.

That’s right, IT Security people stopped Information Security projects due to security concerns.

The result: There still is 0% security, because nothing has been done yet.

There is the argument that insecure is insecure: either something is perfectly secure or it is insecure. However, following that logic, everything is insecure. There are always ways to break security if you just invest sufficient criminal energy.

It is time to move away from our traditional black-and-white approach to security. It is not about being secure or insecure, but, rather, about risk mitigation. Does a technology help in mitigating risk? Is it the best way to achieve that target? Is it a good economic (or mandatory) approach?

When thinking in terms of risk, 80% security is obviously better than 0% security. 100% might seem even better, but in practice it is worse, because it is costly, cumbersome to use, and ultimately unattainable.

It is time to stop IT security people from inhibiting improvements in security and risk mitigation by setting unrealistic security baselines. Start thinking in terms of risk. Then, 80% security now and at a fair cost is commonly better than 0% now or 100% at some point in the future.

Again: there never will be 100% security. We might achieve 98% or 99% (depending on the scale we use), but cost grows exponentially – it tends towards infinity as security approaches 100%.
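
One way to picture this: take any cost curve C(s) over the security level s that grows without bound as s approaches 1 (i.e. 100%). A purely illustrative choice – not a measured cost model – is

    C(s) = \frac{c_0}{1 - s}, \qquad \lim_{s \to 1^{-}} C(s) = \infty

where c_0 is a scaling constant. Whatever the exact shape of the real curve, the last few percent are the expensive ones.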



Telcos: Making Use of Consumer Identities

May 12, 2015 by David Goodman

For years telcos have been sitting on a wealth of user data. Where market penetration of connected devices, from smartphones to tablets, has reached saturation point, telcos have developed a billing relationship and a profile for most of the population, including family groups and businesses. Anyone under the age of 30 has grown up in a mobile, connected world with a service provider made familiar through blanket public exposure via media advertising and sports sponsorship – the telco’s presence is rarely far away. This degree of familiarity could be expected to create a high level of trust in telcos, which in turn ought to provide plentiful opportunities for offering targeted and relevant value-added services, leading to business growth away from traditional revenue sources. To date, this has rarely been the case. However, the role of telcos is changing, due principally to the commoditization of services and the arrival of the over-the-top (OTT) players, who are able to offer VoIP services as well as a myriad of applications that today may only require the telco for Internet connectivity. As a result, more attention is now being paid to boosting customer loyalty, reducing churn and targeting new services more effectively than in the past through experience-based marketing.

The operator’s data sources are as diverse as billing and payments, the CRM database, core network sessions, self-care applications, text messages and trouble tickets. This raw information can then be translated with good analytics into operational optimization and enrichment, both for the individual user as well as the network as a whole. The most transparent business opportunities are driven by insights based on user behaviour which when connected to business processes can drive actions. When automated and real-time, decision-making becomes quicker and more efficient.

Real-time data from network events and elsewhere can be analysed to assess symptoms and causes of issues, which when shared with a customer-facing team can dramatically reduce the time required to resolve calls and to repair faults. Real-time analytics of network traffic can also be used to predict and prevent cell congestion when there is a high concentration of users at, say, a football match or a concert. Likewise telcos can track the movement of vehicles and offer a premium service for avoiding traffic snarl-ups.

In some cases, the societal benefits are tangible as in the case when Telefonica used mobile data to measure the spread of a swine flu epidemic in Mexico, enabling the government to reduce virus propagation by 10%. Or when, after a massive earthquake in Mexico, Telefonica researchers captured mobile data records that, once anonymized and aggregated, allowed visualizations of the density of calls in the different parts of the city to be built, immediately depicting the areas most affected by the earthquake.

However, users are becoming increasingly aware of what data all companies store about them, and the telcos are no exception. It is only a matter of time before telcos are required to seek customer consent before selling their data insights to third parties.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Risk and Governance in Analytics

May 12, 2015 by Mike Small

There is now an enormous quantity of data being generated in a wide variety of forms. However, this data, in itself, has little meaning or value; it needs interpretation to make it useful. Analytics are the tools, techniques and technologies that can be used to turn this data into valuable information. These analytics are now being widely adopted by organizations to improve their performance. But what are the security and governance aspects of the use of these tools?

Take, for example, Dunnhumby, which was created in 1989 by a husband and wife team to help businesses better understand their customers by being 'voyeurs of the shopping basket'. Within a few years, they were working with Tesco to develop the Clubcard loyalty program. The insights from this help Tesco stock the right products, optimize prices, run relevant promotions and communicate personalized offers to customers across all contact channels.

However, another side to this kind of analysis was described in the New York Times article “How Companies Learn Your Secrets”. According to this article, a statistician working for the US retailer Target figured out how to identify customers in the second trimester of pregnancy based on buying habits and other customer data. The targeted advertising based on this led to an angry father complaining to a store manager about advertising for baby clothes and cribs being sent to his daughter, who was still in high school. It turned out that the analytics had worked out that she was in fact pregnant, but she had not told her father.

These examples based on loyalty cards illustrate the value of data analytics, but the problem is now even more difficult, because the amount of data being generated through smart devices and apps vastly exceeds that from the occasional use of a loyalty card.

So where is the line between improving service to customers and invading their privacy? At what point does the aggregation and analysis of data become a threat rather than a benefit? These are difficult questions to answer, and regulations and the law provide little help. For example, when a customer in the UK accepts a loyalty card, they accept the terms and conditions. These will almost certainly include an agreement that the card provider can use the data collected through its use in a wide variety of ways. Most people do not read the small print – they simply want the loyalty rewards. Those who do read the small print are unlikely to understand the full implications of what they are agreeing to. Yet under the data protection laws this agreement is considered to be “informed consent”. So is this a fair bargain? Judging by the take-up of loyalty cards in the UK, for most people it is.

So, from the point of view of an organization that wants to get closer to its customers, to provide better products and to become more competitive, data analytics are a powerful tool. According to Erik Brynjolfsson, Professor at the MIT Sloan School of Management, “Companies with ‘data driven decision making’ actually show higher performance”. Working with Lorin Hitt and Heekyung Kim, Professor Brynjolfsson analyzed 179 large publicly-traded firms and found that the ones that adopted this method are about 5% more productive and profitable than their competitors. Furthermore, the study found a relationship between this method and other performance measures such as asset utilization, return on equity and market value.

But what are the risks to the organization in using these forms of analytics? Firstly, it is important to be sure of the accuracy of the data.

Can you be sure of the source of data that originates from outside your organization and outside your control? Many consumers take steps to cloak their identity by using multiple personas, and the Internet of Things may provide a rich source of data but without guarantees regarding its provenance or accuracy. Even if you are sure of the data, what about the conclusions drawn from the analysis?

Can the analytics process provide an explanation, in terms you can understand, of why it has reached its conclusions? If not, be careful before you bet the farm on the results.

Are you sure that you have permission to use the data at all and in that way in particular? In the EU there are many rules regarding the privacy of personal information. An individual gives data to a third party (known as the data controller) for a specific purpose. The data controller is required to only hold the minimum data and to only process it for the agreed purpose.

The decision to use analytics is one that should involve the board of directors. They should set the business objectives for its use, define the policies for its governance, and define their appetite for the risks relating to its use.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Managing the relationships for the new ABC: Agile Business, Connected

May 12, 2015 by Martin Kuppinger

Over the past years, we have talked a lot about the Computing Troika of Cloud Computing, Mobile Computing, and Social Computing. We coined the term Identity Explosion, depicting the exponential growth in the number of identities organizations have to deal with. We introduced the need for a new ABC: Agile Business, Connected. While agility is a key business requirement, connected organizations are a consequence of both the digital transformation of business and of mobility and the IoT.

As a consequence, this rapid evolution means that we also have to transform our understanding of identities and access. We still see a lot of IAM projects focusing on employees. However, when we look at human identities, it is about employees, business partners, customers and consumers, leads, prospects, etc.

 
Fig. 1: People, organizations, devices, and things are becoming connected – organizations will have to deal with more identities and relations than ever before.

What is more, human identities are becoming only a fraction of the identities we have to deal with. People use devices, which communicate with backend services. Things are becoming increasingly connected. Everything and everyone – whether a human, a device, a service, or a thing – has its own identity.

Relationships can become quite complex. A device might be used by multiple persons. A vehicle is not only connected to the driver or the manufacturer, but to many other parties such as insurance companies, leasing companies, the police, dealers, garages, passengers, other vehicles, etc. Not to mention the fact that the vehicle itself consists of many things that frequently interact.
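
As a purely illustrative sketch – the entity types and relation names are assumptions, not a schema proposed here – such identities and their relations can be modelled as a typed graph:

    # Sketch: identities (people, devices, things, services, organizations)
    # and their relations modelled as a typed graph. Names are illustrative only.
    from collections import defaultdict

    class IdentityGraph:
        def __init__(self):
            self.entities = {}                 # entity id -> entity type
            self.relations = defaultdict(set)  # (from id, relation) -> set of to ids

        def add_entity(self, entity_id, entity_type):
            self.entities[entity_id] = entity_type

        def relate(self, from_id, relation, to_id):
            self.relations[(from_id, relation)].add(to_id)

        def related(self, from_id, relation):
            return self.relations[(from_id, relation)]

    g = IdentityGraph()
    g.add_entity("alice", "person")
    g.add_entity("car-123", "thing")
    g.add_entity("insurer-x", "organization")
    g.relate("alice", "drives", "car-123")
    g.relate("insurer-x", "insures", "car-123")
    g.relate("car-123", "reports_telemetry_to", "insurer-x")
    print(g.related("alice", "drives"))  # {'car-123'}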

Managing access to information requires new thinking about identities and access. Only that will enable us to manage and restrict access to information as needed. Simply put: Identity and Access Management is becoming bigger than ever before, and it is one of the essential foundations for making the IoT and the digital transformation of businesses work.

 
Fig. 2: APIs will increasingly connect everything and everyone – it becomes essential to understand the identity context in which APIs are used.

In this context, APIs (Application Programming Interfaces) play a vital role. While I don’t like that term – it is far too technical – it is well established in the IT community. APIs – the communication interfaces of services, apps (on devices) and what we might call thinglets (on things) – are playing the main role in this new connected world. Humans interact with some services directly, via a browser. They use the UI of their devices to access apps. And they might even actively interact with things, even though these commonly act autonomously.

But the communication then happens between apps, devices, and services, using these APIs. For managing access to information via services, devices and things, we need in particular a good understanding of the relationships between them and the people and organizations involved. Without that, we will fail at managing information security in the new ABC.
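
What evaluating that identity context at an API could look like is sketched below – the claim names and the policy are illustrative assumptions only, not a description of any existing API gateway:

    # Sketch: an API-level check that considers the full identity context of a call -
    # the user, the device (app) making it, and the thing the request concerns.
    def user_related_things(user: str) -> set:
        # Placeholder lookup against a relationship store (see the graph sketch above).
        return {"car-123"} if user == "alice" else set()

    def authorize_api_call(claims: dict, operation: str) -> bool:
        user = claims.get("sub")           # human identity
        device = claims.get("device_id")   # device/app identity
        thing = claims.get("acting_for")   # thing the call concerns, e.g. a vehicle
        if not user or not device:
            return False
        # Example policy: telemetry may only be read for things related to the user.
        if operation == "read_telemetry":
            return thing in user_related_things(user)
        return False

    print(authorize_api_call(
        {"sub": "alice", "device_id": "phone-1", "acting_for": "car-123"},
        "read_telemetry"))  # True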

Understanding and managing the relations of a massive number of connected people, things, devices, and services is today’s challenge for succeeding with the new ABC: Agile Business, Connected.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



European Identity & Cloud Awards 2015

May 07, 2015 by Jennifer Haas

The European Identity & Cloud Awards 2015 were presented last night by KuppingerCole at the 9th European Identity & Cloud Conference (EIC). These awards honor outstanding projects and initiatives in Identity & Access Management (IAM), Governance, Risk Management and Compliance (GRC), as well as Cloud Security.

Numerous projects have been nominated by vendors and end-user companies during the last 12 months. Winners have been chosen by KuppingerCole analysts from among the most outstanding examples of applications and ideas in the areas of IAM, GRC, and Cloud security. Additionally, two Special Awards have been given.

In the category Best Innovation / New Standard, the award was granted to AllJoyn. The system provides a standard that is essential for the IoT (Internet of Things). This allows devices to advertise and share their abilities with other devices they can connect to. 

The European Identity & Cloud Award for the category Best Innovation in eGovernment / eCitizen went to SkIDentity, which provides a bridge between various strong authentication mechanisms, in particular national eID cards from various European countries, and backend systems. SkIDentity runs as a cloud-based service which allows user authentication with various authenticators to a single service. 

Another award was given to DNA Ltd in the category Best B2B Identity Project. DNA Ltd. is Finland’s largest cable operator with 3 million customers. Additionally, the company has 80,000 corporate customers that need, for example, to view and manage their services within active contracts. With the new “Minun Yritykseni” (“My Company”) service, customers can easily manage all their contracts.

The award for Best Approach on Improving Governance and Mitigating Risks in 2015 went to the University Hospital of Nantes. They have implemented a comprehensive IAM solution supporting a variety of use cases. At the core is SSO functionality, enabling strong authentication based on a multi-services card that also supports requirements such as rapid, contactless sign-on. Furthermore, as part of the project a comprehensive user self-service for entitlements, but also other types of assets, has been implemented.

In the category Best Access Governance / Intelligence Project, the award was granted to Nord/LB for implementing an IAM project focusing on realizing a modern, workflow-structured and role-based IAM system that mitigates existing risks and improves governance capabilities. An important element within its design and deployment was a clear segregation between management of identities and access and the technical operation.

dm-drogerie markt received the award for Best IAM Project. This project stands for an IAM implementation focused on an environment well beyond internal IT and the employees. While it supports the IAM and IAG requirements within the headquarters and central services, it also supports the more than 3,000 branch offices with their 110,000 user accounts, as well as access for external suppliers, another group of 3,000 users.

PostNL was honored with the Best Cloud Security Award for its Cloud IAM project, which leverages a fully EU-based Cloud Identity Service to support the requirements of user single sign-on, user management, and governance. A particular focus of the implementation is on providing access governance across cloud services as well.

Finally, MEECO received a Special Award for their wise implementation of a Life Management Platform. It allows user-controlled management and sharing of information and gives users full control of their personal data. The second Special Award was given to Dialog Axiata & Sri Lanka Telecom Mobitel for the project Mobile Connect. The combined Dialog-Mobitel Mobile Connect solution is a fast, secure log-in system for mobile authentication that enables people to access their online accounts with just one click.

The European Identity & Cloud Awards honor projects that promote the awareness for and business value of professional Identity Management and Cloud Security.



Redesigning access controls for IAM deployments?

Apr 20, 2015 by Martin Kuppinger

A few weeks ago I read an article in Network World, entitled “A common theme in identity and access management failure: lack of Active Directory optimization”. Essentially, it is about the fact that Microsoft Active Directory (AD) commonly needs some redesign when starting an IAM (Identity and Access Management) project. Maybe yes, and maybe no.

In fact, it is common that immature, chaotic, or even “too mature” (e.g. many years of administrative work leaving their traces, with no one cleaning up) access control approaches in target systems pose a challenge when connecting them to an IAM (and Governance) system. However, there are two points to consider:

  1. This is not restricted to AD, it applies to any target system.
  2. It must not be allowed to lead to failures in your IAM deployment.

I have frequently seen this issue with SAP environments, unless they already have undergone a restructuring, e.g. when implementing SAP Access Control (formerly SAP GRC Access Control). In fact, the more complex the target system and the older it is, the more likely it is that the structure of access controls, be they roles or groups or whatever, is anything but perfect.

There is no doubt that redesign of the security model is a must in such situations. The question is just about “when” this should happen (as Jonathan Sander, the author of the article mentioned above, also states). In fact, if we waited for all these security models to be redesigned, we would probably never see an IAM program succeed. Some of these redesign projects take years – and some (think of mainframe environments) will probably never take place. Redesigning the security model of an AD or SAP environment is a quite complex project in itself, despite all the tools supporting it.

Thus, organizations typically will have to decide about the order of projects. Should they push their IAM initiative or do the groundwork first? There is no single correct answer to that question. Frequently, IAM projects are so much under pressure that they have to run first.

However, this must not end in the nightmare of a failing project. The main success factor for dealing with these situations is having a well thought-out interface between the target systems and the IAM infrastructure for exposing entitlements from the target systems to IAM. At the IAM level, there must be a concept of roles (or at least a well thought-out concept for grouping entitlements). And there must be a clear definition of what is exposed from target systems to the IAM system. That is quite easy for well-structured target systems, where, for instance, only global groups from AD or business roles from SAP might be exposed, becoming the smallest unit of entitlements within IAM. These might appear as “system roles” or “system-level roles” (or whatever term you choose) in IAM.

Without that ideal security model in the target systems, there might not be that single level of entitlements that will become exposed to the IAM environment (and I’m talking about requests, not about the detailed analysis as part of Entitlement & Access Governance which might include lower levels of entitlements in the target systems). There are two ways to solve that issue:

  1. Just define these entitlements, i.e. global groups, SAP business roles, etc. first as an additional element in the target system, map them to IAM, and then start the redesign of the underlying infrastructure later on.
  2. Or accept the current structure and invest more in mappings of system roles (or whatever term you use) to the higher levels of entitlements such as IT-functional roles and business roles (not to mix up with SAP business roles) in your IAM environment.

Both approaches work and, from my experience, if you understand the challenge and put your focus on the interface, you will quickly be able to identify the best way to handle the challenge of executing your IAM program while still having to redesign the security model of target systems later on. In both cases, you will need a good understanding of the IAM-level security model (roles etc.) and you need to enforce this model rigidly – no exceptions here.
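
To illustrate the interface idea – entitlements from target systems exposed to IAM as “system roles” and mapped to higher-level business roles – here is a minimal sketch; the role names and data structures are assumptions for the example only:

    # Sketch: target-system entitlements exposed as "system roles" and mapped
    # to business roles in the IAM layer. All names are illustrative.
    SYSTEM_ROLES = {
        # system role     -> (target system, native entitlement)
        "AD_Finance_RW":     ("ActiveDirectory", "CN=Finance-RW,OU=Groups,DC=example,DC=com"),
        "SAP_AP_Clerk":      ("SAP", "Z_BR_AP_CLERK"),
    }

    BUSINESS_ROLES = {
        # business role           -> system roles it resolves to
        "Accounts Payable Clerk":   ["AD_Finance_RW", "SAP_AP_Clerk"],
    }

    def resolve_request(business_role: str):
        """Resolve an access request for a business role into per-system provisioning actions."""
        actions = []
        for system_role in BUSINESS_ROLES[business_role]:
            system, entitlement = SYSTEM_ROLES[system_role]
            actions.append((system, entitlement))
        return actions

    print(resolve_request("Accounts Payable Clerk"))

As long as requests and mappings always go through this layer, the native structures behind each system role can be redesigned later without breaking the IAM program.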



The New Meaning of “Hacking your TV”

Apr 13, 2015 by Alexei Balaganski

After a long list of high-profile security breaches that culminated in the widely publicized Sony Pictures Entertainment hack last November, everyone has gradually become used to this type of news. If anything, such news only confirms what security experts have known for years: the struggle between hackers and corporate security teams is fundamentally asymmetrical. Regardless of its size and budget, no company is safe from such attacks, simply because a security team has to cover all possible attack vectors, while a hacker needs just a single overlooked one.

Another important factor is the ongoing trend in the IT industry of rapidly growing interconnectivity and gradual erosion of network perimeters caused by adoption of cloud and mobile services, with trends such as “Industry 4.0”, i.e. connected manufacturing, and IoT with billions of connected devices adding to this erosion. All this makes protecting sensitive corporate data increasingly difficult and this is why the focus of information security is now shifting from protecting the perimeter towards real-time security intelligence and early detection of insider threats within corporate networks. Firewalls still play a useful role in enterprise security infrastructures, but, to put it bluntly, the perimeter is dead.

With that in mind, the latest news regarding the hack of the French television network TV5Monde last Wednesday looks even more remarkable. Apparently, not only were their website and social media accounts taken over by hackers calling themselves “Cybercaliphate” and claiming allegiance to the Islamic State; they also managed to disrupt the network’s TV broadcasting equipment for several hours. Political implications of the hack aside, the first thing in the article linked above that attracted my attention was the statement of the network’s director Yves Bigot: “At the moment, we’re trying to analyse what happened: how this very powerful cyber-attack could happen when we have extremely powerful and certified firewalls.”

Now, we all know that analyzing and attributing a cyber-attack is a very difficult and time-consuming process, so it’s still too early to judge whether the attack was indeed carried out by a group of uneducated jihadists from a war-torn Middle-Eastern region or was the work of a hired professional team, but one thing that’s immediately clear is that it has nothing to do with firewalls. The technical details of the attack are still quite sparse, but according to this French-language publication, the hackers utilized a piece of malware written in Visual Basic to carry out their attack. In fact, it’s a variation of a known malware family that is detected by many antivirus products, and its most probable delivery vector was an unpatched Java vulnerability or even an infected email message. Surely, the hackers needed quite a long time to prepare their attack, but they are obviously not highly skilled technical specialists and were not even good enough at hiding their tracks.

In fact, it would be completely safe to say that the only people to blame for the catastrophic results of the hack are TV5Monde’s own employees. After deploying their “extremely powerful firewalls” they seemingly didn’t pay much attention to protecting their networks from insider threats. According to this report, they went so far as to put sticky notes with passwords on walls and expose them on live TV!

We can also assume with certain confidence that their other security practices were equally lax. For example, the fact that all their social media accounts were compromised simultaneously probably indicates that the same credentials were used for all of them (or at least that the segregation of duties principle isn’t a part of their security strategy). And, of course, complete disruption of their TV service is a clear indication that their broadcasting infrastructure simply wasn’t properly isolated from their corporate network.

We will, of course, be waiting for additional details and new developments to be published, but it is already clear that the Sony hack apparently wasn’t as educational for TV5Monde as security experts had probably hoped. Well, some people just have to learn from their own mistakes. You, however, don’t have to.

The first thing every organization’s security team has to realize is that the days of perimeter security are over. The number of possible attack vectors on corporate infrastructure and data has increased dramatically, and the most critical ones (like compromised privileged accounts) are actually working from within the network. Combined with much stricter compliance regulations, this means that not having a solid information security strategy can have dramatic financial and legal consequences.

For a quick overview of the top 10 security mistakes with potentially grave consequences, I recommend having a look at the appropriately titled KuppingerCole Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid, published just a few days ago. And of course, you’ll find much more information on our website in the form of research documents, blog posts and webinar recordings.



AWS Announces Machine Learning Service

Apr 10, 2015 by Mike Small

AWS has recently announced the Amazon Machine Learning service – what is this and what does it mean for customers? 

Organizations now hold enormous quantities of data, and more data in a wide variety of forms is rapidly being generated. Research has shown that organizations that base their decision making and processes on data are more successful than those that do not. However, interpretation and analysis are needed to transform this data into useful information. Data analysis and interpretation are not easy, and there are many tools on the market to help transform raw data into valuable information.

The challenge most organizations face is that special skills are needed to analyze their data, and these skills are not widely available. In addition, to make use of the data, the analysis and its results need to be tightly integrated with the existing data sources and applications. However, in general, software developers do not have the required data analysis skills. AWS believes that its newly launched Amazon Machine Learning service will overcome these two challenges.

AWS leveraged the data analysis tools and techniques that were developed for the Amazon.com retail organization when designing and building the Amazon Machine Learning service. These are the underlying tools that try to anticipate the interests of buyers so as to direct them to the item they want and hence make a purchase more likely. Given the success of Amazon.com, these tools and techniques ought to be very useful to organizations wanting to get closer to their retail customers.

In addition, according to AWS, the service can be used without expertise in data analytics. The service provides features that software developers can use to build a model based on imperfect data, to validate that the predictions from the model are accurate, and then to deploy that model in a way that can easily be integrated into existing applications without changing them. AWS shared an anecdotal example in which its service was able to create a model in 20 minutes with the same accuracy as a model that took two software developers a month to create manually.

As you would expect, the new service is tightly integrated with AWS data sources such as Amazon S3, Amazon Redshift and Amazon RDS. It can be invoked to provide predictions in real time, for example to enable an application to detect fraudulent transactions as they come in.
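
As a rough sketch of what such a real-time call could look like with the boto3 “machinelearning” client – the model ID and record fields are placeholders, and it assumes a trained model with a real-time endpoint already exists:

    # Sketch: real-time prediction against Amazon Machine Learning via boto3.
    # Model ID and record fields are placeholders; training, endpoint creation
    # and error handling are omitted.
    import boto3

    ml = boto3.client("machinelearning", region_name="us-east-1")

    model_id = "ml-EXAMPLE_MODEL_ID"  # placeholder
    endpoint = ml.get_ml_model(MLModelId=model_id)["EndpointInfo"]["EndpointUrl"]

    response = ml.predict(
        MLModelId=model_id,
        Record={"amount": "250.00", "country": "DE", "channel": "web"},  # illustrative features
        PredictEndpoint=endpoint,
    )
    print(response["Prediction"])  # e.g. predicted label/score for a fraud model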

However, there are security and governance aspects to the use of this kind of tool. The recent KuppingerCole newsletter on Data Analytics discussed the problem of where to draw the line between improving service to customers and invading their privacy. At what point does the aggregation and analysis of data become a threat rather than a benefit? These are difficult questions to answer, and regulations and the law provide little help.

However, from the point of view of an organization that wants to get closer to its customers, to provide better products, and to become more competitive, data analytics are a powerful tool. In the past, the limiting factor has been the skills involved in the analysis, and machine learning is a way to overcome this limitation.

Using this form of analytics does have some risks. Firstly, it is important to be sure of the accuracy of the data; this is especially true if the data comes from a source outside your control. Secondly, can you understand the model and the conclusions from the analytics process? An explanation would be nice – if there is none, be careful before you bet the farm on the results. Correlations and associations are not cause and effect, so make sure the results are valid. Finally, are you sure that you have permission to use the data at all, and in that way in particular? Privacy rules can limit the use you can make of personal data.

Overall, Amazon Machine Learning provides an attractive solution to enable an organization to become more data-driven. However, it is important to set the business objectives for the use of this approach, to define the policies for its governance, and to define the appetite for risks relating to its use.



