
KuppingerCole Blog

Safety vs. security – or both?

Jul 07, 2015 by Martin Kuppinger

When it comes to OT (Operational Technology) security in all its facets, security people from the OT world and IT security professionals can quickly end up in strong disagreement. Depending on the language they speak, it might even appear that they are divided by a common language. In English, the discussion quickly turns into a perceived dichotomy between security and safety; in German, it would be “Sicherheit vs. Sicherheit”.

The reason is that OT thinking traditionally – and for good reason – is about the safety of humans, machines, and so on. Other major requirements include availability and reliability. If the assembly line stops, this quickly becomes expensive. If reliability issues cause faulty products, that too can cost vast amounts of money.

IT security thinking, on the other hand, is about security – protecting systems and information and enforcing the CIA triad of confidentiality, integrity, and availability. Notably, even the shared requirement of availability is perceived slightly differently: IT is primarily interested in not losing data, while OT wants systems to be always up. Yes, IT also frequently has requirements such as 99.9% availability. However, sometimes that requirement is unfounded. While it really costs money if your assembly line is out of service, the impact of HR being unavailable for a business day is fairly low.

While IT is keen on patching systems to fix known security issues, OT tends to focus on enforcing reliability and, in consequence, availability and safety. From that perspective, updates, patches, or even new hardware and software versions are a risk. That is why OT frequently relies on rather old hardware and software. Furthermore, depending on the type of production, maintenance windows might be rare. In areas with continuous production, there is no way of quickly patching and “rebooting”.

Unfortunately, with smart manufacturing and the increasing integration of OT environments with IT, the risk exposure is changing. Furthermore, OT environments have long since become attack targets. Information about such systems is widely available, for instance via the Shodan search engine. The problem: the longer software remains unpatched, the bigger the risk. Simply put: the former concept of focusing purely on safety (plus reliability and availability) no longer works in connected OT. On the other hand, the IT approach does not work either. Many of us have experienced problems and downtime due to erroneous patches.

There is no simple answer, other than that OT and IT must work hand in hand. Cynically put, it is not about “death by patch vs. death by attacker”, but about avoiding death at all. From my perspective, the CISO must be responsible for both OT and IT – split responsibilities, ignorance, and stubbornness do not help us mitigate risks. Layered security, together with virtualizing existing OT and exposing it as standardized devices with standardized interfaces, appears to be a valid approach, potentially leading the way towards SDOT (Software-defined OT). Aside from that, providers of OT must rethink their approaches, enabling updates even within small maintenance windows or at runtime, while maintaining stable and reliable environments. Not easy to do, but a prerequisite when moving towards smart manufacturing or Industry 4.0.

One thing to me is clear: Both parties can learn from each other – to the benefit of all.



Security and Operational Technology / Smart Manufacturing

Jul 07, 2015 by Mike Small

Industry 4.0 is the German government’s strategy to promote the computerization of the manufacturing industry. This strategy foresees that industrial production in the future will be based on highly flexible mass production processes that allow rich customization of products. This future will also include the extensive integration of customers and business partners to provide business and value-added processes. It will link production with high-quality services to create so-called “hybrid products”.

At the same time, in the US, the Smart Manufacturing Leadership Coalition is working on its vision for “Smart Manufacturing”. In 2013, the UK's Institute for Advanced Manufacturing, which is part of the University of Nottingham, received a grant of £4.6M for a study on Technologies for Future Smart Factories.

This vision depends upon the manufacturing machinery and tools containing embedded computer systems that will communicate with each other inside the enterprise, and with partners and suppliers across the internet. This computerization and communication will enable optimization within the organizations, as well as improving the complete value-adding chain in near real time through the use of intelligent monitoring and autonomous decision-making processes. This is expected to lead to the development of completely new business models as well as exploiting the considerable potential for optimization in the fields of production and logistics.

However there are risks, and organizations adopting this technology need to be aware of and manage these risks. Compromising the manufacturing processes could have far reaching consequences. These consequences include the creation of flawed or dangerous end products as well as disruption of the supply chain. Even when manufacturing processes based on computerized machinery are physically isolated they can still be compromised through maladministration, inappropriate changes and infected media. Connecting these machines to the internet will only increase the potential threats and the risks involved.

Here are some key points to securely exploiting this vision:

  • Take a Holistic Approach: the need for security is no longer confined to the IT systems and business systems of record; it needs to extend to cover everywhere that data is created, transmitted or exploited. Take a holistic approach and avoid creating another silo.
  • Take a Risk-based Approach: the security technology and controls to be put in place should be determined by balancing risk against reward, based on the business requirements, the assets at risk, the needs for compliance, and the organizational risk appetite. This approach should seek to remove identifiable vulnerabilities and put appropriate controls in place to manage the risks.
  • Trusted Devices: This is the most immediate concern since many devices that are being deployed today are likely to be in use, and hence at risk, for long periods into the future. These devices must be designed and manufactured to be trustworthy. They need an appropriate level of physical protection as well as logical protection against illicit access and administration. It is highly likely that these devices will become a target for cyber criminals who will seek to exploit any weaknesses through malware. Make sure that they contain protection that can be updated to accommodate evolving threats.
  • Trusted Data: the organization needs to be able to trust the data coming from these devices. It must be possible to confirm which device the data originated from, and that the data has not been tampered with or intercepted. Low-power security technology and standards already exist for mobile communications and banking, and these should be adopted or adapted to secure the devices (a minimal integrity-protection sketch follows this list).
  • Identity and Access Management: being able to trust the devices and the data they provide means being able to trust their identities and to control access. There are a number of technical challenges in this area; solutions have been developed for some specific kinds of device, but there is no general panacea. Hence it is likely that more device-specific solutions will emerge, and this will add to the general complexity of the management challenges.
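
One simple way to make device data both attributable to its source and tamper-evident is a keyed message authentication code. The Python sketch below is illustrative only: the device ID, key handling and message format are invented for the example, and a real deployment would rely on hardware-protected keys and the secure mobile/banking standards mentioned above rather than a secret embedded in source code.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret; in practice this would be provisioned at
# manufacturing time and held in a hardware-protected key store, not in code.
DEVICE_ID = "press-line-4711"
DEVICE_KEY = b"factory-provisioned-demo-key"


def sign_reading(reading: dict) -> dict:
    """Device side: attach a keyed MAC so the backend can check origin and integrity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"device_id": DEVICE_ID, "payload": reading, "tag": tag}


def verify_reading(message: dict, key_lookup) -> bool:
    """Backend side: recompute the MAC with the key registered for the claimed device."""
    key = key_lookup(message["device_id"])
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])


signed = sign_reading({"temperature_c": 74.2, "timestamp": "2015-07-07T10:15:00Z"})
print(verify_reading(signed, lambda device_id: DEVICE_KEY))  # True for an untampered message
```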

More information on this subject can be found in Advisory Note: Security and the Internet of Everything and Everyone - 71152 - KuppingerCole



OT, ICS, SCADA – What’s the difference?

Jul 07, 2015 by Graham Williamson

Operational Technology (OT) refers to computing systems that are used to manage industrial operations as opposed to administrative operations. Operational systems include production line management, mining operations control, oil & gas monitoring etc.


Industrial control systems (ICS) form a major segment within the operational technology sector. The term covers systems used to monitor and control industrial processes – whether mine-site conveyor belts, oil refinery cracking towers, power consumption on electricity grids, or alarms from building information systems. ICSs are typically mission-critical applications with a high-availability requirement.

Most ICSs fall into one of two categories: continuous process control systems, typically managed via programmable logic controllers (PLCs), or discrete process control (DPC) systems, which might use a PLC or some other batch process control device.

Industrial control systems are often managed via a Supervisory Control and Data Acquisition (SCADA) system, which provides a graphical user interface for operators to easily observe the status of a system, receive alarms indicating out-of-band operation, and enter system adjustments to manage the process under control.

Supervisory Control and Data Acquisition (SCADA) systems display the process under control and provide access to control functions. A typical configuration is shown in Figure 1.

Figure 1 - Typical SCADA Configuration

The main components are:

  • SCADA display unit, which shows the process under management graphically, with status messages and alarms at the appropriate places on the screen. Operators can typically use the SCADA system to enter controls that modify the operation in real time – for instance, to turn a valve off or to turn a thermostat down.
  • Control unit, which attaches the remote terminal units to the SCADA system. The control unit must pass data to and from the SCADA system in real time with low latency.
  • Remote terminal units (RTUs), which are positioned close to the process being managed or monitored and connect one or more devices (monitors or actuators) to the control unit; a PLC can fulfil this role. RTUs may be in the next room or hundreds of kilometres away (a simple polling sketch follows this list).
  • Communication links, which can be Ethernet for a production system, a WAN link over the Internet or private radio for a distributed operation, or a telemetry link for equipment in a remote area without communications facilities.
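
To make the interplay of these components concrete, here is a minimal, self-contained Python sketch of a control-unit style polling loop. The RTU class, sensor values and alarm threshold are purely illustrative stand-ins; real installations talk to RTUs and PLCs over industrial protocols such as Modbus or DNP3 and operate under hard real-time and availability constraints that this toy ignores.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class RemoteTerminalUnit:
    """Hypothetical RTU exposing one temperature sensor and one valve actuator."""
    name: str
    nominal_temp: float

    def read_temperature(self) -> float:
        # Stand-in for a field-bus read (e.g. via a PLC over Modbus or DNP3).
        return self.nominal_temp + random.uniform(-5.0, 5.0)

    def close_valve(self) -> None:
        print(f"[{self.name}] valve closed by supervisory command")


def poll_rtus(rtus, high_alarm: float = 80.0) -> None:
    """Control-unit style loop: gather readings, flag alarms, push control actions."""
    for rtu in rtus:
        value = rtu.read_temperature()
        status = "ALARM" if value > high_alarm else "ok"
        print(f"{rtu.name}: {value:.1f} C [{status}]")  # what the SCADA display would show
        if status == "ALARM":
            rtu.close_valve()  # the operator's corrective action, automated here for brevity


if __name__ == "__main__":
    field_units = [RemoteTerminalUnit("RTU-1", 75.0), RemoteTerminalUnit("RTU-2", 85.0)]
    for _ in range(3):
        poll_rtus(field_units)
        time.sleep(1)  # real systems poll on much tighter, deterministic cycles
```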

There are some seminal changes happening in the OT world at the moment. Organisations want to leverage their OT assets for business purposes; they want to be agile and able to modify their OT configurations. They want to take advantage of new, cheaper IP sensors and actuators, and to leverage their corporate identity provider service to authenticate operational personnel. It’s an exciting time for operational technology systems.



The need for an "integrated identity" within hybrid cloud infrastructures

Jul 02, 2015 by Matthias Reinwarth

Yes, you might have heard it in many places: "Cloud is the new normal". And this is surely true for many modern organisations, especially start-ups or companies doing all or parts of their native business within the cloud. But for many other organisations this new normal is only one half of normal.

A lot of enterprises currently going through the process of digital transformation are maintaining their own infrastructure on premises and are looking into extending their business into the cloud. This might be done for various reasons – for example, the easier creation of infrastructure that allows rapid scalability, or the ability to replace costly infrastructure that is not mission-critical and therefore does not have to run within the traditional organisational perimeter.

For many organisations, moving completely to the cloud is simply not an option, for good reasons that include protecting intellectual property within the traditional infrastructure or the need to maintain legacy infrastructure that is itself business-critical. For this type of enterprise – typically large, with a long IT history, and often in highly regulated sectors – the future infrastructure paradigm has to be the hybrid cloud, at least for the near to medium term.

Cloud service providers need to offer standardized technological approaches for this type of customer. A seamless, strategic way of extending the existing on-premises infrastructure into the cloud is an important prerequisite. This is true for the underlying network connectivity, and it is especially true for the administration, operation and security aspects of modern IT infrastructures.

For every company that already has a well-defined IAM/IAG infrastructure and the relevant maintenance and governance processes in place, it is essential that Identity Management for and within the cloud is well integrated into the existing processes. Many successful corporate IAM systems build upon the fact that enterprise-internal data silos have been broken up and integrated into an overall Identity and Access Management system. For the maintenance of the newly designed cloud infrastructure, it obviously makes no sense to create a new silo of identity information for the cloud. Maintaining technical and business accounts for cloud usage is, in the end, a traditional Identity Management task. Designing the appropriate entitlement structure and assigning the right access rights to the right people within the cloud, while adhering to best practices such as the least-privilege principle, is, in the end, a traditional Access Management task. Defining, implementing and enforcing appropriate processes to control and govern the access rights assigned to identities across a hybrid infrastructure is, in the end, a traditional access governance and access intelligence task.

Providers of traditional, on-premises IAM infrastructures and cloud service providers alike have to support this class of customer organisations in fulfilling their hybrid security and hence their IAM/IAG needs. CSPs like Amazon Web Services embrace this hybrid market by providing overall strategies for hybrid cloud infrastructures, including a suitable identity, access and security approach. The implementation of a concept for an "integrated identity" across all platforms, be they cloud or on premises, is therefore a fundamental requirement. Leveraging mechanisms like inbound and outbound federation, deploying open standards like SAML 2.0, using the APIs that provide integrative access to the AWS IAM/IAG functionality, and integrating existing policies into the AWS IAM policies implemented as JSON documents are important steps towards this "integrated identity". On the access intelligence and access governance side, AWS CloudTrail can provide detailed logs, down to the level of individual API calls per user, for the existing cloud infrastructure. Such extensive logs can then be evaluated by an existing Access Intelligence or Real-Time Security Intelligence (RTSI) solution, or by deploying AWS analytics mechanisms such as Lambda.
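
As a small illustration of how an existing entitlement model can be carried over into AWS IAM policies, the following sketch creates a least-privilege policy via boto3. The bucket name, policy name and permission set are hypothetical, and the snippet assumes boto3 is installed and administrative credentials are configured; in practice the policy content would be derived from the entitlement structures already governed by the on-premises IAM/IAG processes.

```python
import json

import boto3  # assumes AWS credentials with IAM administration rights are configured

# Hypothetical least-privilege policy for a hybrid workload: the cloud-side
# application may read one S3 bucket and write its own CloudWatch logs, nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-hybrid-data-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "*",
        },
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="hybrid-workload-least-privilege",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
    Description="Least-privilege policy managed by the existing IAM/IAG processes",
)
print(response["Policy"]["Arn"])
```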

It is obvious that these are "only" building blocks for solutions, not a fully designed solution architecture. But they bring us one step closer to the design and implementation of an appropriate solution for each individual enterprise. Covering all relevant aspects of security and IT GRC inside and outside the cloud will be one key challenge for the deployment of cloud infrastructures for this type of organisation.

Hybrid architectures might not be the final target architecture for some organisations, but for the coming years they will form an important deployment scenario for many large organisations. Vendors and implementation partners alike have to make sure that easily deployable, standardised mechanisms are in place to extend an existing on-premises IAM seamlessly into the cloud, providing the required levels of security and governance. And since we are talking about standards and integration: this will have to work just as seamlessly for other, probably upcoming types of architecture as well, e.g. those where the step towards cloud-based IAM systems such as Azure Active Directory has already been taken.



From Hybrid Cloud to Standard IT?

Jun 18, 2015 by Mike Small

I have recently heard from a number of cloud service providers (CSPs) telling me about their support for a “hybrid” cloud. What is the hybrid cloud, and why is it important? What enterprise customers are looking for is a “Standard IT” that would allow them to deploy their applications flexibly wherever it is best. The Hybrid Cloud concept goes some way towards this.

There is still some confusion about the terminology that surrounds cloud computing, so let us go back to basics. The generally accepted definition of cloud terminology is in NIST SP 800-145. According to this, there are three service models and four deployment models. The service models are IaaS, PaaS and SaaS. The four deployment models for cloud computing are: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud. So “Hybrid” relates to the way cloud services are deployed. The NIST definition of the Hybrid Cloud is:

“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”

However, sometimes Hybrid is used to describe a cloud strategy – meaning that the organization using the cloud will use cloud services for some kinds of application but not for others. This is a perfectly reasonable strategy, but not quite in line with the above definition, so I refer to it as a Hybrid Cloud Strategy.

In fact, this leads us to the reality for most enterprises: the cloud is just another way of obtaining some of their IT services. Cloud services may be the ideal solution for development because of the speed with which they can be obtained. They may be good for customer interaction services because of their scalability. They may be the best way to perform data analytics needing the occasional burst of very high-performance computing. Hence, to the enterprise, the cloud becomes another added complexity in an already complex IT environment.

So the CSPs have recognised that, in order to tempt enterprises to use their cloud services, they need to acknowledge this complexity challenge that enterprises face and provide help to solve it. The “Hybrid” cloud that will be attractive to enterprises therefore needs to:

* Enable the customer to easily migrate some parts of their workload and data to a cloud service. This is because there may be some data that is required to remain on premise for compliance or audit reasons.

* Orchestrate the end to end processing which may involve on premise as well as services from other cloud providers.

* Allow the customer to assure the end to end security and compliance for their workload.

When you look at these requirements it becomes clear that standards are going to be a key component to allow this degree of flexibility and interoperability. The standards needed go beyond the support for hypervisors, operating systems, databases and middleware to include the deployment, management and security of workloads in a common way across on-premise and cloud deployments as well as between cloud services from different vendors.

There is no clear winner in the standards yet – although OpenStack has wide support, including from IBM, HP and Rackspace – but one of the challenges is that vendors offer versions of it with their own proprietary additions. Other important vendors – including AWS, Microsoft and VMWare – have their own proprietary offerings that they would like customers to adopt. So the game is not over yet, but the industry should recognize that the real requirement is for a “Standard IT” that can easily be deployed in whatever way is most appropriate at any given time.



Why Cybersecurity and Politics Just Don’t Mix Well

Jun 12, 2015 by Alexei Balaganski

With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.

Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of the macabre and the comedic is so potent that it makes every security expert cringe. Let’s have a look at a few of the most recent examples.

After the notorious hack of Sony Pictures Entertainment last November, supposedly carried out by a group of hackers demanding that the studio not release a comedy movie about a plot to assassinate Kim Jong-Un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was strongly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.

Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known case of a cyber-attack more or less reliably attributed to a state agency so far is Stuxnet, which after several years of research was found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an insider job with no relation to North Korea at all. Fortunately, cool heads in the US military have prevailed, but the thought that next time such an attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…

Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity ‘solutions’ the US and UK governments came up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding government capabilities for spying on their own citizens.

Unfortunately, just like any other technology plan devised by politicians, it won’t merely fail to work; it will have disastrous consequences for society as a whole, including ruining people’s privacy, making every company’s IT infrastructure more vulnerable to hacking attacks (which will exploit the same government-mandated backdoors), and blocking a significant part of academic research, not to mention completely destroying businesses such as security software vendors and cloud service providers. Sadly, even in Germany, the country where privacy is considered an almost sacred right, the government is engaged in similar activities.

Speaking of Germany, the latest, somewhat more lighthearted example of politicians’ inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany’s Federal Office for Information Security came up with a great idea. Citing concerns that mysterious Russian hackers might still be lurking in the network, it was announced that the existing infrastructure, including over 20,000 computers, has to be completely replaced. Leaving aside the obvious question – are the same people who designed the old network really able to come up with a more secure one this time? – one still cannot but wonder whether the millions needed for such an upgrade could be better spent elsewhere. In fact, my first thought after reading the news was of President Erdogan’s new palace in Turkey. Apparently, he just had to move into a new 1,150-room presidential palace simply because the old one was infested with cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.

Still, any security expert cannot help but keep asking more specific questions. Was there an adequate incident and breach response strategy in place? Has there been a training program for user security awareness? Were the most modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget for hiring security experts with adequate qualifications for running such a critical environment?

Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer to most of these questions would be “no”. German government agencies are also known for being quite frugal with regard to IT salaries, so the best experts inevitably go elsewhere.

Another question I cannot help but think about: what if the hackers utilized one of the zero-day exploits that the German intelligence agency BND is known to have purchased for its own covert operations? That would be a perfect example of “karmic justice”.

Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings from past events can also provide a good introduction into relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!



Who will become the Google, Facebook or Apple of Life Management Platforms?

Jun 09, 2015 by Dave Kearns

A Life Management Platform (LMP) allows individuals to access all relevant information from their daily life and manage its lifecycle, in particular data that is sensitive and typically paper-bound today, like bank account information, insurance information, health information, or the key number of their car.

Three years ago, at EIC 2012, one of the major topics was Life Management Platforms (LMPs), described as “a concept which goes well beyond the limited reach of most of today’s Personal Data Stores and Personal Clouds. It will fundamentally affect the way individuals share personal data and thus will greatly influence social networks, CRM (Customer Relationship Management), eGovernment, and many other areas.”

In talking about LMPs, Martin Kuppinger wrote: “Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. A key concept is “informed pull” which allows consuming information from other parties, neither violating the interest of the individual for protecting his information nor the interest of the related party/parties.” (Advisory Note: Life Management Platforms: Control and Privacy for Personal Data - 70608)

It’s taken longer than we thought, but the fundamental principle that a person should have direct control of the information about themselves is finally taking hold. In particular, the work of the User Managed Access (UMA) group through the Kantara Initiative should be noted.

Fueled by the rise of cloud services (especially personal, public cloud services) and the explosive growth in the Internet of Things (IoT) which all leads to the concept we’ve identified as the Internet of Everything and Everyone (IoEE), Life Management Platforms – although not known by that name – are beginning to take their first, hesitant baby steps with everyone from the tech guru to the person in the street. The “platform” tends to be a smart, mobile device with a myriad of applications and services (known as “apps”) but the bottom line is that the data, the information, is, at least nominally, under control of the person using that platform. And the real platform is the cloud-based services, fed and fueled by public, standard Application Programming Interfaces (APIs) which provide the data for that mobile device everyone is using.

Social media, too, has had an effect. Using Facebook login, for example, to access other services, people are learning to look closely at what those services are requesting (“your timeline, list of friends, birthday”) and, especially, what the service won’t do (“the service will not post on your behalf”). There’s still work to be done there, as the conditions are not yet negotiable but must be either accepted or rejected as a whole – more flexible protocols will emerge to cover that. There’s also, of course, the fact that Facebook itself “spies” on your activity. Slowly, grudgingly, that is changing – but we’re not there yet. The next step is for enterprises to begin to provide the necessary tools that will enable the casual user to more completely control their data – and the release of their data to others – while protecting their privacy. Google (via Android), Apple (via iOS) and even Microsoft (through Windows Mobile) are all in a position to become the first mover in this area – but only if they’re ready to fundamentally change their business models or complement them with an alternative approach. Indeed, some have taken tentative steps in that direction, while others seem to be headed in the opposite direction. Google and Facebook (and Microsoft, via Bing) do attempt to monetize your data. Apple tends to tell you what you want, then not allow you to change it.
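
The mechanics behind such a consent screen are essentially an OAuth 2.0 authorization request in which the relying service lists the scopes it wants. The sketch below only shows the shape of such a request; the endpoint, client ID, redirect URI and scope names are placeholders rather than any particular provider's API.

```python
from urllib.parse import urlencode

# Purely illustrative OAuth 2.0 authorization request; all values are placeholders.
AUTHORIZE_ENDPOINT = "https://social-login.example.com/oauth/authorize"

params = {
    "response_type": "code",
    "client_id": "example-client-id",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "public_profile email",  # deliberately minimal; no right to post on the user's behalf
    "state": "opaque-anti-csrf-token",
}

print(f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}")
# The provider then shows the user exactly this requested permission list,
# which today can only be accepted or rejected as a whole.
```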

But there are suggestions that users may be willing to pay for more control over their information – either in cash, or by licensing its re-use under strict guidelines. So who will step up? We shouldn’t ignore Facebook, of course, but – without a mobile operating system – they are at a disadvantage compared to the other three. And maybe, lurking in the wings, there’s an as yet undiscovered (or overlooked – yes, there are some interesting approaches) vendor ready to jump in and seize the market. After all, that’s what Google did (surprising Yahoo!) and Facebook did (supplanting MySpace), so there is precedent for a well-designed (i.e., using Privacy by Design principles) start-up to sweep this market. Someone will, we’re convinced of that. And just as soon as we’ve identified the key player, we’ll let you know so you can be prepared.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Life Management Platforms: Players, Technologies, Standards

Jun 09, 2015 by Alexei Balaganski

When KuppingerCole outlined the concept of Life Management Platforms several years ago, the prospect of numerous completely new business models based on user-centric management of personal data may have seemed a bit too far-fetched to some. Although the very idea of customers being in control of their digital lives has been actively promoted for years by the efforts of ProjectVRM, and although even back then the public demand for privacy was already strong, the interest in the topic was still largely academic.

Quite a lot has changed during these years. Explosive growth of mobile devices and cloud services has significantly altered the way businesses communicate with their partners and customers. Edward Snowden’s revelations have made a profound impression on the perceived importance of privacy. User empowerment is finally no longer an academic concept. The European Identity and Cloud Conference 2015 featured a whole track devoted to user managed identity and access, which provided an overview of recent developments as well as notable players in this field.

The Qiy Foundation, one of the veteran players (in 2012, we recognized them as the first real implementation of the LMP concept), presented their newest developments and business partnerships. They were joined by Meeco, a new project centered around social channels and IoT devices, which won this year’s European Identity and Cloud Award.

Industry giants such as Microsoft and IBM have presented their latest research in the field of user-managed identity as well. Both companies are doing extensive research on technologies implementing the minimal disclosure principle that is fundamental to the Life Management Platform concept. Microsoft’s U-Prove and IBM’s Identity Mixer projects both aim at giving users cryptographically certified, yet open and easy-to-use means of disclosing their personal information to online service providers in a controlled and privacy-enhancing manner. Both implement a superset of traditional Public Key Infrastructure functionality, but instead of having a single cryptographic public key, users can have an independent pseudonymized key for each transaction, which makes tracking impossible yet still allows verification of any subset of personal information the user may choose to share with a service provider.
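
As a toy illustration of per-transaction pseudonymous keys and user-chosen disclosure, the following Python sketch (using the third-party cryptography package) derives an independent signing key for each transaction and signs only the attributes the user decides to reveal. All names and values are fabricated, and the sketch deliberately omits the zero-knowledge constructions and issuer certification that give U-Prove and Identity Mixer their actual privacy and verifiability properties; it shows the idea, not the protocols.

```python
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative master secret, held only by the user (e.g. in a Life Management Platform).
MASTER_SECRET = b"user-held-master-secret"


def transaction_key(transaction_id: str) -> Ed25519PrivateKey:
    """Derive an independent signing key per transaction so that relying parties
    cannot correlate two transactions by the public key they see."""
    seed = hashlib.sha256(MASTER_SECRET + transaction_id.encode()).digest()  # 32 bytes
    return Ed25519PrivateKey.from_private_bytes(seed)


def disclose(attributes: dict, reveal: list, transaction_id: str) -> dict:
    """Sign only the attribute subset the user chooses to reveal."""
    key = transaction_key(transaction_id)
    subset = {k: attributes[k] for k in reveal}
    message = repr(sorted(subset.items())).encode()
    return {
        "claims": subset,
        "signature": key.sign(message).hex(),
        "pseudonymous_key": key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        ).hex(),
    }


profile = {"name": "Alice Example", "birth_year": 1980, "insurer": "Example Health"}
print(disclose(profile, reveal=["birth_year"], transaction_id="webshop-2015-06-09"))
```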

Qiy Foundation, having the advantage of a very early start, already provides their own design and reference implementation of the whole stack of protocols and legal frameworks for an entire LMP ecosystem. Their biggest problem - and in fact the biggest obstacle for the whole future development in this area - is the lack of interoperability with other projects. However, as the LMP track and workshop at the EIC 2015 have shown, all parties working in this area are clearly aware of this challenge and are closely following each other’s developments.

In this regard, the role of the Kantara Initiative cannot be overestimated. Not only has this organization been developing UMA, an important protocol for user-centric access management, privacy and consent; it also runs the Trust Framework Provider program, which ensures that various trust frameworks around the world are aligned with government regulations and with each other. Still, looking at the success of the FIDO Alliance in the field of strong authentication, we cannot but hope to see, in the near future, some kind of body uniting the major players in the LMP field, driven by a shared vision and the understanding that interoperability is the most critical factor for future development.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



The business case for user empowerment

Jun 09, 2015 by Martin Kuppinger

At the end of the day, every good idea stands or falls with its business model. If there is no working business model, even the best idea will fail. Some ideas reappear at a later time and are successful then. Take tablets: I used a Windows tablet back in the days of Windows XP, way before the Apple iPad arrived. But it obviously was too early for widespread adoption (and yes, it was a different concept than the iPad, but one that is quite popular again these days).

So, when talking about user empowerment, the first question must be: is there a business case? I believe there is, more than ever before. When talking about user empowerment, we are talking about enabling users to control their own data. Looking at the fundamental concept we outlined back in 2012 as Life Management Platforms (an updated version is available, dating from late 2013), this includes the ability to share data with other parties in a controlled way. It is furthermore built on the idea of having a centralized repository for personal information – at least logically centralized; physically it might be distributed.

Such a centralized data store simplifies the management of personal information, from scanned contracts to health data collected via one of the popular activity-tracking wristbands. Furthermore, users can manage their preferences, policies, etc. in a single location.

Thus, it seems to make a lot of sense, for instance, for health insurance companies to support the concept of Life Management Platforms.

However, there might be a counterargument: the health insurance company wants to get a full grip on the data. But is this really in conflict with supporting user empowerment and concepts such as Life Management Platforms? Honestly, I believe that there is a better business case for health insurance companies in supporting user empowerment. Why?

  1. They get rid of the discussion about what should happen to, for example, the data collected with an activity tracker once a customer moves to another health insurance company – the user can simply revoke access to that information (OK, the insurer will still have to take care of its own copies, but that is easier to solve, particularly with advanced approaches).
  2. However, the customer might still allow access to a pseudonymised version of that data – he or she might even do so without being a customer at all, which would then give health insurance companies access to more statistical information, enabling them to better shape rates and contracts. There might even be a centralized statistical service for the industry, collecting data across all health insurance companies.
  3. The most compelling argument from my perspective, however, is a different one: it is quite simple to connect to a Life Management Platform. Supporting a variety of activity trackers, or convincing customers that they must rely on a specific one, isn’t the best approach. Just connecting to a service that provides health data in a standardized way is simpler and cheaper. And customers can use the activity tracker they want or already have – if they want to share the data and benefit from better rates.

User empowerment does not stand in stark contrast to the business model of most organizations. It is only in conflict with the business models of companies such as Facebook, Google, etc. In many cases, organizations such as retailers and insurance companies do not really benefit from relying on the data these companies collect – they pay for it, and they might even pay twice by unwillingly contributing information that is then sold on to their competitors.

For most organizations, supporting user empowerment means simplified access to information and less friction from privacy discussions. Yes, users can revoke access – but companies can also build far better relationships with their customers and thus minimize that risk. There are compelling business cases today. And, in contrast to 2012, the world appears to be ready for solutions that foster user empowerment.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



EMC to acquire Virtustream

May 27, 2015 by Mike Small

On May 26th EMC announced that it is to acquire the privately held company Virtustream. What does this mean and what are the implications?

Virtustream is both a software vendor and a cloud service provider (CSP). Its software offering includes the cloud management platform xStream, the infrastructure assessment product Advisor, and the risk and compliance management software ViewTrust. It also offers Infrastructure as a Service (IaaS) with datacentres in the US and Europe. KuppingerCole identified Virtustream as a “hidden gem” in our report Leadership Compass: Infrastructure as a Service - 70959.

Virtustream has used the combination of these products to target Fortune 500 companies and help them along their journey to the cloud. Legacy applications often have very specific needs that are difficult to reproduce in the vanilla cloud, and risk and compliance issues are the top concerns when migrating systems of record to the cloud.

In addition, the Virtustream technology works with VMWare to provide an extra degree of resource optimization through its Micro Virtual Machine (µVM) approach. This approach uses smaller units of allocation for both memory and processor, which removes artificial sizing boundaries, makes it easier to track the resources consumed, and results in fewer wasted resources.

The xStream cloud management software enables the management of hybrid clouds through a “single pane of glass” management console using open, published APIs. It is claimed to provide enterprise-grade security with integrity built upon capabilities in the processors. Virtustream was the first CSP to announce support for the NIST draft report IR 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation, which allows the user to control the geolocation of their data held in the cloud.

EMC already provides its Federation Enterprise Hybrid Cloud Solution – an on-premise private cloud offering that provides a stepping stone to public cloud services. EMC also recently entered the cloud service market with vCloud Air, an IaaS service based on VMWare. Since many organizations already use VMWare to run their IT on premise, the intention was to make it possible to migrate these workloads to the cloud without change. An assessment of vCloud Air is also included in our Leadership Compass report on Infrastructure as a Service – 70959.

The early focus of CSPs was on DevOps, but the market for enterprise-grade cloud solutions is a growth area as large organizations look to save costs through datacentre consolidation and by “cloud sourcing” IT services. However, success in this market needs the right combination of consultancy services, assurance and trust. Virtustream seems to have met with some success in attracting US organizations to its service. The challenge for EMC is to clearly differentiate between the different cloud offerings it now has and to compete with the existing strong players in this market. As well as the usual challenges of integrating itself into the EMC group, Virtustream may also find it difficult to focus both on providing cloud services and on developing software.



