
Blog posts by Martin Kuppinger

Data Security Intelligence – better understanding where your risks are

Apr 08, 2015 by Martin Kuppinger

Informatica, a leader in data management solutions, introduced a new solution to the market today. The product, named Secure@Source, also marks Informatica's move into the Data (and, in consequence, Database) Security market. Informatica already has data masking solutions in place, which is one element of data security. However, data masking is only a small step, and it requires awareness of which data needs protection.

In contrast to traditional approaches to data security – Informatica talks about "data-centric security" – Informatica does not focus on technical measures alone, such as encrypting databases or analyzing queries. As the name of the approach implies, the focus is on protecting the data itself.

The new solution builds on two pillars. One is Data Security Intelligence, which is about discovery, classification, proliferation analysis, and risk assessment for data held in a variety of data sources (and not only in a particular database). The other is Data Security Controls, which as of now includes persistent and dynamic masking of data plus validation and auditing capabilities.

The target is reducing the risk of leakage, attacks, and other data-related incidents for structured data held in databases and big data stores. The approach is understanding where the data resides and applying adequate controls, particularly masking of sensitive data. This is all based on policies and includes e.g. alerting capabilities. It also can integrate data security events from other sources and work together with external classification engines.
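To illustrate the kind of control such policies enable, the following is a minimal sketch of policy-driven dynamic masking in Python. It is purely illustrative and does not reflect Informatica's actual product or APIs; the classification categories, column names, and masking rules are assumptions made up for this example.

    import re

    # Hypothetical classification result: column name -> sensitivity category
    CLASSIFICATION = {
        "customer_name": "PII",
        "credit_card": "PCI",
        "order_total": "PUBLIC",
    }

    def mask_value(value: str, category: str) -> str:
        """Apply a simple masking rule based on the data category."""
        if category == "PCI":
            digits = re.sub(r"\D", "", value)           # keep only the last four digits
            return "*" * max(len(digits) - 4, 0) + digits[-4:]
        if category == "PII":
            return value[:1] + "*" * (len(value) - 1)   # keep only the first character
        return value                                    # non-sensitive data passes through

    def mask_row(row: dict) -> dict:
        """Mask a result-set row at query time, before it reaches the consumer."""
        return {col: mask_value(str(val), CLASSIFICATION.get(col, "PUBLIC"))
                for col, val in row.items()}

    print(mask_row({"customer_name": "Alice Smith",
                    "credit_card": "4111 1111 1111 1111",
                    "order_total": "129.90"}))
    # {'customer_name': 'A**********', 'credit_card': '************1111', 'order_total': '129.90'}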

These interfaces will also allow third parties to attach tokenization and encryption capabilities, along with other features. The solution will also support advanced correlation, for instance by integrating with an IAM solution and thus adding user context, or by integrating with DLP solutions to secure the endpoint.

Informatica's entry into the Information Security market is, in our view, a logical consequence of where the company is already positioned. The solution provides deep insight into where sensitive data resides - in other words, its source - and a number of integration points. Still, we would like to see a growing number of out-of-the-box integrations, for instance with security analytics, IAM, or encryption solutions. While there are integration points for partners, relying too heavily on partners for all these areas might make the solution quite complex for customers. Partner integration makes sense for IAM, but security analytics such as SIEM or RTSI (Real-time Security Intelligence) and other areas such as encryption might better become integral parts of future releases of Informatica Secure@Source.

Anyway, Informatica is taking the right path for Information Security by focusing on security at the data level instead of focusing on devices, networks, etc. – the best protection is always at the source.



Facebook profile of the German Federal Government thwarts efforts to improve data protection

Mar 05, 2015 by Martin Kuppinger

There is a certain irony in the fact that the Federal Government launched its Facebook profile almost simultaneously with the change of the social network's terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes in its terms of use, Facebook has massively expanded its ability to analyze user data. Data that users leave behind on pages outside of Facebook is now also stored, for use in targeted advertising and possibly other purposes. On the other hand, users now have better options for managing their personal privacy settings. The bottom line: Facebook is collecting even more data in a manner that is hard to control.

Like Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.

If one considers Facebook's business model, the profile can also have an immediate negative impact. Facebook's main source of income is targeted advertising based on the information the company has collected about its users. With the additional information that will become available via the Federal Government's Facebook profile, interest groups can, for example, selectively advertise on Facebook in the future to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated - these are not customers, not leads and NOT voters, but at best people with a more or less vague interest. On the other hand, there is the information that a company, a government, a party or anyone else with a Facebook profile discloses to Facebook: who is interested in my products, in my political opinions (and in which ones), or in my other statements on Facebook?

The Facebook business model is exactly that - to monetize this information - today more than ever before with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company's (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.

Facebook may be "in" - but it is in no way worth it for every company, every government, every party or other organization.

End users have to look closely at the new privacy settings and limit them as much as possible if they intend to stay on Facebook. In the meantime, a lot of communication has moved to other services like WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches you is added value in itself.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means - the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more - it is plainly the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile - one often pays more while the actual benefits are much less. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.



Gemalto feels secure after attack - the rest of the world does not

Feb 25, 2015 by Martin Kuppinger

In today's press conference regarding last week's publications on a possible compromise of Gemalto SIM cards through the theft of keys, the company confirmed security incidents during the time frame mentioned in the original report. It is difficult to say, however, whether its other security products have been affected, since significant parts of the attack, especially in the really sensitive parts of the network, did not leave any substantial traces. Gemalto therefore concludes that there were no such attacks.

According to the information published last week, back in 2010 a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners. During the attack, they obtained secret keys that are integrated into SIM cards at the hardware level. With these keys, it is possible to decrypt mobile phone calls as well as to create copies of the SIM cards and impersonate their users on mobile provider networks. Since Gemalto, according to its own statements, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and nonintrusive methods.
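To see why the theft of these keys matters, consider a much simplified sketch of the GSM authentication model: the network sends a random challenge to the SIM, and the SIM derives the session cipher key from that challenge and its secret key Ki. The Python sketch below uses HMAC-SHA256 as a stand-in for the operator-specific A3/A8 algorithms, so it is a conceptual illustration only, not the real derivation; the point is simply that anyone who holds Ki and observes the challenge can reproduce the same session key.

    import hashlib
    import hmac
    import os

    def derive_session_key(ki: bytes, rand: bytes) -> bytes:
        """Stand-in for A8: derive a session cipher key Kc from Ki and the challenge RAND."""
        return hmac.new(ki, rand, hashlib.sha256).digest()[:8]   # GSM Kc is 64 bits

    ki = os.urandom(16)     # secret key written into the SIM at manufacturing time
    rand = os.urandom(16)   # challenge sent by the network during authentication

    kc_on_sim = derive_session_key(ki, rand)       # computed inside the subscriber's SIM
    kc_attacker = derive_session_key(ki, rand)     # computed by anyone who stole Ki and saw RAND

    assert kc_on_sim == kc_attacker   # identical session keys: over-the-air traffic can be decrypted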

It is entirely possible that Gemalto is correct in stating that there is no evidence of such a theft. Too much time has passed since the attack, and a significant part of the logs from the affected network components and servers, which would be needed to analyze such a complex attack, have probably already been deleted. Still, this attack, just like the theft of the so-called "seeds" from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: "Digital security is not static. Today's state of the art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis". In other words, one can expect that the attacks were at least partially successful - not necessarily against Gemalto itself, but against their customers and other SIM card manufacturers. There is no reason to believe that new technologies are secure. According to the spokesperson for the company, Gemalto is constantly facing attacks and outer layers of their protection have been repeatedly breached. Even if Gemalto does maintain a very high standard in security, the constant risks of new attack vectors and stronger attackers should not be underestimated.

Unfortunately, no concrete details were given during the press conference about which changes to their security practices are already in place and which are planned, other than a statement about the continuous improvement of these practices. However, until the very concept of a "universal key", in this case the encryption key on a SIM card, is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk for the secure part of their infrastructure low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the fact that there were no traces of attacks does not mean that there were no attacks.

Gemalto has also repeatedly pointed out that the attack only affected 2G network SIMs. There is, however, no reason to believe that 3G and 4G networks are necessarily safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Sure, they are talking about "rare exceptions", but it nevertheless means that unsecured channels still exist.

The incident at Gemalto has once again demonstrated that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as the privacy of correspondence and telecommunications, but to the economy as well. The image of companies like Gemalto, and thus their business success and enterprise value, is at risk from such actions.

Even more problematic is that the knowledge of other attackers grows with each newly published attack vector. Stuxnet and Flame have long been well analyzed. It can be assumed that the intelligence agencies of North Korea, Iran and China, as well as criminal groups, studied them long ago. The act can be compared to the leaking of atomic bomb designs, with one notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies to procure zero-day exploits in order to carry out investigations of suspects' computers. Zero-day attacks are so called because exploit code for a newly discovered vulnerability is available before the vendor even becomes aware of the problem: the vendor has literally had zero days to fix it. In reality, this means that attackers are able to exploit a vulnerability long before anyone else discovers it. Now, if government agencies keep the knowledge of such vulnerabilities to themselves in order to create their own malware, they are putting the public and businesses in great danger, because one can safely assume that they will not be the only ones having that knowledge. After all, why would sellers of such information sell it only once?

With all due respect for the need for states and their intelligence agencies to respond to the threat of cyber-crime, it is necessary to consider two potential problems stemming from this approach. On the one hand, it requires defined state control over this monitoring, especially in light of the government's new capability of nationwide mobile network monitoring in addition to the already available Internet monitoring. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening a Pandora's Box and causing damage of unprecedented scale.



Operational Technology: Safety vs. Security – or Safety and Security?

Feb 24, 2015 by Martin Kuppinger

In recent years, the area of "Operational Technology" – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of businesses as well as of concepts like the connected (or even hyper-connected) enterprise or "Industry 4.0", which describes a connected, dynamic production environment that must be able to react to customer requirements and other changes through better connectivity. More connectivity is also emerging between industrial networks and the Internet of Things (IoT). Just think of smart meters that control local power production fed into large power networks.

However, when Information Security people start talking about OT Security, there might be a gap in common understanding. Different terms and different requirements might collide. While traditional Information Security focuses on the confidentiality, integrity, and availability of information, OT has a primary focus on aspects such as safety and reliability.

Let’s just pick two terms: safety and security. Safety is not equal to security. Safety in OT is considered in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical. Safety there is defined as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. However, in the full definition, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

It is a good idea to work on a common understanding of terms first, when people from OT security and IT security start talking. For decades, they were pursuing their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – which can be best illustrated when comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. In this regard, many people from both sides consider this a fundamental divide between OT and IT: the “Safety vs. Security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even as simple as an antivirus update) can introduce enough risk of disruption of an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. However, in the long term, not making necessary changes may lead to an increased risk of a deliberate disruption by a hacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. Another much more recent event occurred last year in Germany, where hackers used malware to get access to a control system of a steel mill, which they then disrupted to such a degree that it could not be shut down and caused massive physical damage (but, thankfully, no injuries or death of people).

When looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even IoT, this conflict between safety and security is not an exception; every enterprise is doomed to face it sooner or later. There is no simple answer to this problem, but clearly we have to find solutions, and IT and OT experts must collaborate much more closely than they (reluctantly) do today.

One possible option is limiting access to connected technology, for instance, by defining it as a one-way road, which enables information flow from the industrial network, but establishes an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.
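A minimal software sketch of this one-way pattern is shown below. It only illustrates the direction of the information flow: the industrial side sends telemetry over UDP and never reads from the network. The addresses and payload fields are invented for this example; real deployments typically enforce the one-way property with hardware data diodes rather than with software alone.

    import json
    import socket
    import time

    MONITORING_HOST = ("10.0.100.5", 5140)   # hypothetical collector on the office network

    def publish_telemetry(readings: dict) -> None:
        """Send telemetry outward; the plant side never reads from the socket."""
        payload = json.dumps(readings).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, MONITORING_HOST)

    while True:
        publish_telemetry({"furnace_temp_c": 1520, "pressure_bar": 3.2, "ts": time.time()})
        time.sleep(10)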

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.



UMA and Life Management Platforms

Feb 20, 2015 by Martin Kuppinger

Back in 2012, KuppingerCole introduced the concept of Life Management Platforms. This concept aligns well with the VRM (Vendor Relationship Management) efforts of ProjectVRM; however, it goes beyond VRM by not focusing solely on customer-to-vendor relationships. Other terms occasionally found include Personal Clouds (not a very concrete term, with a number of different meanings) or Personal Data Stores (which commonly lack the advanced features we expect to see in Life Management Platforms).

One of the challenges in implementing Life Management Platforms until now has been the lack of standards for controlling access to personal information and of standard frameworks for enforcing concepts such as minimal disclosure. Both aspects are now being addressed.

On the one hand, we see technologies such as Microsoft U-Prove and IBM Idemix becoming ready for practical use, as recently demonstrated in an EU-funded project. On the other hand, UMA, a standard that allows managing authorization for centrally stored information, is close to final. It moves control into the hands of the "data owner" instead of the service provider.

UMA is, especially in combination with U-Prove and/or Idemix, an enabler for creating Life Management Platforms based on standard and COTS technology. Based on UMA, users can control what happens with their content. They can make decisions on whether and how to share information with others. On the other hand, U-Prove and Idemix allow enforcing minimal disclosure, based on the concepts of what we called “informed pull” and “controlled push”.

Hopefully we will see a growing number of offerings, and improvements to existing platforms, that make use of the new opportunities UMA and the other technologies provide. As we have written in our research, there is a multitude of promising business models that respect privacy – not only business models that destroy it. Maybe the release of UMA is the catalyst for successful Life Management Platform offerings.



Adaptive Policy-based Access Management (APAM): The Future of Authentication and Authorization

Feb 11, 2015 by Martin Kuppinger

It’s not RBAC vs. ABAC – it’s APAM.

Over the past several years, there have been a lot of discussions around terms such as RBAC (Role Based Access Control), ABAC (Attribute Based Access Control), Dynamic Authorization Management (DAM) and standards such as XACML. Other terms such as RiskBAC (Risk Based Access Control) have been introduced more recently.

Quite frequently, there has been a debate between RBAC and ABAC, as to whether attributes should or must replace roles. However, most RBAC approaches in practice rely on more than roles alone (i.e. on other attributes as well), while roles are a common attribute in ABAC. In practice, it is not RBAC vs. ABAC, but rather a sort of continuum.

However, the main issue in trying to position ABAC as the antipode to RBAC is that attributes vs. roles is not what the discussion should be about. The difference is in how access is granted.

Some years ago, I introduced the term "Dynamic Authorization Management" for what some vendors called "Entitlement Management" and others "Policy Management". The term describes the contrast between authorization based on statically defined entitlements (such as in systems that rely on ACLs, i.e. Access Control Lists, e.g. Windows Server) and authorization decisions made at runtime based on policies and context information such as the user, his roles, etc. – in fact a number of attributes.

Even longer ago, the term PBAC had been introduced, with the A in PBAC standing for "admission", because PBAC was a standard introduced at the network level.

However, you could also argue that systems such as SAP ERP or Windows File Servers do authorization dynamically, for instance in Windows by comparing ACLs with the SIDs contained in the Kerberos token. Nevertheless, the entitlements are set statically. Admittedly, after various discussions with end users, the term "dynamic" appears not to be clear enough for distinguishing the various approaches.

Common, static approaches at best translate policies into static entitlements; this translation step is absent in what I will now call Adaptive Policy-based Access Management (APAM). And that is what really makes the difference: policies, applied at runtime to make decisions based on "context" in the broadest sense. Whether these are roles, IP addresses, claims, or whatever – this is the essence of the entire discussion that has been going on for years now.
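As a rough sketch of what such a runtime decision looks like, the Python fragment below evaluates a policy against contextual attributes (roles, source IP, time of day) at request time, without any pre-provisioned entitlements. The policy content and attribute names are invented for illustration and do not represent any particular product.

    from datetime import datetime
    from ipaddress import ip_address, ip_network

    POLICIES = [
        {   # business intent: finance staff may approve payments, but only from
            # the corporate network and during office hours
            "action": "approve_payment",
            "condition": lambda ctx: (
                "finance" in ctx["roles"]
                and ip_address(ctx["source_ip"]) in ip_network("10.0.0.0/8")
                and 8 <= ctx["time"].hour < 18
            ),
        },
    ]

    def decide(action: str, context: dict) -> str:
        """Return PERMIT or DENY at request time; nothing is pre-computed per user."""
        for policy in POLICIES:
            if policy["action"] == action and policy["condition"](context):
                return "PERMIT"
        return "DENY"

    print(decide("approve_payment", {
        "roles": ["finance", "employee"],
        "source_ip": "10.12.3.7",
        "time": datetime(2015, 2, 11, 10, 30),
    }))   # PERMIT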

It is not a question of whether RBAC or ABAC is right. It is about moving towards APAM. The advantages of APAM are obvious: APAM by default is a security service, i.e. it externalizes security from the applications (theoretically, such a concept might be implemented inside applications, but there is little sense in doing so). APAM will automatically reflect policy changes. Policies, if APAM is implemented right, can be expressed in a business-friendly notation. APAM is adaptive, i.e. it takes the context into account. All the aspects we had discussed as advantages of Dynamic Authorization Management logically apply to APAM, because this is just a new term for what KuppingerCole previously named Dynamic Authorization Management. Admittedly, it is a better term.



UMA in the Enterprise: There’s far more potential for UMA

Feb 02, 2015 by Martin Kuppinger

UMA, the upcoming User Managed Access Protocol, is a profile of OAuth 2.0. The specification itself defines the role of UMA as follows:

“UMA defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policies. Resource owners configure authorization servers with access policies that serve as asynchronous authorization grants.”

Simply said: UMA allows someone to control access to his data, which can reside on others' servers. As the name "user-managed" implies, it is not the owner of the server but the owner of the resource (commonly some form of data) who controls access. As I already wrote in a recent post, there now is at least a standard protocol for enabling privacy and minimal disclosure by enhancing user control and consent.

Most of the use cases and case studies published by the standards body focus on Business-to-Consumer (B2C) scenarios. However, there is great potential for Business-to-Business (B2B) and Business-to-Employee (B2E) communication as well. One example provided by the UMA working group concerns managing API security based on UMA. However, there are numerous other scenarios. All complex information sharing scenarios involving a number of parties, such as complex financial transactions, fall in that scope.

A while ago, we had an interesting use case presented by a customer. The customer organization (organization A) shares data held on a cloud service (service C) with partners (partner 1, partner 2). However, the CSP (Cloud Service Provider) is not in charge of authorizations. Every partner is in fact in charge of granting access to "his" resources/data held on that service. A real-world scenario, and a perfect fit for UMA.
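The following sketch shows, in heavily simplified form, how such a scenario maps onto the UMA roles, using Python and placeholder URLs. The concrete endpoint paths, payloads, and tokens are assumptions made up for this illustration; in a real deployment they are defined by the UMA specification and the authorization server's configuration.

    import requests

    AUTHZ_SERVER = "https://as.example.org"              # authorization server chosen by organization A
    RESOURCE_SERVER = "https://service-c.example.com"    # cloud service holding the shared data

    # 1. Partner 1, as resource owner of "its" data set, registers the protected resource
    #    with the authorization server, where its sharing policies are maintained.
    resource = requests.post(
        f"{AUTHZ_SERVER}/uma/resources",
        headers={"Authorization": "Bearer <partner1-protection-token>"},
        json={"name": "partner1-shipment-data", "scopes": ["read"]},
    ).json()

    # 2. A client acting for Partner 2 tries to read the data without sufficient
    #    authorization and receives a permission ticket from the resource server.
    ticket = requests.get(f"{RESOURCE_SERVER}/shipments").headers.get("WWW-Authenticate")

    # 3. The client presents the ticket (plus claims) to the authorization server and asks
    #    for a requesting party token; the decision follows Partner 1's policies, not the
    #    cloud provider's.
    rpt = requests.post(
        f"{AUTHZ_SERVER}/uma/rpt",
        json={"ticket": ticket, "claims": {"organization": "partner2"}},
    ).json()

    # 4. With the token, the client finally retrieves the resource from the cloud service.
    data = requests.get(
        f"{RESOURCE_SERVER}/shipments",
        headers={"Authorization": f"Bearer {rpt.get('token', '')}"},
    )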

Thus, I strongly recommend that you look at UMA not only from a privacy and user consent perspective, but also from the perspective of fostering better collaboration between businesses. Without any doubt, UMA is another important step forward in standardization, after the introduction of OAuth 2.0 some time ago. Hopefully, UMA will gain the same widespread adoption as quickly as OAuth 2.0.



Minimal disclosure becoming reality

Jan 21, 2015 by Martin Kuppinger

This week, the EU-funded project ABC4Trust, led by Prof. Dr. Kai Rannenberg of Goethe University Frankfurt, announced that it had successfully implemented two pilot projects. The target of the project is what Kim Cameron defined in his Seven Laws of Identity as law #2, "Minimal disclosure for a constrained use". It also observes law #1, "User control and consent".

Using Microsoft's U-Prove technology and IBM's Idemix technology, the project enables pseudonymity of users based on what they call ABC: attribute-based credentials. Instead of expecting a broad range of information about users, ABC4Trust focuses on the minimum information required for a specific use case: for example, the fact that someone has successfully passed certain exams instead of their full name and other personal information, or the fact that someone is over 18 years of age instead of their full date of birth.
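The difference can be illustrated with a trivial sketch that contrasts the full credential with the derived claim actually presented. This deliberately omits the cryptography (zero-knowledge proofs, blinded signatures) that U-Prove and Idemix use to make such derived claims verifiable; the names and attributes are invented.

    from datetime import date

    # Full credential as issued to the user (kept on the user's side, never sent as a whole)
    credential = {
        "name": "Erika Mustermann",
        "date_of_birth": date(1990, 4, 2),
        "exams_passed": ["B.Sc. Mathematics"],
    }

    def presentation_over_18(cred: dict, today: date) -> dict:
        """Derive only the predicate the relying party needs: 'over 18', nothing else."""
        dob = cred["date_of_birth"]
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return {"over_18": age >= 18}   # no name, no exact date of birth disclosed

    print(presentation_over_18(credential, date(2015, 1, 21)))   # {'over_18': True}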

This aligns well with the upcoming UMA standard, which is close to finalization. I will publish a post on UMA soon.

So there are working solutions that enable privacy while still confirming the minimum information necessary for a transaction. The biggest question obviously is: will they succeed? I see strong potential for UMA; however, the real-world use cases might differ from the ones focused on during UMA's development. I am somewhat skeptical regarding ABC4Trust, unless regulations mandate such solutions. Too many companies are trying to build their business on collecting personal data. ABC4Trust stands in stark contrast to their business models.

Thus, it will take more than academic showcases to verify the real-world potential of these technologies. However, such use cases exist. The concept of Life Management Platforms and more advanced approaches to Personal Data Stores will massively benefit from such technologies – and from standards such as UMA. Both help enable new business models that build on enforcing privacy.

Furthermore, ABC4Trust shows that privacy and pseudonymity can be achieved. This might be an important argument for future privacy regulations – that privacy is not just theoretical, but can be achieved in reality.



How CSPs could and should help their EU customers in adopting the Cloud

Jan 16, 2015 by Martin Kuppinger

Many customers, especially in the EU (European Union) and particularly in Germany and some other countries, are reluctant about cloud adoption. There are other regions with comparable situations, such as the Middle East or some countries in the APAC region. Public cloud solutions provided by US companies in particular are viewed skeptically.

While the legal aspect is not simple, as my colleague Karsten Kinast has recently pointed out, it can be solved. Microsoft, for instance, has contracts that take the specifics of EU data protection regulations into account and provide solutions. Microsoft provides information on this publicly on its website, such as here. This at least minimizes the grey area, even though some challenges, such as pending US court decisions, remain.

There are other challenges such as the traceability of where workloads and data are placed. Again, there are potential solutions for that, as my colleague Mike Small recently explained in his blog.

This raises a question: Why do CSPs struggle with the reluctance of many EU (and other) customers in adopting cloud services, instead of addressing the major challenges?

What the CSPs must do:

  • Find a deployment model that is in conformance with EU (and other) privacy and data protection laws – which is feasible.
  • Adapt the contracts to the specific regional laws and regulations – again, this can be done, as the Microsoft example proves.
  • Evaluate additional solutions such as traceability of workloads and data, as Mike Small has described in his blog post.
  • Define cloud contracts that take customer needs into account, particularly avoiding disruptiveness to the customer’s business. I have blogged about this recently.
  • Educate your customers openly, both regarding the legal and the technical aspects. The more CSPs do a good job on providing contracts and implementations, the faster reluctance will diminish.
There is some technical work to do. There is more work to do on the legal side. And yes, that will cost a CSP money. Their lawyers might even say they will give up some advantages. However, if your advantage is based on potential disruptiveness to the customer's business or leads to slow adoption of your cloud services by customers, then the disadvantages might far outweigh the advantages.

Thus, the recommendation to CSPs is simple: Make this a business decision, not a lawyer decision. Unilateral, not to say unfair, agreements are a business inhibitor. That is a lesson some of the company lawyers of US CSPs still need to learn.



The Art of Ignorance – is it really folly to be wise?

Jan 13, 2015 by Martin Kuppinger

A few days ago, IBM sent out a press release announcing that the company had patented the design for a "data privacy engine" that can protect personal data more efficiently and affordably as it is transferred between countries, in compliance with both organizational policies and local laws.

This announcement turns the spotlight on a challenge that multi-national organizations in particular are facing today: regulation sprawl. In the face of an increasing number of regulations covering a broad variety of topics such as privacy, export regulations, anti-money laundering, and many others, staying compliant is not always easy.

While there are a number of common concepts in regulations, such as traceability, regulations both within a country and across different countries are often in conflict. The aspect IBM is focusing on provides good examples of this conflict. Some data that would be considered personal data needing protection in Germany may not be classified this way in Mexico, and vice versa. In another example, some years ago Deutsche Bahn (the German state-owned railway) violated data protection regulations during an anti-fraud initiative. According to one set of regulations, the company was required to act against fraud; however, when analyzing the flow of fraudulent payments it violated the data protection law.

While many organizations have established a governance organization that analyzes the range of regulations applying across all the various countries in which they operate, we still frequently observe another approach that might be described as “the art of ignorance”. This is especially true when it comes to cloud computing. Both cloud service providers and cloud customers seem to exercise that art.

There are still many cloud service providers that do not have sufficient insight into local laws, such as the data protection laws across the various EU countries. Thus, there is massive variation in the answers given to common questions, such as whether standard contracts are supported that are in accordance with EU regulations and local law for personal data, and where data centers are located and operated. This is sometimes hard to understand because, obviously, the better the answers, the more business these cloud service providers will obtain.

The same holds true for some (potential) consumers of cloud services. Some simply avoid moving to the cloud due to the uncertainty they feel, while others just do it anyway despite the uncertainty, hoping that the rewards will be worth it. However, to properly balance risk against reward, you need to understand the risks, both from a compliance perspective and from a technical and organizational perspective. (This could be understood as being part of governance anyway; for example, an organization might want to align with ISO 2700x and other standards.) It is better to make the effort to understand the risks instead of just ignoring them, and better than missing the opportunities cloud services offer just because of the uncertainty.

There is a famous quotation from a work by the English poet Thomas Gray: "where ignorance is bliss, 'tis folly to be wise". This was a reflection on his youth, when he was allowed to be ignorant and content. However, organizations cannot afford the contented ignorance of youth. Ignorance is no excuse in the law, and ignorance does not help in making good decisions that balance risk with reward. Organizations need knowledge to understand their obligations and the costs of compliance across all of their operations and markets. They need to understand the potential conflicts in order to plot a safe course through compliance with multiple regulations. Only through knowledge is it possible to manage risk and truly ensure that the rewards really balance the risks. In this case, ignorance is not bliss.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



