
Blog posts by Martin Kuppinger

Consent – Context – Consequence

May 21, 2015 by Martin Kuppinger

Consent and Context: They are about to change the way we do IT. This is not only about security, where context is already of growing relevance. It is about the way we have to construct most applications and services, particularly those dealing with consumer-related data and PII in the broadest sense. Consent and context have consequences. Applications must be constructed so that these consequences can be handled.

Imagine the EU comes up with tighter privacy regulations in the near future. Imagine you are a service provider or organization dealing with customers in various locations. Imagine your customers being more willing to share data – to consent to sharing – when they remain in control of that data. Imagine that what Telcos must already do in at least some EU countries becomes mandatory for other industries and countries: handing over customer data to other Telcos and rapidly “forgetting” large parts of that data.

There are many different scenarios where regulatory changes or changing expectations of customers mandate changes in applications. Consent (and regulations) increasingly control application behavior.

On the other hand, there is context. Mitigating risks is tightly connected to understanding the user’s context and acting accordingly. The days of black-and-white security are over. Depending on the context, an authenticated user might be authorized to do more or less.

Simply put: Consent and context have – indeed must have – consequences for application behavior. Thus, application design (and this includes cloud services) must take consent and context into account. Consent is about following the principles of Privacy by Design. An application designed for privacy can be opened up if the users or regulations allow. This is quite easy when done right – far easier than, for example, adapting an application to tightening privacy regulations. Context is about risk-based authentication and authorization or, in a broader view, APAM (Adaptive Policy-based Access Management). Again, if an application is designed for adaptiveness, it can easily react to changing requirements. An application with static security is hard to change.
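As a toy illustration of what “designed for adaptiveness” can mean in practice (the risk factors, thresholds, and outcomes below are assumptions made for this sketch, not prescriptions), an application can delegate the decision to a small context-aware service that returns a graded result instead of a plain yes/no:

```python
# Minimal sketch (assumption: risk factors and thresholds are invented for
# illustration): a context-aware, risk-based authorization decision of the kind
# an application designed for adaptiveness could delegate to a security service.
def assess_risk(known_device: bool, usual_country: bool, requested_amount: float) -> int:
    """Combine context signals into a simple additive risk score."""
    score = 0
    score += 0 if known_device else 30
    score += 0 if usual_country else 40
    score += 20 if requested_amount > 1000 else 0
    return score

def decide(score: int) -> str:
    """Map the risk score to an adaptive outcome instead of a binary answer."""
    if score < 30:
        return "allow"
    if score < 60:
        return "step-up-authentication"  # e.g. require a second factor
    return "deny"

print(decide(assess_risk(known_device=True, usual_country=False, requested_amount=250)))
# -> "step-up-authentication": the unusual location triggers a second factor
```

Because the policy (the factors and thresholds) lives outside the application logic, tightening it later does not require changing the application itself.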

Understanding Consent, Context, and Consequences can save organizations – software companies, cloud service providers, and any organization developing its own software – a lot of money. And it is not only about cost savings, but also about agility: flexible software makes the business more agile and resilient to change and shortens time-to-market.



100%, 80% or 0% security? Make the right choice!

May 19, 2015 by Martin Kuppinger

Recently, I have had a number of conversations with end-user organizations, covering a variety of Information Security topics but all having the same theme: There is a need for certain security approaches, such as strong authentication on mobile devices, secure information sharing, etc. But the projects have been stopped due to security concerns: the strong authentication approach is not as secure as the one currently implemented for desktop systems; some information would need to be stored in the cloud; etc.

That’s right, IT Security people stopped Information Security projects due to security concerns.

The result: There still is 0% security, because nothing has been done yet.

There is the argument that insecure is insecure: either something is perfectly secure or it is insecure. However, following that logic, everything is insecure. There are always ways to break security if you just invest sufficient criminal energy.

It is time to move away from our traditional black-and-white approach to security. It is not about being secure or insecure, but, rather, about risk mitigation. Does a technology help in mitigating risk? Is it the best way to achieve that target? Is it a good economic (or mandatory) approach?

When thinking in terms of risk, 80% security is obviously better than 0% security. 100% might seem even better, but in practice it can be worse, because it is costly, cumbersome to use, and so on.

It is time to stop IT security people from inhibiting improvements in security and risk mitigation by setting unrealistic security baselines. Start thinking in terms of risk. Then 80% security now, at fair cost, is usually better than 0% now or 100% at some point in the future.

Again: there never will be 100% security. We might achieve 99% or 98% (depending on the scale we use), but cost grows exponentially – as security approaches 100%, cost approaches infinity.
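To make the diminishing returns concrete, here is a deliberately simple toy cost model (an assumption for the sake of illustration, not a formula from this post), where C_0 is the cost of a baseline level of security and s is the security level between 0 and 1:

```latex
\[
  C(s) = \frac{C_0}{1 - s}, \qquad \lim_{s \to 1^{-}} C(s) = \infty
\]
% Example: moving from 80% to 98% security already multiplies the cost by ten,
% since C(0.98)/C(0.80) = (1/0.02)/(1/0.20) = 10.
```

Any model with this qualitative shape leads to the same conclusion: the last few percentage points are by far the most expensive ones.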



Managing the relationships for the new ABC: Agile Business, Connected

May 12, 2015 by Martin Kuppinger

Over the past years, we have talked a lot about the Computing Troika of Cloud Computing, Mobile Computing, and Social Computing. We coined the term Identity Explosion to describe the exponential growth in the number of identities organizations have to deal with. We introduced the need for a new ABC: Agile Business, Connected. While agility is a key business requirement, connected organizations are a consequence of both the digital transformation of business and of mobility and the IoT.

As a consequence, this rapid evolution means that we also have to transform our understanding of identities and access. We still see a lot of IAM projects focusing only on employees. However, when we look at human identities, it is about employees, business partners, customers and consumers, leads, prospects, etc.

 
Fig. 1: People, organizations, devices, and things are becoming connected –
organizations will have to deal with more identities and relations than ever before.

Even more, human identities are becoming only a fraction of the identities we have to deal with. People use devices, which communicate with backend services. Things are becoming increasingly connected. Everything and everyone – whether a human, a device, a service, or a thing – has its own identity.

Relationships can become quite complex. A device might be used by multiple persons. A vehicle is connected not only to the driver or manufacturer, but to many other parties such as insurance companies, leasing companies, the police, dealers, garages, occupants, other vehicles, etc. Not to speak of the fact that the vehicle itself consists of many things that frequently interact.
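One way to picture this (a minimal sketch; the classes, relation names, and identifiers below are assumptions made for illustration) is to treat every person, organization, device, and thing as an identity and to store the relationships between them as edges of a graph:

```python
# Minimal sketch (assumption: class layout, relation names, and identifiers are
# illustrative): identities and their relationships modelled as a simple graph.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identity:
    id: str    # unique identifier, e.g. a URI or UUID
    kind: str  # "person", "organization", "device", "thing", or "service"

@dataclass
class RelationshipGraph:
    edges: set = field(default_factory=set)  # (subject_id, relation, object_id)

    def relate(self, subject: Identity, relation: str, obj: Identity) -> None:
        self.edges.add((subject.id, relation, obj.id))

    def related_to(self, identity: Identity) -> set:
        return {e for e in self.edges if identity.id in (e[0], e[2])}

# Example: a vehicle related to its driver, its insurer, and one of its own parts.
driver  = Identity("alice", "person")
vehicle = Identity("vin:WVW123", "thing")
insurer = Identity("acme-insurance", "organization")
sensor  = Identity("vin:WVW123/abs-sensor", "thing")

graph = RelationshipGraph()
graph.relate(driver, "drives", vehicle)
graph.relate(insurer, "insures", vehicle)
graph.relate(sensor, "is-part-of", vehicle)
print(graph.related_to(vehicle))
```

Even this tiny example shows how quickly the number of relationships outgrows the number of human identities involved.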

Managing access to information requires new thinking about identities and access. Only that will enable us to manage and restrict access to information as needed. Simply put: Identity and Access Management is becoming bigger than ever before, and it is one of the essential foundations for making the IoT and the digital transformation of businesses work.

 
Fig. 2: APIs will increasingly connect everything and everyone – it becomes essential
to understand the identity context in which APIs are used.

In this context, APIs (Application Programming Interfaces) play a vital role. While I don’t like that term – it is far too technical – it is well established in the IT community. APIs – the communication interfaces of services, apps (on devices), and what we might call thinglets (on things) – play the main role in this new connected world. Humans interact with some services directly, via a browser. They use the UI of their devices to access apps. And they might even interact actively with things, although these commonly act autonomously.

But the communication itself then happens between apps, devices, and services, using these APIs. To manage access to information via services, devices, and things, we particularly need a good understanding of the relationships between them and the people and organizations involved. Without that, we will fail at managing information security in the new ABC.
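As a small illustration of what “identity context” at the API level can look like (the endpoint, header names, and identifiers below are hypothetical assumptions for this sketch, not a specific vendor’s API), an app can pass along who the user is, which device is calling, and which thing the request concerns:

```python
# Minimal sketch (assumption: the URL, header names, and identifiers are
# hypothetical): an app calling a backend API while carrying identity context
# for the user, the device, and the thing involved.
import requests

ACCESS_TOKEN = "eyJ..."             # user identity, e.g. an OAuth 2.0 access token
DEVICE_ID = "device-4711"           # identity of the device running the app
THING_ID = "vin:WVW123/abs-sensor"  # identity of the thing the request concerns

response = requests.get(
    "https://api.example.com/v1/vehicles/WVW123/telemetry",  # hypothetical endpoint
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # who the user is
        "X-Device-Id": DEVICE_ID,                   # which device is calling
    },
    params={"thing": THING_ID},
    timeout=5,
)
# The service can now decide based on user, device, and thing identity together,
# not just on the bare API call.
print(response.status_code)
```

The point is not the specific headers but that every API call happens in an identity context that the receiving service must be able to understand and evaluate.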

Understanding and managing the relationships of a massive number of connected people, things, devices, and services is today’s challenge for succeeding with the new ABC: Agile Business, Connected.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Redesigning access controls for IAM deployments?

Apr 20, 2015 by Martin Kuppinger

A few weeks ago I read an article in Network World, entitled “A common theme in identity and access management failure: lack of Active Directory optimization”. Essentially, it is about the fact that Microsoft Active Directory (AD) commonly needs some redesign when starting an IAM (Identity and Access Management) project. Maybe yes, and maybe no.

In fact, it is common that immature, chaotic, or even “too mature” (e.g. many years of administrative work leaving their traces, with no one cleaning up) access control approaches in target systems pose a challenge when connecting those systems to an IAM (and Governance) system. However, there are two points to consider:

  1. This is not restricted to AD; it applies to any target system.
  2. It must not be allowed to lead to failures in your IAM deployment.

I have frequently seen this issue with SAP environments, unless they have already undergone a restructuring, e.g. when implementing SAP Access Control (formerly SAP GRC Access Control). In fact, the more complex and the older the target system, the more likely it is that the structure of access controls – be they roles, groups, or whatever – is anything but perfect.

There is no doubt that a redesign of the security model is a must in such situations. The question is just when this should happen (as Jonathan Sander, the author of the article mentioned above, also states). In fact, if we waited for all these security models to be redesigned, we would probably never see an IAM program succeed. Some of these redesign projects take years – and some (think of mainframe environments) will probably never take place. Redesigning the security model of an AD or SAP environment is a quite complex project in itself, despite all the tools that support it.

Thus, organizations typically have to decide on the order of projects. Should they push their IAM initiative or do the groundwork first? There is no single correct answer to that question. Frequently, IAM projects are under so much pressure that they have to run first.

However, this must not end in the nightmare of a failing project. The main success factor for dealing with these situations is having a well thought-out interface between the target systems and the IAM infrastructure for exposing entitlements from the target systems to IAM. At the IAM level, there must be a concept of roles (or at least a well thought-out concept for grouping entitlements). And there must be a clear definition of what is exposed from target systems to the IAM system. That is quite easy for well-structured target systems, where, for instance, only global groups from AD or business roles from SAP might become exposed, becoming the smallest unit of entitlements within IAM. These might appear as “system roles” or “system-level roles” (or whatever term you choose) in IAM.

Without that ideal security model in the target systems, there might not be that single level of entitlements that will become exposed to the IAM environment (and I’m talking about requests, not about the detailed analysis as part of Entitlement & Access Governance which might include lower levels of entitlements in the target systems). There are two ways to solve that issue:

  1. Just define these entitlements, i.e. global groups, SAP business roles, etc., as an additional element in the target system first, map them to IAM, and start the redesign of the underlying infrastructure later on.
  2. Or accept the current structure and invest more in mapping system roles (or whatever term you use) to the higher levels of entitlements, such as IT-functional roles and business roles (not to be confused with SAP business roles), in your IAM environment – a simple sketch of such a mapping follows below.
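Either way, the interface takes the same basic shape (a minimal sketch; the role names and structures below are invented for illustration and do not reflect a specific product’s data model): the target systems expose system-level entitlements, and the IAM layer maps higher-level roles onto them.

```python
# Minimal sketch (assumption: names and structures are illustrative, not a
# specific product's model): system-level entitlements exposed by target
# systems, mapped to higher-level roles in the IAM layer.

# Entitlements exposed by the target systems (e.g. AD global groups,
# SAP business roles) – the smallest unit of entitlement visible to IAM.
SYSTEM_ROLES = {
    "AD:GG-Finance-ReadOnly": {"system": "Active Directory"},
    "SAP:BR-AP-Clerk": {"system": "SAP ERP"},
}

# Higher-level IAM roles mapped onto those system roles.
BUSINESS_ROLES = {
    "Accounts Payable Clerk": [
        "AD:GG-Finance-ReadOnly",
        "SAP:BR-AP-Clerk",
    ],
}

def entitlements_for(business_role: str) -> list:
    """Resolve a business role requested in IAM into the system-level
    entitlements that have to be provisioned in the target systems."""
    return BUSINESS_ROLES.get(business_role, [])

print(entitlements_for("Accounts Payable Clerk"))
```

Whether the system-level entitlements are clean (approach 1) or messy (approach 2) only changes how much work the mapping table has to carry – the interface itself stays stable.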

Both approaches work. From my experience, if you understand the challenge and put your focus on the interface, you will quickly be able to identify the best way to execute your IAM program while still redesigning the security model of the target systems later on. In both cases, you will need a good understanding of the IAM-level security model (roles etc.), and you need to enforce this model rigidly – no exceptions here.



Data Security Intelligence – better understanding where your risks are

Apr 08, 2015 by Martin Kuppinger

Informatica, a leader in data management solutions, introduced a new solution to the market today. The product, named Secure@Source, also marks Informatica’s move into the Data (and, in consequence, Database) Security market. Informatica already has solutions for data masking in place, which is one of the elements of data security. However, data masking is only a small step, and it requires awareness of which data needs protection.

In contrast to traditional approaches to data security, Informatica – which talks about “data-centric security” – does not focus on technical approaches alone, e.g. encrypting databases or analyzing queries. As the name of the approach implies, the focus is on protecting the data itself.

The new solution builds on two pillars. One is Data Security Intelligence, which is about discovery, classification, proliferation analysis, and risk assessment for data held in a variety of data sources (and not only in a particular database). The other is Data Security Controls, which as of now includes persistent and dynamic masking of data plus validation and auditing capabilities.

The goal is to reduce the risk of leakage, attacks, and other data-related incidents for structured data held in databases and big data stores. The approach is to understand where the data resides and to apply adequate controls, particularly masking of sensitive data. This is all based on policies and includes, e.g., alerting capabilities. It can also integrate data security events from other sources and work together with external classification engines.
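To illustrate the general idea of policy-driven masking (the field names and masking rules below are invented for this sketch and are not Informatica’s implementation), sensitive fields are transformed according to a policy while non-sensitive fields pass through:

```python
# Minimal sketch (assumption: field names and the masking policy are invented
# for illustration; this is not Informatica's implementation): policy-driven
# masking of sensitive fields in structured records.
import re

MASKING_POLICY = {
    "credit_card": lambda v: re.sub(r"\d(?=\d{4})", "*", v),   # keep last 4 digits
    "email":       lambda v: v[0] + "***@" + v.split("@")[1],  # keep the domain only
    "name":        lambda v: v,                                # classified as non-sensitive here
}

def mask_record(record: dict) -> dict:
    """Apply the masking policy field by field; unclassified fields are redacted."""
    return {
        field: MASKING_POLICY.get(field, lambda v: "<redacted>")(value)
        for field, value in record.items()
    }

print(mask_record({
    "name": "Alice Example",
    "email": "alice@example.com",
    "credit_card": "4111111111111111",
}))
```

The value of a product like Secure@Source lies less in the masking function itself than in knowing which fields, in which data stores, need such a policy in the first place.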

These interfaces will also allow third parties to attach tokenization and encryption capabilities, along with other features. The solution will also support advanced correlation, for instance by integrating with an IAM solution and thus adding user context, or by integrating with DLP solutions to secure the endpoint.

Informatica’s entry into the Information Security market is, in our view, a logical consequence of where the company is already positioned. While the solution provides deep insight into where sensitive data resides – in other words, its source – and a number of integrations, we would like to see a growing number of out-of-the-box integrations, for instance with security analytics, IAM, or encryption solutions. There are integration points for partners, but relying too much on partners for all these areas might make the solution quite complex for customers. While partner integration makes sense for IAM, security analytics such as SIEM or RTSI (Real-time Security Intelligence) and other areas such as encryption might become integral parts of future releases of Informatica Secure@Source.

Anyway, Informatica is taking the right path for Information Security by focusing on security at the data level instead of focusing on devices, networks, etc. – the best protection is always at the source.



Facebook profile of the German Federal Government thwarts efforts to improve data protection

Mar 05, 2015 by Martin Kuppinger

There is a certain irony in the fact that the Federal Government has launched a profile on Facebook almost simultaneously with the change of the social network’s terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes in the terms of use, Facebook has massively expanded its ability to analyze user data. Facebook also stores data that users leave behind on pages outside of Facebook, for use in targeted advertising and possibly other purposes. On the other hand, users can now better manage the personal settings for their own privacy. The bottom line: Facebook is collecting even more data in a manner that is hard to control.

As Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.

Considering the Facebook business model, this can also have a direct negative impact. Facebook’s main source of income is targeted advertising based on the information the company has collected on its users. With the additional information that will be available via the Federal Government’s Facebook profile, interest groups, for example, can in the future selectively advertise on Facebook to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated – these are not customers, not leads, and NOT voters, but at best people with a more or less vague interest. On the other hand, there is the information that a company, a government, a party, or anyone else with a Facebook profile discloses to Facebook: Who is interested in my products, in my political opinions (and which ones), or in my other statements on Facebook?

The Facebook business model is exactly that – monetizing this information – today more than ever before with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company’s (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.

Facebook may be "in" - but it is in no way worth it for every company, every government, every party or other organization.

End users have to look closely at the new privacy settings and limit them as much as possible if they intend to stay on Facebook. In the meantime, a lot of the communication has moved to other services like WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is also added value.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means - the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more - it is plainly the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile – one often pays more while the actual benefit is much less. Or has the number of customers increased accordingly in the last fiscal year because of those 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.



Gemalto feels secure after attack - the rest of the world does not

Feb 25, 2015 by Martin Kuppinger

In today’s press conference regarding last week’s publications on a possible compromise of Gemalto SIM cards through the theft of keys, the company confirmed security incidents during the time frame mentioned in the original report. It is difficult to say, however, whether its other security products have been affected, since significant parts of the attack, especially in the really sensitive parts of the network, did not leave any substantial traces. Gemalto therefore concludes that there were no such attacks.

According to the information published last week, back in 2010 a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners. During the attack, they obtained secret keys that are integrated into SIM cards at the hardware level. With these keys, it is possible to decrypt mobile phone calls as well as to create copies of the SIM cards and impersonate their users on mobile provider networks. Since Gemalto, by its own account, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and nonintrusive methods.

It is entirely possible that Gemalto is correct in stating that there is no evidence of such a theft. Too much time has passed since the attack, and a significant part of the logs from the affected network components and servers, which would be needed to analyze such a complex attack, has probably already been deleted. Still, this attack, just like the theft of so-called “seeds” from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: "Digital security is not static. Today's state of the art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis". In other words, one can expect that the attacks were at least partially successful - not necessarily against Gemalto itself, but against their customers and other SIM card manufacturers. There is no reason to believe that new technologies are secure. According to the spokesperson for the company, Gemalto is constantly facing attacks and outer layers of their protection have been repeatedly breached. Even if Gemalto does maintain a very high standard in security, the constant risks of new attack vectors and stronger attackers should not be underestimated.

Unfortunately, no concrete details were given during the press conference about which changes to security practices are already in place and which are planned, other than a statement regarding continuous improvement of these practices. However, until the very concept of a “universal key” – in this case the encryption key on a SIM card – is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk for the secure part of its infrastructure to be low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the fact that there were no traces of attacks does not mean that there were no attacks.

Gemalto has also repeatedly pointed out that the attack only affected 2G SIMs. There is, however, no reason to believe that 3G and 4G networks are necessarily safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Sure, Gemalto speaks of “rare exceptions”, but it nevertheless means that unsecured channels still exist.

The incident at Gemalto has once again demonstrated that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as the privacy of correspondence and telecommunications, but to the economy as well. The reputation of companies like Gemalto, and thus their business success and enterprise value, are at risk from such actions.

Even more problematic is that the knowledge of other attackers grows with each newly published attack vector. Stuxnet and Flame have long been thoroughly analyzed. It can be assumed that the intelligence agencies of North Korea, Iran, and China, as well as criminal groups, studied them long ago. This can be compared to the leaking of atomic bomb designs, with one notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies procuring zero-day exploits in order to investigate suspects’ computers. Zero-day attacks are so called because exploit code for a newly discovered vulnerability is available before the vendor even becomes aware of the problem – the vendor literally has zero days to fix it. In reality, this means that attackers are able to exploit a vulnerability long before anyone else discovers it. Now, if government agencies keep the knowledge of such vulnerabilities to themselves in order to create their own malware, they are putting the public and businesses in great danger, because one can safely assume that they won’t be the only ones with that knowledge. After all, why would sellers of such information sell it only once?

With all due respect for the need of states and their intelligence agencies to respond to the threat of cyber-crime, it is necessary to consider two potential problems stemming from this approach. On the one hand, it requires defined state control over this monitoring, especially in light of the government’s new capability of nationwide mobile network monitoring in addition to the already available Internet monitoring. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening a Pandora’s box and causing damage of unprecedented scale.



Operational Technology: Safety vs. Security – or Safety and Security?

Feb 24, 2015 by Martin Kuppinger

In recent years, the area of “Operational Technology” – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of business as well as of concepts like the connected (or even hyper-connected) enterprise or “Industry 4.0”, which describes a connected and dynamic production environment. “Industry 4.0” environments must be able to react to customer requirements and other changes through better connectivity. More connectivity also emerges between industrial networks and the Internet of Things (IoT). Just think of smart meters that control local power production fed into large power networks.

However, when Information Security people start talking about OT security, there might be a gap in common understanding. Different terms and different requirements might collide. While traditional Information Security focuses on confidentiality, integrity, and availability, OT has a primary focus on aspects such as safety and reliability.

Let’s just pick two terms: safety and security. Safety is not equal to security. Safety in OT is considered in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical. Safety there is defined as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. However, in the full definition, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

It is a good idea to work on a common understanding of terms first, when people from OT security and IT security start talking. For decades, they were pursuing their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – which can be best illustrated when comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. In this regard, many people from both sides consider this a fundamental divide between OT and IT: the “Safety vs. Security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even one as simple as an antivirus update) can introduce enough risk of disrupting an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. In the long term, though, not making necessary changes may lead to an increased risk of deliberate disruption by a hacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. Another, much more recent event occurred last year in Germany, where hackers used malware to gain access to the control system of a steel mill, which they then disrupted to such a degree that it could not be shut down, causing massive physical damage (but, thankfully, no injuries or deaths).

When looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even the IoT, this conflict between safety and security isn’t an exception; every enterprise is doomed to face it sooner or later. There is no simple answer to this problem, but clearly we have to find solutions, and IT and OT experts must collaborate much more closely than they (reluctantly) do today.

One possible option is limiting access to connected technology, for instance by defining it as a one-way road that enables information to flow out of the industrial network but establishes an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.



UMA and Life Management Platforms

Feb 20, 2015 by Martin Kuppinger

Back in 2012, KuppingerCole introduced the concept of Life Management Platforms. This concept aligns well with the VRM (Vendor Relationship Management) efforts of ProjectVRM; however, it goes beyond them by not focusing solely on customer-to-vendor relationships. Other terms occasionally found include Personal Clouds (not a very concrete term, with a number of different meanings) or Personal Data Stores (which commonly lack the advanced features we expect to see in Life Management Platforms).

Until now, one of the challenges in implementing Life Management Platforms has been the lack of standards for controlling access to personal information and of standard frameworks for enforcing concepts such as minimal disclosure. Both aspects are now being addressed.

On the one hand, technologies such as Microsoft U-Prove and IBM Idemix are ready for practical use, as recently demonstrated in an EU-funded project. On the other hand, UMA (User-Managed Access), a standard that allows managing authorization for centrally stored information, is close to final. It moves control into the hands of the “data owner” instead of the service provider.

UMA, especially in combination with U-Prove and/or Idemix, is an enabler for creating Life Management Platforms based on standards and COTS technology. Based on UMA, users can control what happens with their content. They can decide whether and how to share information with others. U-Prove and Idemix, in turn, allow enforcing minimal disclosure, based on the concepts we called “informed pull” and “controlled push”.

Hopefully we will see a growing number of offerings, and improvements to existing platforms, that make use of the new opportunities UMA and the other technologies provide. As we have written in our research, there is a multitude of promising business models that respect privacy – not only business models that destroy it. Maybe the release of UMA is the catalyst for successful Life Management Platform offerings.



Adaptive Policy-based Access Management (APAM): The Future of Authentication and Authorization

Feb 11, 2015 by Martin Kuppinger

It’s not RBAC vs. ABAC – it’s APAM.

Over the past several years, there have been a lot of discussions around terms such as RBAC (Role Based Access Control), ABAC (Attribute Based Access Control), Dynamic Authorization Management (DAM) and standards such as XACML. Other terms such as RiskBAC (Risk Based Access Control) have been introduced more recently.

Quite frequently, there has been a debate of RBAC vs. ABAC, i.e. whether attributes should or must replace roles. However, most RBAC approaches in practice rely on more than roles alone (i.e. on other attributes as well), while roles are a common attribute in ABAC. In practice, it is not RBAC vs. ABAC, but rather a sort of continuum.

However, the main issue in trying to position ABAC as the antipode to RBAC is that attributes vs. roles is not what the discussion should be about. The difference is in how access is granted.

Some years ago, I introduced the term “Dynamic Authorization Management” for what some vendors called “Entitlement Management” and others “Policy Management”. This was about the contrast between authorization based on statically defined entitlements (such as in systems that rely on ACLs, i.e. Access Control Lists, e.g. Windows Server) and authorization decisions made at runtime based on policies and context information such as the user, his roles, etc. – in fact, a number of attributes.

Even longer ago, the term PBAC had been introduced, with the “A” standing for “admission”, because PBAC was a standard introduced at the network level.

However, you could also argue that systems such as SAP ERP or Windows file servers perform authorization dynamically – in Windows, for instance, by comparing ACLs with the SIDs contained in the Kerberos token. Nevertheless, the entitlements themselves are set statically. Admittedly, after various discussions with end users, the term “dynamic” appears not to be clear enough to distinguish the various approaches.

While common static approaches at best translate policies into static entitlements, that translation step is not needed in what I will now call Adaptive Policy-based Access Management (APAM). And that is what really makes the difference: policies applied at runtime to make decisions based on “context” in the broadest sense. Whether that context consists of roles, IP addresses, claims, or whatever else – this is the essence of the discussion that has been going on for years now.
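To make the distinction concrete, here is a minimal sketch (the policy rules, attribute names, and thresholds are invented for illustration; a real deployment would typically use a policy engine, e.g. an XACML-based PDP, rather than hand-written code): nothing is pre-computed into ACLs, and the decision is made at request time from the context.

```python
# Minimal sketch (assumption: policies and attribute names are illustrative):
# an access decision evaluated at runtime from context attributes, instead of
# being looked up in statically assigned entitlements.
from dataclasses import dataclass

@dataclass
class Context:
    user: str
    roles: set
    ip: str
    authentication_strength: int  # e.g. 1 = password only, 2 = MFA

def decide(action: str, resource: str, ctx: Context) -> str:
    """Evaluate the policies at request time and return PERMIT or DENY."""
    # Policy 1: approving payments requires the 'treasurer' role and MFA.
    if action == "approve-transfer" and resource == "payments":
        if "treasurer" in ctx.roles and ctx.authentication_strength >= 2:
            return "PERMIT"
        return "DENY"
    # Policy 2: read access from the corporate network for any assigned role.
    if action == "read" and ctx.ip.startswith("10.") and ctx.roles:
        return "PERMIT"
    return "DENY"

alice = Context(user="alice", roles={"treasurer"}, ip="10.1.2.3",
                authentication_strength=2)
bob = Context(user="bob", roles={"clerk"}, ip="10.1.2.4",
              authentication_strength=2)
print(decide("approve-transfer", "payments", alice))  # PERMIT
print(decide("approve-transfer", "payments", bob))    # DENY
```

Changing a policy here changes behavior at the very next request – there is no re-provisioning of entitlements in the target systems.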

It is not a question of whether RBAC or ABAC is right. It is about moving towards APAM. The advantages of APAM are obvious: APAM is by default a security service, i.e. it externalizes security from the applications (theoretically, such a concept might be implemented inside applications, but there is little sense in doing so). APAM automatically reflects policy changes. Policies, if APAM is implemented right, can be expressed in a business-friendly notation. And APAM is adaptive, i.e. it takes the context into account. All the aspects we discussed as advantages of Dynamic Authorization Management logically apply to APAM, because this is just a new term for what KuppingerCole previously called Dynamic Authorization Management. Admittedly, it is a better term.



