
Blog posts by Felix Gaehtgens

Microsoft releases its privacy-enabling U-Prove technology

Mar 02, 2010 by Felix Gaehtgens

Microsoft has just announced the availability of U-Prove - an innovative privacy-enabling technology that it acquired almost exactly two years ago. This is a significant announcement for two reasons: first, the technology is in our opinion a gigantic enabler for many applications that have been held back because of privacy concerns, and second, Microsoft is releasing the technology to the world under its "Open Specifications Promise", allowing anybody to use and incorporate it royalty-free.

With the U-Prove technology, users can release authenticated information about themselves in a safe and secure way. U-Prove uses a complex set of encryption and signing rules to derive information from authenticated sources. For example, a government-issued electronic ID could contain many pieces of information about an individual, including name, address, birth date, gender and biometric information. Given this credential, U-Prove allows an extract to be created that contains only the minimum information required to make a transaction. Need to verify that you are older than 18? Need to certify that you are a citizen of a particular country, or live in a particular state/county/commune? U-Prove can create a signed cryptographic extract of exactly this information, without releasing anything else - proving, for example, that you are older than 18 without having to specify your birth date, or that your registered address is in Brussels without having to disclose the address itself. The party that receives this token can then verify (through the cryptographic process) that the information is genuine.
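To make the idea concrete, here is a minimal sketch of that flow in Java. All names are invented, and the actual cryptography (blind signatures and zero-knowledge proofs) is stubbed out; this only illustrates what a minimum-disclosure presentation looks like from the outside, not how U-Prove implements it.

```java
import java.util.Map;

public class MinimalDisclosureSketch {

    // A credential as issued: all attributes, signed by the issuer.
    record Credential(Map<String, String> attributes, String issuerSignature) {}

    // A presentation token: only the derived claim, plus a proof (stubbed)
    // that it was honestly computed from a valid credential.
    record PresentationToken(String claim, String proof) {}

    // Prover side: derive "over 18" without revealing the birth year itself.
    static PresentationToken proveOver18(Credential cred, int currentYear) {
        int birthYear = Integer.parseInt(cred.attributes().get("birthYear"));
        boolean over18 = currentYear - birthYear >= 18;
        // Real U-Prove would attach a zero-knowledge proof bound to the
        // issuer's signature; here it is a placeholder string.
        return new PresentationToken("over18=" + over18, "zk-proof-stub");
    }

    public static void main(String[] args) {
        Credential cred = new Credential(
                Map.of("name", "Alice", "birthYear", "1985", "city", "Brussels"),
                "issuer-signature-stub");
        PresentationToken token = proveOver18(cred, 2010);
        System.out.println(token.claim()); // the verifier never sees the birth year
    }
}
```

The point of the design is that the verifier receives only the derived claim and a proof of its provenance; the raw attribute never leaves the user's hands.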

Privacy issues have held back many applications, most commonly because those applications required a level of trust that most users were not willing to give. Age verification, for example via a credit card, has been a problematic area. Voting is another: in order to cast a vote, it is necessary to prove that you are a resident (or citizen) of a particular area without giving away any personally identifiable information, while at the same time proper care must be taken that you are eligible to vote and that you do not vote more than once.

Microsoft acquired the U-Prove technology in March 2008 and has spent two years preparing its release. The current release includes two major milestones. First, the U-Prove intellectual property has been released, with a cryptographic specification, under Microsoft's Open Specification Promise; Microsoft will now work with standardisation bodies to get the specification approved as an official standard. Second, open source toolkits have been made available in C# and Java to reach a broad audience of developers, enticing them to harness these new features in their applications and services. Microsoft has also made available a "Community Technology Preview" that integrates the U-Prove technology with Microsoft's identity platform technologies, specifically AD FS 2.0, Windows Identity Foundation and Windows CardSpace v2.

To underscore Microsoft's commitment to releasing this technology to the public without locking users into its own products, a second specification is available that details how to integrate the technology into other open source identity selectors. The reason why Microsoft is careful to release this technology under its Open Specification Promise seems obvious: the technology will not be widely adopted if it is perceived that Microsoft controls it. Given the promise of minimum disclosure, the technology has the ring of a "magical silver bullet" to enable adoption of new applications and electronic identities. It therefore comes as no surprise that Microsoft is focusing on governments as its first major adopters. Government-issued IDs are intrinsically authoritative credentials, and privacy concerns rule much of the political debate around their adoption. Up until now, adoption of government-issued eIDs has been held back for several reasons - availability, use cases and privacy. With the privacy aspect addressed by this technology, the debate should hopefully be easier in the future. It will take time - years, most likely - for the standardisation process to be completed, but the technology is there to use and embed today. I expect high interest from developers and businesses, and we should see adoption and several tangible use cases very soon.



Gerry Gebel joins Axiomatics

Feb 19, 2010 by Felix Gaehtgens

My friend Gerry Gebel, long-time Burton Group analyst, is joining Axiomatics to ramp up the company's US presence. I received an email from him that started by saying "I thought I would give you a nice surprise on a Saturday morning"... and indeed, what a surprise it was!

I can definitely understand Gerry's choice of Axiomatics. The company is new, up-and-coming, full of very smart people and way ahead of everyone else in the area of authorisation/access management. Axiomatics occupies one of the top places on my own personal "favourite innovative companies" list, together with UnboundID, the latter continuing to amaze me with its determination (and skill!) to redefine directory services from the ground up and "do it properly". Both Axiomatics and UnboundID will in the near future surely conquer the Identity Management world as we know it! OK, joke aside ;-)

Speaking of Axiomatics, the timing (for me, personally) was actually quite interesting, as I have just finished a report on the company's "Policy Server" and "Policy Auditor". It is due to appear on our site within the next week. The report focuses on the strengths and weaknesses of the products, the contexts in which they are most useful, the areas in which they are way ahead of their competitors and those where they still need to catch up.

I've also had the pleasure of doing a few Webinars (here and here) with Axiomatics and also interviewed Babak at last year's EIC. So congratulations both to Gerry and to Axiomatics, a great team has gotten another great addition!



New Webinar series on Claims

Jan 06, 2010 by Felix Gaehtgens

It's been a few years since Kim Cameron presented the Identity Metasystem, built around the concept of "Claims". If you've been following Kuppinger Cole, you know how positive we have been about this framework. Years later, Claims are a reality, and there are multiple platforms out there that support using them. We have been advocating the adoption of the Identity Metasystem's concepts, and whilst not endorsing any particular platform per se, we acknowledge that several products support this today. From our customers we often hear questions regarding feasibility, questions about the approach and of course best practices for implementation. Naturally, there are questions around the software development cycle as well: do applications need to be fundamentally rewritten, or written differently, to make good use of the Identity Metasystem? What should developers keep in mind to make their lives easier? How can applications be written to improve privacy and security?

I'm kicking off this new year with a brand new webinar series that will focus on practical issues and implementation details. The Identity Metasystem is here today, and it's here to stay, so let's take advantage of it and unlock its potential. Without endorsing any particular product, we'll be looking at practical implementations - and indeed, products - to see how developers can harness the power of the Identity Metasystem today. Together with implementation tips, these webinars will feature good practices, and our guests are real experts in their particular implementations.

The format of this series is different from our regular webinars: they are not meant for decision makers but for developers, architects and administrators, and are therefore technical in nature. If you're interested in the topic and don't mind seeing some tidbits of code thrown in, then this is definitely for you. We're extending an open invitation to open source projects and vendors - not to showcase their products, but to show how developers can use their APIs and services. Of course I have a side agenda here as well ;-) What I am hoping is that in the end this will promote interoperability - we're sure that there are some similarities in APIs and services, and as users learn more about these, they'll put vendors under pressure to standardise their APIs and services :-)

Our first guests will be Dr. Steffo Weber and Abdi Mohammadi from Sun Microsystems. On Thursday the 14th of January at 17:00 MET (16:00 GMT, 11:00 EST, 8:00 PST) they will show us how to harness Sun's OpenSSO authentication and authorization mechanisms programmatically from any application (web applications, fat clients etc.) via the following mechanisms:

- HTTP headers
- REST-based web service
- SOAP-based web service
- OpenSSO's proprietary SDK

Steffo will demonstrate how to retrieve arbitrary user attributes from within a programme that is almost agnostic of the technical details of the underlying access management platform (in this case, OpenSSO). Using OpenSSO's identity services thus does not require much knowledge about OpenSSO itself; in fact, it is easier to retrieve information from OpenSSO than, say, from LDAP. Moreover, it can be done from any framework (Java, .Net, PHP, Ruby on Rails - you name it).
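As a taste of what that looks like, here is a minimal sketch in Java against OpenSSO's REST identity services. The host, credentials and response parsing are placeholders, and the endpoint paths (/identity/authenticate and /identity/attributes) should be checked against your OpenSSO release; error handling and session logout are omitted.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class OpenSsoRestSketch {

    // Perform a GET request and return the response body as a string.
    static String get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        String base = "http://opensso.example.com:8080/opensso/identity";

        // 1. Authenticate; the response is a single "token.id=..." line.
        String auth = get(base + "/authenticate?username=demo&password="
                + URLEncoder.encode("changeit", "UTF-8"));
        String token = auth.trim().substring("token.id=".length());

        // 2. Fetch the user's attributes using the session token.
        String attrs = get(base + "/attributes?subjectid="
                + URLEncoder.encode(token, "UTF-8"));
        System.out.println(attrs); // userdetails.attribute.* lines
    }
}
```

Note that nothing here is OpenSSO-specific beyond two URLs and a line-oriented response format, which is exactly the "almost agnostic" quality Steffo will be demonstrating.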

Steffo studied Computing Sciences in Bonn and Dortmund, Germany and holds a Ph.D. in theoretical computer science. He started as a security specialist at debis IT Security Services in 1997. In 2000 he started working for Sun Microsystems, and is an expert on highly scalable web environments, IT security and cryptography as well as identity and access management. Apart from being very knowledgeable in the field, he is also an excellent speaker and presented at our European Identity Conference last year together with his colleague Abdi Mohammadi.

Abdi is a Principal Field Technologist at Sun. With more than 20 years of industry experience, he has been responsible for the architecture, design, end-to-end testing and optimization of Internet-facing infrastructures, as well as providing business strategy assistance to some of Sun's largest and most strategic customers. Currently he is focused on directory, access management and messaging solutions at Sun.



Q & A from the XACML/ABAC Webinar

Oct 27, 2009 by Felix Gaehtgens

On the webinar that Babak and I did on ABAC and XACML three weeks back, quite a few questions popped up! Unfortunately we did not have time to answer all of them during the webinar, so we promised to collect them and answer them afterwards.

BTW today there is another webinar on a related topic: The Critical Role of XACML in SOA Governance and Perimeter Web Service Security

Q: Please specify the major difference between role mining (role consolidation based on role attributes) and the privilege-giving attribute mining approach.

A: (Babak) Role mining is about finding groups of permissions that can be bundled into roles, which can then be assigned to users. The idea of privilege-giving attribute mining is to find those attributes that affect permissions and use them to create access rules. Let’s take an example. In a business application, users may have been assigned permissions to Create and Release Purchase Orders, Maintain Vendor Master Data, Release Requisitions, Register Service Entry and Release, etc. In a role mining project doing a bottom-up survey of permissions, an analysis is made of these permissions and how they are grouped into roles. If a role called Purchasing combines all of the above permissions, we would probably identify a Segregation of Duties violation between the right to Release Purchase Orders and the right to Maintain Vendor Master Data. As a result we would suggest remodeling the Purchasing role to avoid the conflict. In a top-down approach, role mining is about identifying a role in business-critical processes that needs to be entitled with certain permissions in order to serve its purpose in that process. Role mining projects typically combine top-down and bottom-up, which in the end leads to considerable effort to map permissions to roles in such a way that everyone is able to do his or her job without acquiring excessive permissions – quite a daunting task.

An attribute mining project would, much like the top-down approach in role mining, start with business processes to define which RULES for access can be derived. Example: attestation of purchase orders exceeding the amount of $xx can only be made by users who a) belong to the cost center charged and b) have a management level of 10 or higher. From this rule we can derive that the following attributes are privilege-giving: a) the user profile’s cost center assignment, b) the user’s management level, c) the purchase order’s cost center and d) the purchase order’s amount. These attributes allow the rule to be formalized like this: if purchaseorder.amount > $xx, then permit only if user.costcenter = purchaseorder.costcenter and user.managementlevel >= 10, otherwise deny.
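For illustration, here is that rule condensed into plain Java. The attribute names and the $xx threshold are made up for the example; in practice the rule would live in a policy language such as XACML and be evaluated by a policy decision point rather than hard-coded.

```java
public class AttestationRuleSketch {

    static final double THRESHOLD = 10_000; // stands in for the "$xx" in the rule

    // Decide whether a user may attest a purchase order. Orders at or below
    // the threshold are not restricted by this particular rule.
    static boolean permitAttestation(String userCostCenter, int managementLevel,
                                     String poCostCenter, double poAmount) {
        if (poAmount <= THRESHOLD) return true;
        return userCostCenter.equals(poCostCenter)  // a) belongs to the cost center charged
                && managementLevel >= 10;           // b) management level 10 or higher
    }

    public static void main(String[] args) {
        System.out.println(permitAttestation("CC-42", 11, "CC-42", 25_000)); // true
        System.out.println(permitAttestation("CC-42", 8,  "CC-42", 25_000)); // false: level too low
        System.out.println(permitAttestation("CC-07", 11, "CC-42", 25_000)); // false: wrong cost center
    }
}
```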

Q: Tell me more / define better what you mean when you talk about the missing context of the RBAC model.

A: (Babak) What we argue is that RBAC is a static model, which makes it difficult to capture the context that may affect an access decision. If we try to capture the context for an access in terms of roles, we easily get a role explosion. We may for instance need to differentiate permissions depending on time of day – some users have access only during normal business hours whereas others have 24x7 access. This could lead to the creation of two roles, one for normal business hours, one for extended access. Add other context-related conditions such as remote login, authentication strength, line encryption etc. and we end up needing to capture very many different roles. It is worth noting that normal ERP systems typically need to handle very large numbers of roles (thousands) internally to capture all their user permissions. If a combined role structure from multiple ERP systems must be established with contextual aspects included, role explosion simply becomes unmanageable.

Q: I didn't quite get the difference between attribute-based access control and rule-based access control. Can you elaborate?

A: (Felix) In a nutshell, the main difference between ABAC and RBAC is that RBAC revolves around the concept of the role. ABAC can use any attributes (including the role) so it is much more flexible.

A:  (Babak) Attribute based access control is not in conflict with rule based access control. Rule based access control is about creating rules defining access permissions, but if these rules are based on attributes then we have a type of attribute-based access control.

Q: I understand there is a better way than the RBAC model, but a language by itself is not enough at all - you need a product that supports it. Is this the message you want to send out here?

A: (Babak) Well, the purpose of the workshop is to present the concept of ABAC and how it solves some of the common and well-known issues with RBAC. But you are right that we also need products to support this new approach. Axiomatics has a complete product suite supporting the full XACML policy life cycle. Most vendors of business applications and IAM products also have more or less elaborate support for XACML built in.

Q: Is there a defined migration path from an established RBAC model to an ABAC model?

A: The OASIS XACML committee has released an XACML Profile for Role Based Access Control (RBAC) which can be used as a basis for migration projects. That said, one naturally needs to be aware of the constraints given by the architecture of legacy systems – “converting” an existing RBAC-based business application to an ABAC-based model may require a considerable effort. In some instances it may be more attractive to implement connectors that can provision attribute-based rules from a Policy Administration Point to application specific rule configurations which in turn may be RBAC based.

Q: How do you manage attribute-based access to multiple resources? Traditionally, privilege attributes are bundled into roles and assigned to users. If you have many attributes controlling access to resources, doesn't that increase the administrative burden?

A: No. As we said in the presentation, far fewer attributes will most likely be needed to define access permissions than the number of roles otherwise required. This is because we define access rules based on the attributes rather than representing each different set of permissions as a role.

Q: Sounds like this method will have significant application impact - can you respond to this concern?

A: Yes, we believe that many applications will in the future implement the XACML request-response protocol. Already today, most large vendors of Identity & Access Management products or applications that handle business critical data have some sort of “XACML story”.

Q: Is ABAC related to claims-based authentication? Are they corresponding concepts?

A: (Babak) Yes, one way to see claims is as provisioning of attributes to the access control system, so these are definitely complementary technologies.

A: (Felix) Authentication and authorisation are two different concepts, but of course they are related: authentication tells us who the user is, and authorisation then tells us whether the user is allowed to do something. Claims-based authentication is based on the fact that a "Claim" already delivers attributes to an application. What happens then? Those attributes can be made available to the authorisation engine.

Q: Are there any good resources and real world examples to get started with ABAC?

A: (Babak) Well, a good place to start is the XACML TC page. Axiomatics also has a very informative website (www.axiomatics.com) with introductory information on ABAC and XACML.

A: (Felix) We have also recently released an XACML technology report that is available from our web site.

Q: RBAC seems, after implementation, quite static in maintenance; ABAC seems maintenance-intensive, since attribute values vary over time (daily, hourly, etc.). Would that not make maintenance more expensive and more complex?

A: (Babak) Well, it is really the other way around. The idea is not to change a time attribute manually, but to fetch the value from the right attribute source - which, in this case, is simply a clock.

A: (Felix) To add to Babak's point: ABAC makes use of information that already exists in an enterprise. The initial maintenance cost is in delivering those attributes to the policy decision engine, and for that, good technology such as virtual directories already exists.



Google makes changes to Android Market, but many are still unhappy

Sep 28, 2009 by Felix Gaehtgens

Under immense pressure from users and developers, Google has recently announced some changes to Android Market. But this may turn out not to be enough. Even though sales of mobile phones with Google's Android operating system are ramping up, developers find it hard to make money on the platform. A recent bombshell was a blog post from Larva Labs towards the end of August: Larva Labs' average income across all its paid Android applications was only $62.39 per day - and that included games ranked #5 and #12 in the Android Market. This is a tiny figure compared to Apple's App Store, where a #5 position earns around $3,500 a day according to sales figures from app vendors.

If developers cannot make a profit from their Android offerings, they will turn away from the platform. As of today, the Google Android Market forums are full of gripes from Android developers trying to sell their software. A common complaint concerns the way applications are displayed in the Android Market: until now, developers could not post screenshots and were limited to a 325-character description of their program. Google has since announced that this limitation is lifted in version 1.6 of the Android platform, which was released recently.

Another frequent complaint is that Android users in many countries cannot buy applications at all, and users in yet other countries cannot even access free applications through the Android Market. Nor can developers in many countries sell their applications - instead, they are forced to hold them back or offer them for free. The only "supported" countries for paid applications are Austria, France, Germany, the Netherlands, Spain, the US, the UK and (since very recently) Italy. Users from those countries can buy applications, and developers from those countries (plus Japan) can sell applications.

That leaves many users and developers standing in the rain. Google is aware of the problem and states that it is "working hard" on the issue, but users are not convinced. Some of them are livid: "Who is sleeping behind his desk [at Google]?" demands an angry Swiss user who bought an Android handset only to find out that he cannot buy applications. Others are clueless as to the reason: "Nokia doesn't restrict countries with Ovi Store [the equivalent of the Android Market for Nokia's phones]. This is so unlike Google. Why are they punishing us for investing into their platform?" asks a Swedish game developer.

In the last two months, only one new "supported country" for paid applications has been added: Italy. This slow pace is hurting Google's image in many countries, as handsets are on sale there while users are effectively shut out of the Android Market. But an even more serious side effect is starting to show: piracy. As users have no way to legally buy the applications they want, some are turning to illegal Android distribution sites, which are proliferating on the Internet.

The discussion forums are buzzing with developers complaining about being shut out. Others (from "supported" countries) are offering to resell applications on behalf of those who are shut out of the Market because of their location. Alternative distribution channels are also under discussion, but many developers believe that these pale in comparison with the native market applications, such as Apple's App Store, that come with the handsets.

Google, at least, acknowledges the problem. When asked, a Google spokesperson replied: "We'll add support for additional countries in the coming months, but we have nothing to announce at this time". Until then, many developers will face a difficult decision on whether they can make money on the Android platform.



Beyond RBAC

Sep 28, 2009 by Felix Gaehtgens

Please join me tomorrow for a free Webinar on the topic "Beyond Role Based Access Control - the ABAC Approach".

Many - if not most - organisations are not getting as much value as they thought from RBAC (role based access control). In fact, many RBAC projects start with high expectations, but quickly get bogged down due to many issues and problems. Eventually it turns out that the initial expectations were too ambitious. But why? Is RBAC making promises that are difficult to keep?

Many in the industry (Babak and myself included) think that this is because the real world is simply too complex to model efficiently with RBAC. This means that organisations must be realistic about what they can achieve with RBAC and mitigate some of its shortcomings. But isn't there a better way? There certainly is, and that's what we'll be speaking about tomorrow. There's nothing wrong with roles per se, but we need to add more context to them. Then, finally, we can reap the full benefits of agile access management, and reach or even surpass the value that was expected from troubled RBAC projects.

I am delighted to speak again on a Webinar with Babak Sadighi, CEO and one of the founders of Axiomatics. Babak and his colleagues are an extremely smart bunch of people who are very passionate about access control. They have researched the topic for many years. I've interviewed Babak at the last European Identity Conference, which you can see here. So if you're interested in access and role management, please join us tomorrow, I promise that you will not be disappointed :-)



Quick Wins in Identity Management

Aug 18, 2009 by Felix Gaehtgens

In times of economic downturn, the pressure is on to save costs and increase efficiency. Everybody working in the IT sector will be familiar with projects being put on hold, spending being frozen and colleagues being laid off. Unsurprisingly, most of those left working in IT departments see their workload and working hours increase, as they are asked to deliver more with fewer resources. These are the typical signs of a dire economy that may or may not be slowly starting to turn around; either way, those particular problems are not going away any time soon.

With the current squeeze on cost and corporate spending, many IT departments find themselves in a true quagmire. On one hand, the IT industry is focusing on efficiency like never before, elaborating new approaches and processes to do more with less. Governance and risk management is a big issue - its absence greatly contributed to the current crisis. IT is under scrutiny to be more of a business enabler and less of a cost center. All of this requires change, new technology and strategic vision. But as IT spending is capped or even reduced, a Catch-22 situation arises. Under pressure, some IT departments opt for more tactical approaches that can eventually be expanded into a broader strategy. Quick wins are needed to get there.

So what are the quick wins to be had in identity and access management? In order to get projects approved, many IT directors have to demonstrate an almost immediate return on investment. I have heard of projects not getting approval unless ROI can be demonstrated within six months, or in some cases even less. The good news is that there are some pockets of "low-hanging fruit" in identity management with a very immediate ROI. But keep in mind the old wisdom of "think big – start small – grow big": ideally your quick wins should be stepping stones in a broader, transformative strategy to deliver more value.

Consolidation

A good start is always consolidation. This can save money in staff time, server resources, and license and support costs. For ROI calculations, the license and support costs will usually not translate into savings until a later date, but savings in staff time and server resources are usually immediate. Consolidation projects are also a vital step towards getting your house in order for a broader strategy to improve efficiency. Besides, consolidation is simply good practice and is usually easy to get approved when the ROI case can be made. The key is to get the maximum while spending the minimum of time and money.

In identity management, this is a good time to review the number of identity data silos in your enterprise and think about eliminating some of them through consolidation. A good way to do this is with virtual directories. Applications are often installed with their own directory server, and identity data is then duplicated through provisioning systems or synchronization mechanisms. Virtual directories can help eliminate some of those extra directory servers by allowing multiple applications to have different "views" of the data whilst connecting to the same physical data source.

The Evergreen: Login and Password simplification

It is a well-known fact that most users have a problem with passwords. Not only do they tend to forget them and then need the service desk's help to reset them; it becomes exponentially worse when users have multiple different passwords that need to be remembered and changed at different intervals. It should therefore come as no surprise that projects that simplify the "password mess" are highly visible, and the ROI is well documented. However, comprehensive single sign-on is complex, lengthy and expensive to implement.

When password simplification is done in smaller steps, however, the value can be immediate. Because this is highly visible from the standpoint of the users, the perceived value is usually significant. Focus on eliminating either additional passwords or additional sign-ons. For example, if two systems use different passwords, consider password synchronization between the two. If you already have a single sign-on system in place, there might be the possibility of adding more applications to it.

Role Management

Roles and groups are used to give users access to resources and allow them to do things. As more applications are deployed, the number of roles increases. Often, roles are created for one purpose and then re-used for another purpose by another department or application, which can create unwanted entitlements. Sometimes roles are forgotten and never retired. After some time, it becomes difficult to tell who actually has access to what, and who authorized that access. This can be - and usually is - a big problem. Organizations that are regulated – for example by the Sarbanes-Oxley Act or Basel II – must provide auditors with lengthy reports about access to high-risk and high-impact applications.

Role management projects can address these shortcomings and enforce proper controls, set up workflows for entitlements and attestation of access. For these projects, ROI can be quick to materialize and implementation time can be fairly short when – and this is important - priorities are set to focus on the most critical applications first. Once the initial quick wins are demonstrated, additional systems and applications can be added subsequently to the role management system.

Final words

As usual, those who take a good long-term view are rewarded most in the long run. But when strategic initiatives are out and the thinking is tactical, the areas mentioned above have shown the potential for quick wins. The downturn affects everybody, but that cannot be an excuse to do nothing – those who are smart and creative will be able to push ahead of the others. Hopefully these ideas will help you deliver value in these tough times.



Novell takes off into the Cloud

Aug 18, 2009 by Felix Gaehtgens

Novell has very recently announced a new product entitled "Cloud Security Services" - a comprehensive set of software that allows cloud providers to connect customers to their infrastructure in a safe and efficient way. This is Novell's first product that is not marketed to enterprises; instead it is sold to cloud service providers, who will license it for their customers.

Cloud computing is generating much interest. A recent statistic from Google shows that searches for the phrase "cloud computing" are growing steadily. Why? In their search for productivity and efficiency, enterprises are looking to offload non-core processes. The same reasons that fueled outsourcing over the last decades are now driving cloud computing. The promises are enticing, yet there are many open issues and worries - especially in terms of security and privacy. That (amongst other things) keeps many potential cloud computing customers sitting on the fence.

Novell has focused a large share of its brainpower on cloud computing over the past year and has come up with a strategy and a set of products and partnerships. In fact, Novell's CEO Ron Hovsepian made the bold move of summoning the company's development managers together when the economic crisis was at its worst. Instead of talking to them about cost savings (like everybody else), he rallied them to make an aggressive push to become a leader in the hot cloud computing infrastructure segment. This seems to have paid off: by focusing a large part of its research and development on this area, the company has not only submitted 63 patents, but also solved major issues around cloud computing security that have until now held back investment by customers.

The recently announced "Cloud Security Services" seems like the pinnacle of Novell's focus. It provides a secure framework that cloud providers can use to connect to their customers. What's so special about it, compared to traditional federation technology? For the first time, Novell solves important parts of the governance and auditing associated with software-as-a-service (SaaS) and other cloud services.
 
Who will buy this product? Cloud providers, and therefore, indirectly, end customers. Cloud providers will need to prove to their customers all details about access, usage and entitlements. Before Novell took a stab at this, there wasn't much on offer: when it comes to accountability, the cloud has been murky at worst and cloudy at best.

Implementing proper controls to ensure regulatory compliance and proper business practices is essential. But how can this extend off premises? As things become distributed - as is the case with cloud computing - audit logs become distributed as well, with no clear vision of how to collect, combine and analyse this data in a comprehensive way. Novell seems to have solved this in an innovative way: the CSS product combines federation technology with SIEM (Security Information and Event Management).

An inevitable question is what to do with this data, now that it is available and can be collected. Novell has partnered with a company called PivotLink that provides software for complete online analysis of the collected information. This fits Novell's CSS like a glove: CSS collects and correlates events and audit trails, and the PivotLink software acts as a dashboard to provide extensive reporting and analysis.



Microsoft: minimum disclosure about minimum disclosure

Aug 03, 2009 by Felix Gaehtgens

A good year ago, Microsoft acquired the U-Prove technology from an innovative company called Credentica, founded by visionary cryptographer Stefan Brands. Credentica had come up with a privacy-enabling technology that effectively allows users to safely transmit the minimum required information about themselves when required to - and gives those receiving the information proof that it is valid. For example: if a country issued a digital identification card, and a service provider needed to check whether the holder is over 18 years of age, the technology would allow it to do just that - instead of having to transmit a full data set including the date of birth. The technology works through a complex set of encryption and signing rules and is a win-win both for the users who need to provide information and for those taking it (also called "relying parties" in geek speak). With the acquisition, Microsoft now owns all of the rights to the technology - and more importantly, the patents associated with it. Stefan Brands is now part of Microsoft’s identity team, filled with top-notch brilliant minds such as Dick Hardt, Ariel Gordon, Mark Wahl, Kim Cameron and numerous others.

Privacy advocates should be (and are) happy about this technology, because it effectively allows consumers to protect their information instead of forcing them to give up unnecessary information to transact business. How many times have we had to hand over personal information for some type of service without any real need for it? For example, if you’re not shipping anything to me, what’s the point of my providing my home address? If you are legally required to verify that I’m over 18 (or 21), why would you really need to know my credit card details and my home address? If you need to know that I am a customer of one of your partner banks, why would you also need to know my bank account number? Minimum disclosure makes transactions possible with exactly the right set of personal details being exchanged. For the enterprises taking the data, this is also a very positive thing: instead of having to “coax” unnecessary information out of potential customers, they can make a clear case for what information they do require to fulfil the transaction, and will ultimately find consumers more willing to do business with them.

So all of this is really great. And what’s even better, Microsoft’s chief identity architect, Kim Cameron, has promised not to “hoard” this technology for Microsoft’s own products, but to actually contribute it to society in order to make the Internet a better place. But more than a year down the line, Microsoft has not made a single statement about what will happen to U-Prove: minimum disclosure about its minimum disclosure technology (pun intended!). In a post that I made a year ago, I tried to make the point that this technology is so incredibly important for the future of the Internet that Microsoft should announce its plans for the technology (and the patents associated with it).

Kim’s response was that Microsoft had no intention of “hoarding” the technology for its own purposes. He highlighted, however, that it would take time - time for Microsoft’s lawyers, executives and technologists to iron out the details.

Well, it’s been a year, and the only “minimum disclosure” we can see is Microsoft’s unwillingness to talk about it. The debate is heating up around the world about different governments’ proposals for electronic passports and ID cards. Combined with the growing dangers of identity theft and continued news about spectacular leaks and thefts of personal information, a release of this technology would really make our day. Unless you’re a spammer or identity thief, of course.

So it’s about time Microsoft started making some statements to reassure all of us about what is going to happen with the U-Prove technology and - more importantly - with the patents. Microsoft has been reinventing itself, making a continuous effort to turn from the “bad guys of identity” of a decade ago (in the old Hailstorm days with Microsoft Passport) into the “good guys” of identity, with its open approach to identity, privacy protection and standardisation. At Kuppinger Cole we have loudly applauded the Identity Metasystem and Infocards as a ground-breaking innovation that we believe will transform the way we use the Internet in the years to come. Now is the time to really set off the transformative wave of innovation that comes when we finally address the dire need for privacy protection. Microsoft has the key in its hands - or rather, locked in a drawer. C’mon guys, when will that drawer finally be opened?



Finally: an open XACML API!

Jul 31, 2009 by Felix Gaehtgens

Whilst at the Burton Group’s Catalyst 2009 conference, I ran into Prateek Mishra from Oracle, who told me somewhere between the lines of our conversation that a new XACML API had just been posted to the OASIS XACML TC. It was a “soft launch” announced at the Kantara meetings on Monday at Burton Catalyst (which, very unfortunately, I missed). When Prateek mentioned it, it stopped me dead in my tracks, because I find it really significant news – a very important step towards flexible access control policy based on XACML. Before I get into the details, let me step back a bit and explain what this is, why I find it so significant and why it got me so excited.

XACML, the eXtensible Access Control Markup Language, is an XML-based standard for authorization and access control. It is based on the Attribute Based Access Control (ABAC) model that is hailed as the next generation of access control models. According to many, ABAC will ultimately replace RBAC (Role Based Access Control). Instead of using only a role as the determining factor in whether to grant access, many attributes can be used. Of course roles can be used in ABAC as well – since ABAC can use multiple attributes to make access control decisions, the role can simply be one of those attributes – so ABAC can emulate RBAC perfectly while adding many additional advantages. This means that it is possible to add context to access control decisions, and it allows for finer granularity, tighter controls and more flexibility for the business.

Here’s an example: I might be authorised to make bank transfers from an application. In RBAC, this would usually mean that I have a role enabled for my account, for example “Make_Transfers”. Simple, right? Well, perhaps not. As the need for control gets tighter, I may be authorised to make transfers only up to a value of 2000 EUR without any approval; anything above that requires the approval of at least two of the financial supervisors. So how would you do this with RBAC? Not so easily. And with ABAC? Piece of cake. With RBAC, the bank transfer application would have to contain some hardwired piece of logic implementing the “max 2000 EUR without approval” rule. With ABAC, the policy can simply express that if I have the role “Make_Transfers” and “transfer_amount <= 2000”, the operation is approved. Also approved is an operation where I have the role “Make_Transfers”, “transfer_amount > 2000” and “valid_approvals >= 2”. Everything else is denied.
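To make that concrete, here is the policy condensed into a few lines of Java. This is only a sketch of the logic – in a real deployment the rules would be written in XACML and evaluated by a policy decision point, not compiled into the application.

```java
import java.util.Set;

public class TransferPolicySketch {

    // Deny by default; permit small transfers on the role alone, and large
    // transfers only with at least two supervisor approvals.
    static boolean permit(Set<String> roles, double amount, int validApprovals) {
        if (!roles.contains("Make_Transfers")) return false;
        if (amount <= 2000) return true;
        return validApprovals >= 2;
    }

    public static void main(String[] args) {
        Set<String> roles = Set.of("Make_Transfers");
        System.out.println(permit(roles, 1500, 0)); // true: small transfer
        System.out.println(permit(roles, 5000, 1)); // false: needs two approvals
        System.out.println(permit(roles, 5000, 2)); // true: approvals present
    }
}
```

The point is that the threshold and approval count are just attributes in a policy, so tightening them later is a policy change, not a code change.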

So let me get back to the XACML API. There has been real adoption of XACML – I could see it for myself here at Burton Catalyst 2009 just from the sheer number of vendors that are actively supporting it and using it for policy enforcement and access control. What has been missing, however, is a ready-to-use API that would allow developers to make an access control request from their application and get a decision back. Now we finally have an API that lets developers do just that. I spent over an hour yesterday hunched over my brand-new netbook with Prateek and Pat Patterson, poring over the API, and can only say: thumbs up!

So what can this API be used for? Is it easy enough for developers to jump on and enable their applications for externalised access control? Well, not quite. XACML is a very powerful and expressive policy modeling language, and it also defines a request/response protocol. This creates a certain level of complexity. Whilst it is of course possible for application developers to use this API directly, I think that higher-level authorisation APIs are still needed to make it “dead easy” for developers to externalise access control. For comparison, I was very impressed by how easy it is for .NET developers to harness the Geneva Framework (now called WIF, or Windows Identity Foundation). Microsoft has made it truly “dead easy” for developers to make their applications ready for externalised authentication and claims – with just a few lines of “plumbing code”. Externalising authorisation must become just as simple. The XACML API is a great start: a foundation to which simpler APIs and existing access control frameworks can be connected.
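To give a feel for the request/response shape – without reproducing the actual contributed API, whose exact signatures I won’t quote here – the following sketch shows what a PEP-level authorisation call boils down to. Every interface and method name below is invented for illustration.

```java
import java.util.Map;

public class PepSketch {

    // The four decisions a XACML PDP can return.
    enum Decision { PERMIT, DENY, INDETERMINATE, NOT_APPLICABLE }

    // A deliberately simplified decision interface: subject, resource and
    // action attributes in, a decision out.
    interface PolicyDecisionPoint {
        Decision decide(Map<String, String> subject,
                        Map<String, String> resource,
                        Map<String, String> action);
    }

    public static void main(String[] args) {
        // A toy PDP standing in for a real XACML engine.
        PolicyDecisionPoint pdp = (subject, resource, action) ->
                "transfer".equals(action.get("action-id"))
                        && "Make_Transfers".equals(subject.get("role"))
                        ? Decision.PERMIT : Decision.DENY;

        Decision d = pdp.decide(
                Map.of("subject-id", "alice", "role", "Make_Transfers"),
                Map.of("resource-id", "account-4711"),
                Map.of("action-id", "transfer"));
        System.out.println(d); // PERMIT
    }
}
```

A higher-level API would hide even this: ideally an application asks “may alice transfer from account-4711?” in one call and never sees the attribute plumbing at all.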

Kudos to Cisco and Oracle for having contributed this. Great work, guys!



