Blog posts by Matthias Reinwarth

Beyond simplistic: Achieving compliance through standards and interoperability

"There is always an easy solution to every problem - neat, plausible, and wrong.
 (
H.L. Mencken)

Finally, it's beginning: the GDPR is gaining more and more visibility.

Do you also get more and more GDPR-related marketing communication from IAM and security vendors, consulting firms and, ehm, analyst companies? They all offer some advice for starting your individual GDPR project/program/initiative. And of course, they want you to register your personal data (name, company, position, company size, country, phone, mail etc.) before sending that ultimate info package over to you. And obviously, they want to acquire new customers and provide you and everyone else with marketing material.

It usually turns out that the content of these packages is OK, but not really overwhelming: a summary of the main requirements of the GDPR, plus, in the best cases, some templates that can be helpful, if you can find them between the marketing material included in the "GDPR resource kit". But the true irony lies in the fact that under the GDPR it is not allowed to make a service conditional on consent to data that is not needed for the service being offered (remember?... name, company, position, company size, country, phone, mail etc.).

The truth is that GDPR compliance does not come easily, and the promise of an easy shortcut via any GDPR readiness kit won't work out. Instead, newly designed as well as already implemented processes for storing and processing personal and sensitive data will have to undergo profound changes.

Don't get me wrong: Having a template for a data protection impact assessment, a pre-canned template for breach notification, a decision tree for deciding whether you need a DPO or not, and some training material for your staff are all surely important. But they are only a small part of the actual solution.

So in the meantime, while others promise simple solutions, the Kantara Initiative is working on processes and standards for adequate and, in particular, GDPR-compliant management of Personally Identifiable Information. These initiatives include UMA (User-Managed Access), Consent and Information Sharing, OTTO (Open Trust Taxonomy for Federation Operators) and IRM (Identity Relationship Management).

Apart from several other objectives and goals, one main task is to be well-prepared for the requirements of the GDPR (and e.g. eIDAS). The UMA standard is now reaching a mature 2.0 status. Just a few days ago, two closely interrelated documents were made available for public review that make the cross-application implementation of access based on provided consent possible. "UMA 2.0 Grant for OAuth 2.0 Authorization" enables asynchronous party-to-party authorization (between requesting party = client and resource owner) based on rules and policies. "Federated Authorization for User-Managed Access (UMA) 2.0" defines, on the server side, authorization methods that interoperate between various trust domains. This, in turn, allows the resource owner to define her/his rules and policies for access to protected resources in one single place.
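To make the flow tangible, here is a minimal in-memory sketch of the UMA 2.0 grant: the resource server answers an unauthorized request with a permission ticket, and the authorization server exchanges that ticket for a requesting party token (RPT) only if the owner's policy allows it. All class and method names are hypothetical; real deployments use the HTTP endpoints and token introspection defined in the specifications.

```python
import secrets

# Grant type defined by "UMA 2.0 Grant for OAuth 2.0 Authorization"
UMA_GRANT = "urn:ietf:params:oauth:grant-type:uma-ticket"

class AuthorizationServer:
    def __init__(self):
        self.tickets = {}   # permission ticket -> (resource, scope)
        self.policies = {}  # resource -> client ids the owner has authorized
        self.rpts = {}      # requesting party token -> (resource, scope)

    def set_policy(self, resource, allowed_clients):
        # The resource owner defines access rules in one single place.
        self.policies[resource] = set(allowed_clients)

    def issue_ticket(self, resource, scope):
        ticket = secrets.token_hex(8)
        self.tickets[ticket] = (resource, scope)
        return ticket

    def token(self, grant_type, ticket, client_id):
        # The client exchanges its permission ticket for an RPT, if policy allows.
        if grant_type != UMA_GRANT or ticket not in self.tickets:
            return {"error": "invalid_grant"}
        resource, scope = self.tickets.pop(ticket)
        if client_id not in self.policies.get(resource, set()):
            return {"error": "access_denied"}
        rpt = secrets.token_hex(8)
        self.rpts[rpt] = (resource, scope)
        return {"access_token": rpt}

    def introspect(self, rpt):
        return self.rpts.get(rpt)

class ResourceServer:
    def __init__(self, auth_server):
        self.auth_server = auth_server

    def access(self, resource, scope, rpt=None):
        # Without a valid RPT the RS answers 401 plus a fresh permission ticket.
        if rpt and self.auth_server.introspect(rpt) == (resource, scope):
            return 200, {"data": "contents of " + resource}
        return 401, {"ticket": self.auth_server.issue_ticket(resource, scope)}

# Asynchronous party-to-party authorization: client -> RS -> AS -> RS
auth = AuthorizationServer()
rs = ResourceServer(auth)
auth.set_policy("/photos", {"alice-client"})

status, body = rs.access("/photos", "read")   # first attempt: 401 + ticket
grant = auth.token(UMA_GRANT, body["ticket"], "alice-client")
status, body = rs.access("/photos", "read", rpt=grant["access_token"])  # now 200
```

Note how the client never talks to the resource owner directly: consent lives entirely in the authorization server's policy store, which is exactly what makes the grant asynchronous.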

These methods and technologies serve two major purposes: They enable the resource owner (you and me) to securely and conveniently define consent and to implement and enforce it through technology. And they enable requesting parties (companies, governments, and people, again you and me) to have reliable and well-defined access in highly distributed environments.

So these methods need to be evaluated as to whether they are adequate for getting to GDPR compliance and far beyond: by empowering the individual, enabling compliant business models, providing shared infrastructure and designing means for implementing reliable and user-centric technologies. Following these principles can help achieve compliance. "Beyond" means: Take the opportunity of becoming and being a trusted and respected business partner that is known for proactively valuing customer privacy and security. Which is surely much better than only preparing for the first/next data breach.

This surely is not an easy approach, but it goes to the core of the actual challenge. Suggested procedures, standards, guidelines and first implementations are available. They are provided to support organizations in moving towards security and privacy from the ground up. The UMA specifications, including the ones described above, are important building blocks for those who want to go beyond the simple (and insufficient) toolkit approach.

GDPR and Customer Data - Eyes on the Stars and Feet on the Ground

Big data analytics is getting more and more powerful and affordable at the same time. Probably the most important data within any organisation is knowledge of and insight into its customers' profiles. Many specialized vendors target these organisations. And it is obvious: The identification of customers across devices and accounts, deep insight into their behaviour and the creation of rich customer profiles come with many promises. The adjustment, improvement and refinement of existing product and service offerings, while designing new products as customer demand changes, are surely some of those promises.

Dealing with sensitive data is a challenge for any organisation. Dealing with personally identifiable information (PII) of employees or customers is even more challenging.

Recently I have been in touch with several representatives of organisations and industry associations who presented their views on how they plan to handle PII in the future. The potential of leveraging customer identity information is clearly understood today. A hot topic is of course the GDPR, the General Data Protection Regulation as issued by the European Union. While many organisations aim at being compliant from day one (= May 25, 2018) onward, it is quite striking that there are still organisations around which don't consider that important. Some consider their pre-GDPR data protection, with a few amendments, as sufficient and subsequently don't have a strategy for implementing adequate measures to achieve GDPR-compliant processes.

To repeat just a few key requirements: Data subject (= customer, employee) rights include timely and complete information about personal data being stored and processed. This also includes a justification for doing so lawfully. Processes for consent management and reliable mechanisms for implementing the right to be forgotten (deletion of PII, in case it is no longer required) need to be integrated into new and existing systems.
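For illustration only, these data subject rights can be sketched as a small store that binds processing to consented purposes, answers access requests, and supports erasure. The class and method names are made up for this sketch and are in no way a complete implementation of the regulation's requirements.

```python
from datetime import datetime, timezone

class PersonalDataStore:
    """Toy sketch of three GDPR data subject rights: consent-bound storage,
    the right of access, and the right to be forgotten (erasure)."""

    def __init__(self):
        self.records = {}   # subject_id -> personal data attributes
        self.consents = {}  # subject_id -> {purpose: consent timestamp}

    def store(self, subject_id, data, purpose, consent_given):
        # Processing without consent for the stated purpose is refused.
        if not consent_given:
            raise PermissionError("no consent for purpose: " + purpose)
        self.records.setdefault(subject_id, {}).update(data)
        self.consents.setdefault(subject_id, {})[purpose] = \
            datetime.now(timezone.utc).isoformat()

    def subject_access(self, subject_id):
        # Right of access: report what is stored and for which purposes.
        return {"data": dict(self.records.get(subject_id, {})),
                "purposes": sorted(self.consents.get(subject_id, {}))}

    def erase(self, subject_id):
        # Right to be forgotten: delete personal data and the consent trail.
        self.records.pop(subject_id, None)
        self.consents.pop(subject_id, None)
```

The key design point is that consent is recorded per purpose with a timestamp, so the "justification for doing so lawfully" can be produced on demand.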

It is true: In Europe, and especially in Germany, data protection legislation and regulations have always been challenging. But with the upcoming GDPR things are changing dramatically. And they are also changing for organisations outside the EU in case they are processing data of European citizens.

National legislation will fill in details for some aspects deliberately left open within the GDPR. Right now this seems to weaken or “verschlimmbessern” (improve for the worse, as we say in German) several practical aspects of it throughout the EU member states. Quite some political lobbying is currently going on, and criticism grows e.g. over the German plans. Nevertheless, at its core, the GDPR is a regulation that will apply directly to all European member states (and quite logically also beyond). It will apply to personal data of EU citizens and to data being processed by organisations within the EU.

Some organisations fear that compliance with the GDPR is a major drawback in comparison to organisations, e.g. in the US, which deal with PII under presumably lesser restrictions. But this is not necessarily true, and it is changing as well, as this example shows: The collection of viewing data, through software installed on 11 million "smart" consumer TVs without their owners' consent or even their knowledge, led to a payment of $2.2 million by the manufacturer of these devices to the (American!) Federal Trade Commission.

Personal data (and the term is defined very broadly in the GDPR) is processed in many places, e.g. in IoT devices, in the smart home, in mobile phones, in cloud services or in connected desktop applications. Getting to privacy by design and security by design as core principles should be considered a prerequisite for building future-proof systems managing PII. User consent for the purposes of personal data usage, together with managing and documenting proof of consent, are major elements of such systems.

GDPR and data protection do not mean the end of Customer Identity Management. On the contrary: the GDPR needs to be understood as an opportunity to build trusted relationships with consumers. The benefits and promises as described above can still be achieved, but they come at quite a price and with substantial effort, as this must be well-executed (= compliant). But this is the real business opportunity as well.

Being a leader, a forerunner and the number one in identifying business opportunities, in implementing new business models and in occupying new market segments is surely something worth striving for. But being the first to fail visibly and obviously in implementing adequate measures, e.g. for maintaining the newly defined data subject rights, should be considered something that needs to be avoided.

KuppingerCole will cover this topic extensively in the next months with webinars and seminars. And, one year before it comes into effect, the GDPR will also be a major focus at the upcoming EIC 2017 in May in Munich.

KYC is a must, not only for compliance reasons, but what about KYE?

Providing a corporate IT infrastructure is a strategic challenge. Delivering all services needed and fulfilling all requirements raised by all stakeholders is surely one side of the coin. Understanding which services customers and all users in general are using, and what they are doing within the organisation's infrastructure, no matter whether it is on premises, hybrid or in the cloud, is surely an important requirement. And it is more and more built into the process framework within customer-facing organisations.

The main drivers behind this are typically business-oriented aspects, like customer relationship management (CRM) processes for the digital business and, increasingly, compliance purposes. So we see many organisations currently learning much about their customers and site visitors, their detailed behaviour and their individual needs. They do this to improve their products, their service offerings and their overall efficiency, which is of course directly business-driven. Understanding your customers comes with the immediate promise of improved business and increased current and future revenue.

But the other side of the coin is often ignored: While customers and consumers are typically kept within clearly defined network areas and online business processes, there are other or additional areas within your corporate network (on-premises and distributed) where different types of users often act much more freely and are much less monitored.

Surprisingly enough, there is a growing number of organisations which know more about their customers than about their employees. But this is destined to prove short-sighted: Maintaining compliance with legal and regulatory requirements is only possible when all-embracing and robust processes for the management and control of access to corporate resources by employees, partners and the external workforce are established as well. Preventing, detecting and responding to threats from inside and outside attackers alike is a constant technological and organisational challenge.

So, do you really know your employees? Most organisations stop when they have recertification campaigns scheduled and some basic SoD (Segregation of Duties) rules implemented. But that does not really help when, for example, a privileged user with rightfully assigned critical access abuses that access for illegitimate purposes, or a business user account has been hacked.

KYE (Know Your Employee - although this acronym has yet to enter general use) needs to go far beyond traditional access governance. Identifying undesirable behaviour, and ideally preventing it as it happens, requires technologies and processes that are able to review current events and activities within the enterprise network. Unexpected changes in user behaviour and modified access patterns are indicators of either inappropriate behaviour by insiders or that of intruders within the corporate network.

Adequate technologies are on their way into organisations, although it has to be admitted that "User Activity Monitoring" is a downright inadequate name for such an essential security mechanism. Contrary to what the name suggests, it is not meant to implement a fully comprehensive, corporate-wide, personalized user surveillance layer. Every solution that aims at identifying undesirable behaviour in real time needs to satisfy the high standards imposed by many accepted laws and standards, including data protection regulations, labour law and general respect for user privacy.

Nevertheless, the deployment of such a solution is possible and often necessary. To achieve this, such a solution needs to be strategically well-designed from a technical, a legal and an organisational point of view. All relevant stakeholders, from business to IT and from the legal department to the workers' council, need to be involved from day one of such a project. A typical approach means that all users are pseudonymized and all information is processed on the basis of information that cannot be traced back to actual user IDs. Outlier behaviour and inadequate changes in access patterns can then be identified without looking at an individual user. The outbreak of a malware infection or a privileged account being taken over can usually be identified without looking at the individual user. And in the rare case that de-pseudonymization of a user is required, there have to be adequate processes in place. This might include the four-eyes principle for actual de-cloaking and the involvement of the legal department, the workers' council and/or a lawyer.
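The pseudonymization approach just described can be sketched with a keyed hash: the same user always maps to the same token, so behaviour remains correlatable over time, while the mapping back to real IDs lives in a separate, gated component. The names and the simple two-approver check are assumptions made for this sketch, not any product's mechanism.

```python
import hashlib
import hmac

# Key held by a separate, access-controlled component, never by analytics.
PSEUDONYM_KEY = b"rotate-and-protect-this-key"

def pseudonymize(user_id: str) -> str:
    # Deterministic keyed hash: the same user always yields the same token,
    # so access patterns can be analysed without exposing the real ID.
    return hmac.new(PSEUDONYM_KEY, user_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

class DeCloakService:
    """Sketch of gated de-pseudonymization: the pseudonym-to-ID mapping is
    kept apart from the analytics layer and released only after a
    four-eyes approval (simplified here to two distinct approvers)."""

    def __init__(self):
        self.mapping = {}  # pseudonym -> real user id

    def register(self, user_id: str) -> str:
        pseudonym = pseudonymize(user_id)
        self.mapping[pseudonym] = user_id
        return pseudonym

    def decloak(self, pseudonym: str, approvers) -> str:
        # The four-eyes principle: at least two distinct parties must approve.
        if len(set(approvers)) < 2:
            raise PermissionError("four-eyes principle: two approvers required")
        return self.mapping[pseudonym]
```

Keeping the key and the mapping out of the analytics layer is what makes the monitoring data non-attributable in day-to-day operation.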

Targeted access analytics algorithms can nowadays assist in the identification of security issues. Thus they can help organisations get to know their employees, especially their privileged business users and administrators. Correlating this information with other data sources, for example threat intelligence data and Real-Time Security Intelligence (RTSI), might provide the basis for identifying Advanced Persistent Threats (APTs) traversing a corporate network infrastructure from the perimeter through the use of account information and actual access to applications.
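As a toy illustration of such analytics (not any vendor's actual algorithm), a simple per-user z-score over pseudonymized daily access counts already flags the kind of unexpected change in access patterns described above:

```python
from statistics import mean, stdev

def flag_unusual_access(history, today, threshold=3.0):
    """Flag pseudonymous users whose access count today deviates strongly
    from their own baseline (simple z-score sketch).

    history: {pseudonym: [daily access counts]}
    today:   {pseudonym: today's access count}
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline data for a meaningful comparison
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(user, 0)
        if sigma == 0:
            # Perfectly steady baseline: any deviation at all is unusual.
            if observed != mu:
                flagged.append(user)
        elif abs(observed - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```

For example, a user whose counts hovered around 10 per day and who suddenly performs 80 accesses would be flagged, without anyone ever seeing a real user ID.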

KYE will become as important as KYC, but for different reasons. Both rely on intelligent analytics algorithms and a clever design of infrastructure, technology and processes. Both transform big data technology, automation and a well-executed approach towards business and security into essential solutions for sustainability, improved business processes and adequate compliance. We expect that organisations leveraging existing information and modern technology, operationalising both for constant improvement of security and the core business, can draw substantial competitive advantages from that.

GDPR and the post-Brexit UK

The Brexit leave vote will have a substantial influence on the economy inside and outside of the UK. But the impact will be even higher on UK-based, and also on EU-based and even non-EU-based organisations, potentially posing a major threat when it comes to various aspects of business. Especially from the perspective of data protection, security and privacy, the future of data protection legislation within the UK will be of great interest.

When asked for his professional view as a lawyer, our fellow analyst Dr. Karsten Kinast replied with the following statement:

"On the 23rd June, UK carried out a referendum to vote about UK´s EU membership. About 52% of the participants voted for leaving the EU. The process of withdrawal from the EU will have to be done according to Art. 50 of the Treaty on the European Union and will take about two years until the process is completed.

The withdrawal of the UK's membership will also have an impact on data protection rules. First of all, the GDPR will enter into force on the 25th May 2018, so that by this time, the UK will still be in process to leave the EU. This means that UK businesses will have to prepare and be compliant with the GDPR.

Additionally, if UK businesses trade in the EU, a similar framework to that of the GDPR will be required in order to carry out data transfers within the EU member states. The British DPA, ICO, published a statement regarding the existing data protection framework in the UK. According to ICO, 'if the UK wants to trade with the Single Market on equal terms we would have to prove adequacy – in other words UK data protection standards would have to be equivalent to the EU's General Data Protection Regulation framework starting in 2018'.

Currently, the GDPR is the reference in terms of data protection and organizations will have to prepare to be compliant and, even if the GDPR is not applicable to UK, a similar framework should be in place by the time the GDPR enters into force."

So it is adequate to distinguish between the phase before the UK actually leaves the EU and the time afterwards. In the former phase, starting right now, EU legislation will still apply, so in the short term organisations are probably well advised to follow all steps required to be compliant with the GDPR as planned anyway. With the currently surfacing reluctance of the British government to actually initiate the Art. 50 process according to the Lisbon treaty, by delaying the leave notification until October, this first phase might even take longer than initially expected. And we will most likely see the UK still being subject to the GDPR as it comes into effect in May 2018 and before the actual exit.

For the phase after the actual exit, the situation is yet unclear. What does that mean for organisations doing business in and with the UK as soon as the GDPR is in full effect?

  • In case they are UK-based and are only acting locally we expect them to be subject to just the data protection regulations as defined in Britain after the exit process. But any business with the EU will make them subject to the GDPR.
  • In case they are based in the EU, they are subject to the GDPR anyway. In that case, they have to be compliant with the rigid regulations as laid out in the EU data protection regulation.
  • In case they are based outside of the EU but are doing business with the EU as well, they are again subject to the GDPR.
  • We expect the number of companies outside the EU doing business only with a post-Brexit UK (i.e. not with the EU at all) to be limited or minimal. Those would have to comply with the data protection regulations as defined in Britain after the exit process.

Reliable facts for the post-Brexit era are not yet available. Nevertheless, CEOs and CIOs of commercial organisations have to make well-informed decisions and need to be fully prepared for the results of those decisions. An adequate approach, in our opinion, can only be a risk-based one: organisations have to assess the risks they are facing in case of not being compliant with the GDPR within their individual markets. And they have to identify which mitigating measures are required to reduce or eliminate that risk. If there is any advice possible at this early stage, it still remains the same as given in my previous blog post: Organisations have to understand the GDPR as the common denominator for data protection, security and privacy within the EU and outside the EU for the future, starting right now and effective by May 2018 at the latest. Just like Karsten concluded in the quote cited above: To facilitate trading in the common market, the UK will have to provide a framework similar to the GDPR and acceptable to the EU.

So any organisation already having embarked on its journey of implementing processes and technologies to maintain compliance with all requirements as defined by the GDPR should strategically continue doing so, to maintain an appropriate level of compliance by May 2018, no matter whether inside or outside the UK. Organisations which have not yet started preparing for an improved level of security, data protection and privacy (and there are still quite a lot in the UK as well, as recent surveys have concluded) should consider starting to do so today, with the fulfilment of the requirements of the GDPR, adapted to the individual business model, as their main goal.

We expect stable compliance with the regulations as set forth in the GDPR to be a key challenge and an essential requirement for any organisation in the future, no matter whether in the EU, in the UK or outside of Europe. Being a player in the global economy, and even more so in the EU single market, mandates compliance with the GDPR.

Managing the customer journey

Every one of us, whether a security professional or not, is also a part-time online customer or a subscriber to digital services. Providing personal information to a service organisation, a social media platform or a retailer is a deliberate act. This will be even more the case once the upcoming GDPR is in full effect. Ideally, the disclosure of potentially sensitive information should always lead to a win-win situation, with both directly involved parties, the customer and the provider of services, benefiting from the information provided by the end user.

So organisations need to make sure that customer information is managed with the utmost diligence, to the benefit of both the customer and the organisation. That means that the customer identity is to be put at the centre of all processes. And organisations need to understand that there are more sources available within (and outside of) the organisation where information about a single customer is available, providing social, behavioural, interest, transactional and much more data, including historical data. Combining and consolidating this data into a single unified customer profile, while maintaining scalability, security and compliance, is most probably one of the essential challenges organisations will have to solve in the future.

Customers interacting with a service provider or any other internet-facing organisation typically start with a registration process, either from scratch by creating a new account or by reusing and complementing existing third-party account information, e.g. from social logins. From that moment on they are interacting with the system, and thus they implicitly provide a constant flow of information through their behaviour. But Customer Identity and Access Management strategically goes far beyond that. Information about a single specific customer might already be available in the enterprise CRM system, providing in-depth insight into former interactions, e.g. with the helpdesk. Previous purchases or subscriptions will be documented in their respective systems, and more information might be available in the enterprise IAM system (especially when the organisation needs to understand that a customer is also an employee) or the corporate ERP system.

These types of information are valuable when it comes to understanding customer identity as a whole. The actual task of retrieving and leveraging this information should not be underestimated: in many organisations these different systems are run by different teams and different parts of the organisation, and this often leads to so-called information silos. Getting to a unified customer profile necessarily requires breaking up the barriers between those organisational and technical silos. Cross-organisational and cross-functional teams are typically required to consolidate the information already available within a single enterprise. Aligning different sources of information, and different semantics resulting from different business purposes, to get to a meaningful pool of consumer profiles requires expertise from various teams.
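The consolidation step can be illustrated with a small sketch (function and silo names are made up; real-world consolidation additionally needs identity resolution, since silos rarely share a common customer key): per-silo records for one customer are merged into a single profile while tracking where each attribute came from.

```python
def unify_profiles(sources):
    """Merge per-silo records for one customer into a unified profile.

    `sources` is an ordered list of (silo_name, record) pairs; later
    silos win on conflicting attributes, and `provenance` records which
    silo supplied each attribute (useful for audits and consent checks)."""
    profile, provenance = {}, {}
    for silo_name, record in sources:
        for key, value in record.items():
            if value in (None, ""):
                continue  # skip empty attributes instead of overwriting
            profile[key] = value
            provenance[key] = silo_name
    return profile, provenance

# Hypothetical silo records for one customer:
merged, origin = unify_profiles([
    ("crm",     {"name": "Jane Doe", "phone": "123", "mail": ""}),
    ("billing", {"phone": "456", "plan": "premium"}),
    ("iam",     {"employee": True}),  # the customer who is also an employee
])
```

The ordering of the sources encodes a simple trust hierarchy; in practice, per-attribute precedence rules are usually negotiated between the teams owning the silos.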

After having done their "homework" (by exploiting their already existing knowledge about each customer identity), many organisations are also looking into integrating information available from third parties, which means data sources outside of the organisation. Potential sources are manifold: they range from social media (Facebook, Twitter, Google+ and many others, including regional and special-interest social media services) and the reuse of profile data (including likes, recommendations, comments) to sources of commercial marketing data, and from existing sources of Open Data to credit rating organisations.

When it comes to comparing effort and benefit, it becomes obvious that greedily collecting each and every piece of information available cannot be effective. Identifying the right set of information for the right business purpose is one of the major challenges. Having the right information available, for the end user to improve his user experience and for the organisation to support decision-making processes, has to be the key objective. Nevertheless, the definition of an adequate set of "right" information is a moving target that needs to be adjusted during the lifetime of a customer identity and the underlying CIAM system.

However, it must be ensured that the reuse of all the above-mentioned information is only possible when the owner of this data, i.e. the customer, has agreed to the processing of the information for additional purposes. User consent is key when it comes to recombining and analysing existing information.

GDPR now!

The news is already getting quieter around the GDPR, the General Data Protection Regulation as issued by the European Union. Several weeks ago it was discussed in detail in many articles, and background information was provided by many sources, including lawyers and security experts, but in the meantime other topics have taken its place in the news.

But unlike some other topics, the GDPR won't go away by simply ignoring it. In less than two years from now it will reach legally binding status as formal law, for example in Germany. Probably one of the most striking characteristics of the new regulation, and one that is constantly underestimated, is the scope of its applicability: It actually applies in all cases where the data controller, the data processor or the data subject is based in the EU. This includes all data processors (e.g. cloud service providers) or data controllers (e.g. retailers, social media, practically any organisation dealing with personally identifiable information) which are outside the EU, especially for example those in the US. They, however, seem to be gaining the lead in taking the right first steps in comparison with European organisations.

So the GDPR will be a major game changer for a lot of customer facing services. For many organisations changing the processes, the applications and the infrastructure landscape to be compliant with the regulations of the upcoming new requirements as laid out in the GDPR will be a massive challenge.

The following image focuses on just some of the "highlights" of the European General Data Protection Regulation. But apart from this, each and every organisation should review the current version of the text, which goes far beyond that. It is available on the Internet, e.g. here, and detailed and profound commentary is available e.g. here. My fellow analyst Dr. Karsten Kinast provided a great short wrap-up during his keynote at EIC 2016 in Munich earlier this year.


While two years sounds like a long period of time, actually the opposite is true. The requirements imposed by the GDPR are, at least partially, substantially different from existing national data protection regulations. Every organisation has to identify which steps are required to implement proper measures to comply with these regulations for its own processes and business models. When looking at the amount of time required to implement all the changes identified, somewhat less than two years no longer appears to be plenty of time.

Unfortunately, industry associations in particular appear unwilling to supply adequate support or advice and often enough limit themselves to commonplace remarks. Instead of providing appropriate guidance, often the opposite is done by repeatedly praising Big Data as the basis for next-generation business models. While this might nevertheless be true for some organisations, it can only be true when being compliant with the upcoming GDPR in every relevant respect.

Many important decisions will in the end have to be left to the courts. This might turn out to be a difficult challenge, with only little practical advice being available as of now. But doing nothing is not an option at all.

Compliance with legal or regulatory requirements is rarely considered a value in itself, but it is - and will be even more so - a sine qua non when it comes to data protection, customer consent and privacy very soon. On the other hand: Assuring a high level of security and consumer privacy ahead of the legal requirements can be a competitive advantage. So if you have not yet started making your organisation and your business ready for the GDPR and its upcoming regulations, today might be a good day to take the first steps.

Challenges of large-scale IAM environments

Long before analysts, vendors or journalists were coining terms like Digitalization, Identity Relationship Management or Customer IAM, several industries were already confronted with large-scale Identity and Access Management (IAM) environments. Due to the character of their businesses they were challenged with storing huge amounts of identity data while serving massive volumes of both read and write requests at constantly high access speed. Providers of telecommunication infrastructure like voice or data services, in particular, typically handle identity data for several million subscribers. This information is leveraged for various purposes: One highly essential task focuses on controlling which subscribers are permitted to access which services and keeping track of which resources they have used. This is typically done in highly specialized AAA (Triple-A) systems providing real-time Authentication (who?), Authorization (what?) and Accounting (how many?) services.
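The three AAA questions can be sketched in a few lines. This is a deliberately naive illustration with made-up names; a real AAA system stores only hashed credentials and answers such queries at massive scale and in real time.

```python
class TripleA:
    """Toy AAA sketch: Authentication (who?), Authorization (what?) and
    Accounting (how many?) for subscriber access to services."""

    def __init__(self, credentials, entitlements):
        self.credentials = credentials    # subscriber -> secret (hashed in reality)
        self.entitlements = entitlements  # subscriber -> set of booked services
        self.usage = {}                   # (subscriber, service) -> units used

    def authenticate(self, subscriber, secret):
        # Who? Verify the subscriber is who they claim to be.
        return self.credentials.get(subscriber) == secret

    def authorize(self, subscriber, service):
        # What? Check that the service is covered by the subscription.
        return service in self.entitlements.get(subscriber, set())

    def account(self, subscriber, service, units):
        # How many? Track consumed resources for later billing.
        key = (subscriber, service)
        self.usage[key] = self.usage.get(key, 0) + units
        return self.usage[key]
```

The accounting records accumulated here are exactly the data that later gets merged with CRM information to produce the monthly bill.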

As this forms the basis for the actual core business processes, performance, availability, reliability and security are of utmost importance. Therefore, telco operators have always been at the forefront of designing and implementing highly redundant, scalable, sustainable special-purpose IAM systems, such as directory or database systems capable of fulfilling their unique requirements.

But several other systems traditionally need access to various subsets of subscriber (= customer) data: Customer Relationship Management (CRM) systems are the foundation for sales and help desk processes, while this information needs to be merged with AAA data to produce e.g. the monthly bills. But apart from the traditional help desk systems, where customers call and want to interact with helpdesk personnel, the service landscape has changed dramatically: Many telco operators have transformed into full-service providers of communication and entertainment services, e.g. IPTV. In parallel, subscribers have more and more gotten used to online portals for self-service access to their operator's product portfolio. Having online access to their billing information, while being able to change, extend or cancel their subscriptions, has become the new normal. This of course requires strong security mechanisms, especially rock-solid authentication and authorisation functionalities, and the same is true for ordering immediate access to streaming a blockbuster movie or gaining access to live coverage of a favourite sports event directly from the set-top box. These devices, among many others (tablets, mobile phones or even gaming consoles), represent the identities of individual subscribers and are of course further sources of additional billing information as well.

Providing a large-scale IAM system comes with many promises and requirements: gaining better insight into subscriber data through big data analytics can lead to efficient and agile business decisions and new products. The resulting information may be even more valuable when an operator's own subscriber data is intelligently merged with information provided by third parties (e.g. financial data, market research) and even social data, e.g. from Facebook, Google or Twitter logins. On the other hand, the privacy, security and reliability of the sensitive information that subscribers entrust to their operators is highly important. One example is the use of mobile devices for mobile, online payments (already done, for example, by Swisscom with their Easypay system) or for secure mobile authentication (e.g. as a second factor) in the not-so-distant future.

In large-scale IAM environments we observe that the traditional use case scenarios don’t go away; instead, they are constantly complemented with completely new requirements and business models. New technical requirements (new access methods, new devices, optimized performance, new data processing such as big data analytics and much more) result from these developments, and they often introduce the need for compliance with new sets of legal or regulatory requirements. All of this has to be implemented adequately in parallel, while existing requirements continue to be fulfilled, usually with rising numbers of subscribers and increasing volumes of access requests.

With the traditional business model of providing mere access to voice or data services becoming less and less relevant, telco operators have to constantly reinvent themselves and their business models. Existing and evolving IAM systems for large numbers of customers and subscribers might turn out to be one of their biggest challenges, but also their most significant asset for providing added value to their subscribers and to new customer groups in the future.

Why we need adaptive authentication everywhere - not just in eBanking

Most probably, the first thing that comes to mind when you are asked what should be highly secure online is electronic banking. Your account data, your credit card transactions and all the various types of transactions that can be executed online today, ranging from simple status queries to complex brokerage, require adequate protection. As the criticality of the individual transactions varies substantially, so does the required level of protection. This makes electronic banking the perfect use case for explaining and demonstrating adaptive authentication.

But creating an access matrix by mapping a set of client-side attributes on one axis to a set of application functionality of varying criticality on the other makes perfect sense for almost any application available online today. The analysis of a user's location, the time of day and the time zone, and the operating system or browser type and version on the device used is vital information for identifying heightened access risk. More sophisticated context data might be derived from the number of consecutive failed login attempts combined with the actual time required for the successful login (several failed attempts followed by a very fast login might indicate an automated brute-force attack). When defined and implemented appropriately, these mechanisms are substantial improvements for online privacy and security.
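The evaluation of such context attributes can be sketched as a simple risk-scoring function. This is a hypothetical illustration: the attribute names, weights and thresholds below are invented for the example and would have to be tuned to a real application's risk model.

```python
# Hypothetical sketch of context-based risk scoring for adaptive
# authentication. Attribute names, weights and thresholds are
# illustrative assumptions, not a reference implementation.

TRUSTED_COUNTRIES = {"DE", "CH", "AT"}

def risk_score(ctx: dict) -> int:
    """Accumulate a simple risk score from client-side context attributes."""
    score = 0
    if ctx.get("country") not in TRUSTED_COUNTRIES:
        score += 2  # unusual location
    if ctx.get("browser_outdated", False):
        score += 1  # outdated client software
    if ctx.get("failed_attempts", 0) >= 3 and ctx.get("login_seconds", 99) < 1:
        score += 3  # very fast login after several failures: possible brute force
    if not ctx.get("known_device", True):
        score += 1  # first contact from this device
    return score

def required_authentication(ctx: dict) -> str:
    """Map the accumulated score to an authentication decision."""
    score = risk_score(ctx)
    if score >= 4:
        return "deny"
    if score >= 2:
        return "step-up"  # e.g. request a second factor
    return "password"

print(required_authentication({"country": "DE", "known_device": True}))  # password
```

The same request from an untrusted country, an unknown device and a suspicious login pattern would accumulate enough risk to be denied outright, while a single anomaly only triggers step-up authentication.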

Maybe the most important argument for having adaptive authentication in almost every other application as well comes as a surprise: it is ease of use. Many applications provide certain functionality that requires strong authentication in general, and other functionality that should be protected adequately whenever context information suggests a heightened risk. The majority of functionality, however, is usually of lower criticality and should therefore be available without customers / citizens / members / subscribers / … having to provide strong authentication data, for example via two-factor authentication.

  • Many users log into their favourite shopping portals on a regular basis just for “research purposes”, without actually buying anything in that session (i.e. the digital equivalent of window shopping). As long as no purchase is made, no additional authentication should be required in most cases.
  • The same can apply to accessing basic functionality (e.g. read-only access to general information of the local municipality) within an eGovernment site. As long as no PII or other sensitive data is retrieved, let alone modified, a simple authentication on the basis of a username and a reasonably strong password should be sufficient.
  • Watching a cartoon suitable for children on your favourite video streaming service might be fine using the usually preconfigured basic authentication, but an attempt to change the parental control settings obviously has to trigger a request for additional authentication.

But we are of course not only talking about end users accessing more or less public online services in different scenarios. In corporate environments, employees, partners and external workforce access enterprise resources located on-premises or deployed in diverse cloud or hybrid infrastructures. Given the ever-changing landscape of user devices, they will want to access corporate applications and services from various types of devices, ranging from corporate notebooks to all kinds of personal devices, including mobile phones and tablets. Every organisation can define its own policies governing whether individual types of access are permitted in general and which rules apply for which type and criticality of functionality. The decision whether access can be granted at all, or whether e.g. a step-up authentication is required, can be made based on a large set of information available at runtime. This information can include the general user group (employee, external workforce, partners, freelance sales partners and even customers), detailed identity data of the individual user, the accessing device (e.g. type, operating system, browser, versions), the type and security of the current network access (e.g. mobile access via a cellular network, originating country, secured by VPN or not), local time and time zone, and many more attributes. With all this information available at runtime, it is clear that the decision for accessing the same resource by the same user can differ between a secured national connection from a corporate end-user device and a connection from an outdated Android device originating from an untrusted country without line encryption.
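Such a runtime policy can be sketched as a small set of ordered rules over the access context. The user groups, device and network categories, criticality levels and the rules themselves are invented here for illustration; a real organisation would define its own.

```python
# Illustrative sketch of a runtime access policy for corporate resources.
# Groups, categories and rules are assumptions made for this example.

from dataclasses import dataclass

@dataclass
class AccessContext:
    user_group: str            # e.g. "employee", "external", "partner"
    device: str                # e.g. "corporate", "byod"
    network: str               # e.g. "corporate", "vpn", "public"
    resource_criticality: str  # "low", "medium", "high"

def access_decision(ctx: AccessContext) -> str:
    # Example rule: external workforce never reaches high-criticality resources.
    if ctx.user_group == "external" and ctx.resource_criticality == "high":
        return "deny"
    # Example rule: unprotected public networks always trigger step-up.
    if ctx.network == "public":
        return "step-up"
    # Example rule: personal devices need step-up for anything above low criticality.
    if ctx.device == "byod" and ctx.resource_criticality != "low":
        return "step-up"
    return "allow"

# Same user, same resource - different decisions depending on context:
print(access_decision(AccessContext("employee", "corporate", "vpn", "high")))  # allow
print(access_decision(AccessContext("employee", "byod", "public", "high")))    # step-up
```

The two calls at the end demonstrate the point made above: identical user and resource, but the decision flips from "allow" to "step-up" once the device and network context changes.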

The examples given above clearly show the advantages of adaptive authentication in many different use case scenarios. Adaptive authentication provides both: a) an improved user experience, by not requesting additional authentication whenever it is not required; and b) an adequate level of protection for user security and privacy whenever context information and/or the criticality of the requested functionality demand it. Both are perfect reasons for you, either as the provider of online services or as the person responsible for secure and modern corporate infrastructure, to consider implementing adaptive authentication soon.

CSP vs. tenant - Understanding shared responsibilities

Running an application or a service implies covering a large set of diverse responsibilities. Many requirements have to be fulfilled: the actual infrastructure has to be provided, which means that bare metal (computers, network and storage devices) has to be installed and made available as the foundation layer. On the next logical level, operating systems have to be installed and appropriately maintained, including patches and updates, and appropriate mechanisms for virtualization have to be implemented.

Every layer of the provided infrastructure has to be implemented in an adequately scalable, stable, available and accessible way, at an at least sufficient level of performance. Service level agreements have to be defined and met, which involves responsibilities for availability, accessibility and, again, scalability. This also requires the allocation of appropriate administrative or user services, e.g. implementing help desks and/or self-service infrastructure.

Security is of utmost importance for every application, service or infrastructure. This includes, for example, platform security, the reliable and robust management of users and privileged accounts and their individual roles, fine-grained access control, and network security including intrusion detection. In a shared, virtualized environment this also imposes strong requirements on the separation of individual platforms operated in parallel and on the isolation of software, processes and data across the network, storage and computing environments. The provisioning of appropriate management interfaces, the implementation of change processes and the maintenance of stable, reliable and auditable systems operation procedures are key responsibilities within an application system or infrastructure environment.

The aspect of overall application security defines another set of responsibilities, focusing on logical and functional aspects and the business processes implemented. Ensuring all required aspects of the IT security of an application or infrastructure system, including the confidentiality, integrity and availability of the computer system, as well as a proper implementation of the underlying business processes, are important challenges no matter which deployment scenario is chosen.

Whenever an application or a service runs on premises, determining who is responsible for which aspects of the infrastructure is typically a straightforward task. All vital building blocks, ranging from the infrastructure to the operating system and from the application modules to the stored data and the underlying business processes, are the responsibility of the organization itself, i.e. the internal customer. Many organizations assign individual responsibilities and tasks along the lines of the ITIL service management processes, with typical roles like the "application owner" or the "system owner" reflecting different functional aspects and responsibilities within the organization.

Moving services into the cloud or creating new services within the cloud changes the picture substantially and introduces the Cloud Service Provider (CSP) as a new stakeholder in the network of functional roles already established. Cloud services are characterized by the level of services provided; individual services in the cloud are organized as layers building upon each other. Although the terms are not used consistently across different CSPs, cloud service offerings are often characterized as e.g. "Infrastructure as a Service" (IaaS) or "Platform as a Service" (PaaS). Depending on which parts of the services are provided by the CSP on behalf of the customer and which parts are implemented by the tenant on top of the provided service layers, the responsibilities are assigned to either the CSP or the tenant.

The following image gives a rough overview of which responsibilities are assigned to which partner, within which cloud service model, under a cloud service provisioning contract. While an "Infrastructure as a Service" (IaaS) scenario places only the responsibility for the infrastructure on the Cloud Service Provider (CSP), the only responsibility left to the tenant in a "Software as a Service" (SaaS) scenario is the responsibility for the actual business data. This is obvious, as data ownership within an organisation is an inalienable responsibility and thus cannot be delegated to anybody outside the actual organisation.

[Figure: Responsibilities of CSP and tenant per cloud service model (cloudservicemodel.jpg)]

Shared responsibilities between the Cloud Service Provider (CSP) and the tenant are a key characteristic of every deployment scenario of cloud services. The above image gives a first idea of this new type of shared responsibility between service providers and their customers. For every real-life cloud service model scenario, all identified responsibilities have to be clearly and individually assigned to the adequate stakeholder. This assignment might look drastically different in scenarios where only infrastructure is provided, for example plain storage or computing services, compared to scenarios where complete "Software as a Service" (SaaS, e.g. Office 365) or even "Business Process as a Service" (BPaaS) is provided in the cloud, for example an instance of Salesforce CRM. A robust and complete identification of which responsibilities are assigned to which contract partner within a cloud service scenario is the prerequisite for an appropriate service contract between the CSP and the tenant.
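The responsibility split per service model can be expressed as a simple matrix. The layer names and the exact split below are a simplified assumption for illustration; a real CSP contract must spell out every responsibility individually.

```python
# Rough sketch of the shared-responsibility matrix discussed above.
# Layer names and the split are simplified assumptions for illustration.

LAYERS = ["infrastructure", "operating system", "middleware", "application", "data"]

RESPONSIBILITY = {
    #               infra      OS        middleware  application  data
    "on-premises": ["tenant", "tenant", "tenant",   "tenant",    "tenant"],
    "IaaS":        ["CSP",    "tenant", "tenant",   "tenant",    "tenant"],
    "PaaS":        ["CSP",    "CSP",    "CSP",      "tenant",    "tenant"],
    "SaaS":        ["CSP",    "CSP",    "CSP",      "CSP",       "tenant"],
}

def responsible_party(model: str, layer: str) -> str:
    """Look up who is responsible for a given layer in a given service model."""
    return RESPONSIBILITY[model][LAYERS.index(layer)]

# Data ownership can never be delegated - the tenant keeps it in every model:
assert all(row[-1] == "tenant" for row in RESPONSIBILITY.values())
print(responsible_party("SaaS", "data"))  # tenant
```

The final assertion encodes the point made above: regardless of the service model, responsibility for the business data always stays with the tenant.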

This article originally appeared in the KuppingerCole Analysts' View newsletter.

Different, better and compliant – Business-orientated Access Governance

Identity Management and Access Management are on their way into the first line of defence when it comes to enterprise security. With changing architecture paradigms, and with the identities of people, things and services at the core of upcoming security concepts, maintaining Identity and Access Governance is increasingly becoming a key discipline of IT security. This is true for traditional Access Governance within the enterprise, and it will become even more true for the digital business and the identities of customers, consumers, partners and devices.

Many organizations have already established Access Governance processes as a toolset for achieving compliance with regulatory requirements and for mitigating access-related risks on a regular basis. Identity and Access Management (IAM) processes accompany every identity through its complete life cycle within an organisation: the management of corporate identities and their access to resources combines IAM technology with the application of well-defined processes and policies. Traditional ways of adding Access Governance to these processes include the implementation of well-defined access request and approval workflows, the scheduled execution of recertification campaigns and the analysis of assigned access rights for violations of Segregation of Duties (SoD) requirements.

While the initial motivation for creating such a program is typically the need to be compliant with regulatory requirements, mature organisations realize that fulfilling such requirements is also a business need and a fundamental benefit in general. The design and implementation of a well-thought-out, dynamic, efficient, flexible and swift Identity and Access Management is the foundation layer for an efficient and proactive Access Governance system.

This requires appropriate concepts for both management processes and entitlements: lean and efficient roles lead to simplified assignment rules. Intelligent approval processes, including pre-approvals as the default for many entitlements, reduce manual approval work and allow for easier certification. Embedding business know-how within the actual entitlement definitions allows more and more processes to be specified in a way that no longer requires any administrative or business interaction.

Aiming at defining and implementing automatable access assignment and revocation processes in fact reduces the need for various Access Governance processes. Once the processes are designed in a manner that prevents the assignment of undesirable entitlements to identities and ensures that entitlements no longer needed are revoked, they make many checks and controls obsolete. On the other hand, the immediate and automated assignment of entitlements whenever required fulfils the business requirement of making people effective and efficient from day one. Subsequent business process changes, and thus changes in job descriptions and their required access rights, can be propagated automatically without further manual steps.
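Propagating a job change automatically can be sketched as computing the entitlement delta between the old and the new role. The role and entitlement names below are invented for this example.

```python
# Hedged sketch: automatic propagation of a job change by computing the
# entitlement delta between roles. Role and entitlement names are invented.

ROLE_ENTITLEMENTS = {
    "sales-rep":     {"crm-read", "crm-write", "quote-create"},
    "sales-manager": {"crm-read", "crm-write", "quote-approve", "report-read"},
}

def propagate_job_change(current: set, old_role: str, new_role: str) -> set:
    """Revoke entitlements only the old role granted; grant those the new role requires."""
    to_revoke = ROLE_ENTITLEMENTS[old_role] - ROLE_ENTITLEMENTS[new_role]
    to_grant = ROLE_ENTITLEMENTS[new_role]
    return (current - to_revoke) | to_grant

entitlements = ROLE_ENTITLEMENTS["sales-rep"]
entitlements = propagate_job_change(entitlements, "sales-rep", "sales-manager")
print(sorted(entitlements))  # ['crm-read', 'crm-write', 'quote-approve', 'report-read']
```

No manual approval step is involved: the revocation of "quote-create" and the grant of the manager entitlements both follow directly from the role definitions.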

Applying risk assessments to each individual entitlement is a crucial prerequisite for analysing assigned access. Once the criticality of all access is understood, a risk-oriented approach towards recertification can be chosen (i.e. reviewing high-risk entitlements more often and faster), and time-based assignment of critical entitlements can be enforced by default.
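A risk-oriented recertification schedule can be sketched as a mapping from risk level to review interval, with high-risk entitlements additionally time-boxed by default. The intervals are illustrative assumptions, not recommendations.

```python
# Sketch of risk-oriented recertification scheduling: higher-risk
# entitlements are reviewed more often and assigned only temporarily.
# All intervals are illustrative assumptions.

from datetime import date, timedelta
from typing import Optional

RECERT_INTERVAL_DAYS = {"low": 365, "medium": 180, "high": 30}

def next_recertification(risk: str, last_review: date) -> date:
    """Schedule the next review depending on the entitlement's risk level."""
    return last_review + timedelta(days=RECERT_INTERVAL_DAYS[risk])

def assignment_expiry(risk: str, assigned: date) -> Optional[date]:
    """High-risk entitlements are time-boxed by default; others are open-ended."""
    if risk == "high":
        return assigned + timedelta(days=90)
    return None

print(next_recertification("high", date(2024, 1, 1)))  # 2024-01-31
```

Low-risk entitlements then only surface once a year in a campaign, keeping the reviewers' attention focused on the entitlements that actually carry risk.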

Well-defined Access Management and Identity Management life cycle processes can help to ease the burden of the actual Access Governance exercises. Before looking into further, often costly and tedious measures, redesigning and rethinking assignment and revocation processes in an intelligent manner, within a lean entitlement model, can help improve efficiency and gain security.

This article originally appeared in the KuppingerCole Analysts' View newsletter.
