Blog posts by Matthias Reinwarth
Traditional endpoint and infrastructure security approaches tackle changes to the OS, applications and communication by monitoring them through dedicated solutions installed as agents on the actual system. These solutions often search for specific violations and act upon predefined whitelisted applications and processes or identified, blacklisted threats.
Due to their architecture, virtualization platforms and cloud infrastructures have completely different access to security-relevant information. Intelligently executed, they can correlate real-time data with current threats. But much more is possible from the central and unique perspective these virtualized architectures allow: observing the behavior of components in the software-defined network, comparing it with their expected behavior and identifying unexpected deviations allows the detection and treatment of previously unknown threats, up to zero-day attacks.
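The core idea of this behavior-based detection can be illustrated with a minimal sketch. All names and flows below are hypothetical, and a real product would use statistical models rather than an exact-match baseline; the point is only the two phases: learn the flows a workload normally produces, then flag anything outside that baseline.

```python
from collections import Counter

class BehaviorBaseline:
    """Learn which (source, destination, port) flows a workload normally
    produces, then flag anything outside that learned baseline."""

    def __init__(self):
        self.known_flows = Counter()
        self.learning = True

    def observe(self, src, dst, port):
        flow = (src, dst, port)
        if self.learning:
            # learning phase: record expected behavior
            self.known_flows[flow] += 1
            return None
        if flow not in self.known_flows:
            # enforcement phase: unexpected deviation from the baseline
            return f"ALERT: unexpected flow {src} -> {dst}:{port}"
        return None

baseline = BehaviorBaseline()
baseline.observe("app-vm", "db-vm", 5432)   # learning phase
baseline.learning = False                   # switch to enforcement
alert = baseline.observe("app-vm", "evil.example", 443)
```

Note that this approach needs no signature for the malicious flow: anything the workload never did during learning is suspect, which is what makes previously unknown threats detectable.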
Manufacturers such as Citrix and VMware are working at full speed to provide high-performance, integrated security infrastructures as part of their platforms. These may be delivered, for example, not only as a component of the hypervisor, but also as a component of a hybrid security architecture spanning cloud, virtualization and bare metal.
By going beyond traditional “known good” and “known bad” approaches through blacklisting and whitelisting, such solutions provide an intelligent approach to infrastructure security. Capturing the actual runtime behavior of existing software systems to learn expected and appropriate behavior, then applying algorithmic control and monitoring in later phases, has the potential to cover a vast number of systems, including homegrown and enterprise-critical ones. Earlier this year, KuppingerCole published an Executive View research document on VMware AppDefense as a representative of this innovative security approach. And just this week VMware announced the availability of AppDefense in EMEA as well as extended capabilities to protect containerized workloads.
When legal laypersons (like me) read legal texts and regulations, they often miss clear and obligatory guidelines on how to implement them in practice. This is not least due to the fact that laws are generally designed to last and are not directly geared to concrete measures. Such texts and provisions regularly contain references to the respective "state of the art".
For example, it is obvious that detailed requirements on how companies should implement the protection of the privacy of customers and employees cannot necessarily be found in the EU General Data Protection Regulation (GDPR). The appropriate implementation of such requirements is a considerable challenge and offers substantial scope for interpretation, not least when having to decide between "commercially sensible" and "necessary".
While many organizations currently focus on the implementation of the GDPR, BaFin (the German Federal Financial Supervisory Authority, "Bundesanstalt für Finanzdienstleistungsaufsicht") published a revised version of its "Minimum Requirements for Risk Management" ("Mindestanforderungen an das Risikomanagement", MaRisk). Often unknown outside of the financial sector, this regulatory document provides a core framework for the overall conduct of financial business in Germany and, subsequently, worldwide. MaRisk concretizes § 25a paragraph 1 of the German Banking Act ("Kreditwesengesetz", KWG) and is therefore its legally binding interpretation.
The new version of MaRisk has been extended to include a requirements document that deals with its concrete implementation in banking IT, as a concretisation of MaRisk itself, so to speak. This gives financial institutions clear and binding guidelines that become valid without a long-term implementation period. This document, entitled "Supervisory Requirements for IT in Financial Institutions" ("Bankaufsichtliche Anforderungen an die IT", BAIT), covers a large number of important topics in the implementation of measures to meet the IT security requirements for banks.
It does this by describing (and calling for) an appropriate technical and organizational design of IT systems for financial services. Particular attention has to be paid to information security requirements. It aims at improving IT service continuity management and information risk management and defines how new media should be handled appropriately. Beyond pure technology, a variety of measures are designed to create an enterprise risk culture and to increase employee awareness of IT security and risk management. And it includes specific requirements for modernizing and optimizing the bank's own IT infrastructure, but also gives clear advice regarding the outsourcing of IT (think: cloud).
Financial institutions must define and implement an information security organization, in particular by appointing an information security officer. Adequate resource planning to support the defined information security level must ensure that this agreed security level can actually be achieved.
For national and international banks, meeting these requirements is a substantial challenge, in particular due to their immediate applicability. But should you be interested in these requirements if you are not active in Germany, or perhaps not a bank at all?
From my point of view: Yes! Because it is not easy to find such clear and practice-oriented guidelines for an appropriate handling of IT security within the framework of regulatory requirements. And it is to be expected that similar requirements will become increasingly relevant in other regions and sectors in the future.
KuppingerCole will continue to monitor this topic in the future and integrate the criteria of the BAIT as a relevant module for requirements definitions in the area of enterprise IT security.
In May 2017, my fellow KuppingerCole analyst Mike Small published the Executive Brief research document entitled “Six Key Actions to Prepare for GDPR” (then and now free to download). This was published almost exactly one year before the GDPR takes full effect and outlines six simple steps needed to adequately prepare for this regulation. “Simple” here means “simple to describe”, but not necessarily “simple to implement”. However, while time has passed since then, and further regulations and laws are gradually gaining additional importance, properly ensuring consumers’ privacy remains a key challenge today.
An even briefer summary of the recommendations provided by Mike is: (1) find personal data in your organization, (2) control access to it, (3) store and process it legally and fairly, e.g. by obtaining and managing consent, and (4) do all this accordingly in the cloud as well. (5) Prevent a data breach, but be properly prepared for what to do should one occur. And finally, (6) implement privacy engineering so that IT systems are designed and built from the ground up to ensure data privacy.
While tool support for these steps was not overwhelming back then, things have changed in the meantime. Vendors inside and outside the EU have understood the key role they can play in supporting and guiding their customers on their path to compliance by providing built-in and additional controls in their systems and platforms. Compliance and governance are no longer just ex-post reports and dashboards (although these are still essential for providing adequate evidence). Applications and platforms in daily use now provide actionable tools and services to support privacy, data classification, access control, consent management, and data leakage prevention.
One example: Microsoft’s Office and software platforms continue to be an essential set of applications for almost all organizations, especially in their highly collaborative and cloud-based incarnations with the suffix 365. Just recently, Microsoft announced the availability of a set of additional tools to help organizations implement an information protection strategy with a focus on regulatory and legal requirements (including EU GDPR, ISO 27001, ISO 27018, NIST 800-53, NIST 800-171, and HIPAA) across the Microsoft 365 platforms.
For data, processes and applications running within their ecosystems, these tools support the implementation of many of the steps described above. By automatically or semi-automatically detecting and classifying personal data relevant to the GDPR, the process of identifying where this kind of data is stored and processed can be simplified. Data protection across established client platforms as well as on premises is supported through labeling and access control. This labeling mechanism, together with Azure Information Protection and Microsoft Cloud App Security, extends the reach of stronger data protection into the cloud.
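What automatic detection and classification of personal data involves can be illustrated, at its very simplest, with a short sketch. The patterns and labels below are hypothetical and deliberately naive; production classifiers use far richer, context-aware matching (names, addresses, national IDs, and so on):

```python
import re

# Hypothetical patterns; real classifiers are far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(text):
    """Return the set of personal-data categories detected in a text."""
    return {label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

labels = classify("Contact jane.doe@example.com, IBAN DE89370400440532013000")
```

Once documents carry such labels, downstream controls (access restrictions, retention, leakage prevention) can key off the classification instead of inspecting the raw content again.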
An important component on an enterprise level is Compliance Manager, which is available for Azure, Dynamics 365, as well as Office 365 Business and Enterprise customers in public clouds. It enables continuous risk assessment processes across these platforms, deriving individual and specific compliance scores from weighted risk scores and implemented controls and measures.
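The principle of deriving a compliance score from weighted risks and implemented controls can be sketched in a few lines. The weights and statuses below are purely illustrative and not Compliance Manager's actual scoring model:

```python
def compliance_score(controls):
    """Weighted compliance score in percent. Each control is a pair of
    (weight = risk relevance, status = implementation degree 0..1)."""
    total = sum(weight for weight, _ in controls)
    achieved = sum(weight * status for weight, status in controls)
    return round(100 * achieved / total, 1)

# Illustrative values: one fully, one half, one not implemented.
score = compliance_score([(3, 1.0), (2, 0.5), (1, 0.0)])  # -> 66.7
```

The value of such a score lies less in the single number than in tracking it continuously: a falling score flags where new risks or unimplemented controls have appeared.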
In your organization’s ongoing journey to achieve and maintain compliance with the GDPR as well as other regulations, you need your suppliers to become your partners. In this respect, other vendors have announced the provision of tools and strategies for several other applications, as well as virtualization and infrastructure platforms, ranging from VMware to Oracle and from SAP to Amazon. Leveraging their efforts and tools can greatly improve your strategy towards implementing continuous controls for privacy and security.
So, if you are using platforms that provide such tools and services, you should evaluate their use and benefit to you and your organization. Where appropriate, embed them into your processes and workflows as fundamental building blocks as part of your individual strategy for compliance. There is not a single day to waste, as the clock is ticking.
You have heard it all before: May 25th, 2018, enormous fines, "you have to act now", the "right to be forgotten", DPO and breach notification. Every manufacturer whose marketing database contains your data will send you information, whitepapers, webinars, product information and reminders about GDPR. And of course they can “help” you get to compliance. So you have set up a filter in your mail client that sorts GDPR messages directly into spam, and #gdpr is muted in your Twitter client.
Because you have started your journey towards compliance to GDPR early? Compliance activities have long been established and your employees are informed? Consent management is not just theory? Data Leakage is prevented, monitored, detected and if it does occur, communicated appropriately?
But there might still be a good reason to read on: unlike other regulations, there is no regular inspection of compliance with the requirements. Rather, individuals (including customers, employees or other relevant data subjects) and the competent supervisory authorities can make enquiries if alleged or actual omissions or offences are to be investigated. However, as yet there is no proof of GDPR compliance in the form of a regular and permanent seal of quality.
It is difficult to identify sufficient indicators for good preparation. Yes, vendors and integrators provide some basic questionnaires… But you might still need a neutral set of criteria determining the maturity level of your organization's readiness in the area of compliance with regulatory or industry-specific regulations or frameworks. To support such reviews, KuppingerCole provides Maturity Level Matrices that are specifically targeted at distinct areas of the IT market, in this case GDPR readiness.
Assessing the quality and maturity of the controls, systems and processes implemented by your organization is essential. Given the level of agility required by business and market demands, this assessment needs to be executed on a regular basis. Continuous improvements are essential to achieve an adequate level of compliance in all key areas of the GDPR.
To achieve the highest level 5 of GDPR maturity it is essential to continuously measure GDPR readiness to enable an organization to understand their status quo, document it and, if possible, realize the potential benefits of investing in improving data protection. Then you might happily ignore further GDPR-related blogposts.
The KuppingerCole Maturity Level Matrix for GDPR readiness provides neutral criteria exactly for that purpose. Find it here for download.
And get in touch with us if you feel that an independent assessment (along the lines of exactly the same maturity levels) might be even more meaningful.
Today, companies increasingly operate on the basis of IT systems and are thus dependent on them. Cyber risks must therefore be understood as business risks. The detection and prevention of cyber security threats and appropriate responses to them are among the most important activities to protect the core business from risks.
In practice, however, many challenges arise here. The requirement to arrive at a uniform and thus informed view of all types of business risks often fails due to a multitude of organisational, technical and communication challenges:
Technical risk monitoring systems in the enterprise (e.g. systems for monitoring compliance with SoD rules or systems for monitoring network threats at the outer edge of an enterprise network) are often extremely powerful in their specific area of application. Interoperability across these system boundaries usually fails due to a lack of a common language (protocols) or of common semantics for the information to be exchanged (uniform risk concepts and metrics).
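The missing "common language" can be made concrete with a small sketch: before findings from different monitoring systems can be correlated, their native scores must be normalized to one shared metric. The system names and scales below are hypothetical:

```python
# Hypothetical normalizers: each monitoring system reports risk on its own
# scale; mapping everything to a shared 0..1 metric is a precondition for
# correlating findings across system boundaries.
NORMALIZERS = {
    "sod_monitor":  lambda v: {"low": 0.2, "medium": 0.5, "high": 0.9}[v],
    "network_ids":  lambda v: v / 10.0,    # reports severity 0..10
    "vuln_scanner": lambda v: v / 100.0,   # reports severity 0..100
}

def unified_risk(findings):
    """findings: list of (system, native_score); return the highest
    normalized risk across all contributing systems."""
    return max(NORMALIZERS[system](score) for system, score in findings)

risk = unified_risk([("sod_monitor", "high"), ("network_ids", 7)])  # -> 0.9
```

Taking the maximum is only one possible aggregation; the essential step is that heterogeneous findings become comparable at all.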
The same is happening within large organizations: although we have observed this trend for only a few years, it leads to independently operating IT operations teams, IT security teams and (cyber) governance teams that focus on individual tasks and solutions for individual, yet very similar problems. They typically act without adequate integration into a corporate security strategy or a consolidated communication approach for the joint, coordinated management of risks. And they do this without correlating their results to determine a comprehensive IT security maturity, and thus without identifying the overall risk situation of the company.
Management boards and executives must act and react on the basis of incomplete and mostly very technical data, which can only lead to inadequate and incomplete results. The implicit link between cyber risks and business risks is lost when only individual aspects of cyber security are considered. Management decisions made on the basis of this information are usually far from adequate and efficient.
The only way to solve this problem is to move from tactical to strategic approaches. Recently the term “Cyber Risk Governance” has been coined to describe holistic solutions to this problem, covering organization, processes and technologies. More and more companies and organizations are realizing that cyber risk governance is a challenge that needs to be addressed at management level. Cyber security and regulatory compliance are strong drivers for rethinking and redesigning a mature approach to improve cyber resilience.
This requires an adequate strategic approach instead of tactical, more or less unplanned ad hoc measures. A strong risk governance organisation, a strategic framework for a comprehensive cyber risk governance process and related technological components must underpin it. This can only be achieved by bundling corporate expertise, taking a holistic view of the overall risk situation and understanding the sum of all risk mitigation measures implemented.
If the situation described above sounds familiar, read more about “Cyber Risk Governance” as a strategic architecture and management topic in the free KuppingerCole "White Paper: TechDemocracy: Moving towards a holistic Cyber Risk Governance approach".
The trouble with hypes is that they have an expiration date. From that date on they either need to be made real for some very good purposes within a reasonable timeframe, or they go bad. There have been quite a few hype topics around recently. But there have not been many single topics that have been covered by media at a frequency and from many different angles and with as many different focal areas as the Blockchain (or distributed ledgers in general). And most probably none of those articles failed to include the adjective "disruptive".
There have been books, conferences, articles, reference implementations, hackathons, webinars and lots more indicators proving that the blockchain is somehow the prototype of a hype topic.
But apart from the bitcoin currency as one regularly cited (and initial) usage of this technology, there have not been too many clearly visible use cases of the blockchain for the everyday user. Actually, it could be doubted that even bitcoin really is something for the everyday user. Other than every now and then needing one of those for paying ransom to get your encrypted files back...
Many great ideas have been developed and implemented in PoC (Proof of Concept) scenarios, but the truth is that the technology still is not very visible in general (which is quite normal for infrastructure concepts) and that there are no commonly known, outstanding blockchain use cases in the wild. The main challenge is identifying an adequate use case that can be implemented with blockchain technology and that immediately offers benefits for both the end user and the provider of the service.
This might change with the announcements a major insurance company, namely Axa, has made recently. The geeky name "fizzy" stands for an Ethereum-based implementation of a modern insurance concept that automatically covers travellers with an insurance policy in case a booked (and insured) flight is delayed by more than two hours. Blockchain technology provides adequate security; automation, smart contracts and parameterization make it adaptable and available.
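The parametric logic behind such a policy can be sketched in a few lines. This is a plain Python illustration of the concept, not fizzy's actual Ethereum contract; the names, threshold and amounts are invented:

```python
DELAY_THRESHOLD_MINUTES = 120  # policy parameter: two hours

def settle_policy(insured, flight_delay_minutes, payout_amount):
    """Parametric settlement: no claims handling and no human judgement;
    the (oracle-reported) delay alone triggers the payout."""
    if flight_delay_minutes > DELAY_THRESHOLD_MINUTES:
        return {"to": insured, "amount": payout_amount, "paid": True}
    return {"to": insured, "amount": 0, "paid": False}

result = settle_policy("traveller-42", flight_delay_minutes=135, payout_amount=100)
```

In the smart-contract version, this settlement rule lives on-chain and is fed flight data by an oracle, which is what removes the need to file a claim at all.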
By doing this, “fizzy” provides smart, real-life benefit while leveraging the advantages of the blockchain technology. This is exactly what we should expect manifestations of this technology to look like. Instead of aiming at disrupting complete business models, organizations across industries should look into implementing smart and adequate solutions that provide real benefit to the end-user. Until this has proven successful at a convincing scale we might want to postpone the task of "reinventing and disrupting complete industries" into the next project phase.
It seems almost ironic, but the currently and constantly growing number of legal and regulatory requirements might be the most important (and first actually working) catalyst for changing the attitude of organizations towards privacy, even though the true rationale behind it is most probably the substantial fines that come with several of these regulations, first and foremost the GDPR.
The value of customer data, from basic account data to detailed behavioural profiles is undisputed. And whether information is really the new oil in the digitalized economy or if comparisons are misleading anyway: Customer identity and access management (CIAM) is already a core discipline for almost any organization and will be even more so.
Changing the attitude towards consumer and employee privacy does not necessarily mean that all those promising new business models building upon data analytics are prevented by design. But it surely means that all this data can be used for these extended purposes if and only if the data subject (consumer, employee, social user, prospective customer, etc.) gives permission. This user consent is something that companies relying on user data will increasingly need to earn.
The problem with trust is that it needs to be strategically grown over long periods of time, but as it is highly fragile it can be destroyed within a very short period of time. This might be through a data breach, which surely is one worst case scenario. But the mere assumption, maybe just a gut feeling or even hearsay that data might be reused or transferred inappropriately or inspected by (foreign) state authorities can immediately destroy trust and make users turn away and turn towards a competitor.
The real question is why many organizations have not yet started actively building this trusted relationship with their users/customers/consumers/employees. Awareness is rising, so that security and privacy are increasingly moving into the focus not only of tech-savvy users, but also of everyday customers.
Building up trust truly must be the foundation of any business strategy. Designing businesses to be privacy-aware from the ground up is the first and only starting point. This involves both well-thought-out business processes and appropriate technologies. Trustworthy storage and processing of personal data needs to be well-designed and well-executed, and adequate evidence needs to be presented to many stakeholders, including the individual Data Protection Authorities and the users themselves.
Being more trusted and more trustworthy than your competitors will be a key differentiator for many customer decisions today and in the future. And trusting users will be more readily willing to share relevant business information with an organization acting as a data steward, especially when this turns out to be to the benefit of all involved parties. But this must be based on well-executed consent management lifecycles.
KuppingerCole will embark on the Customer Identity World Tour 2017 with 3 major events in Seattle, Paris and Singapore. Trusted and privacy-aware management of customer data will be a main topic for all events. If you want to see your organization as a leader in customer trust, you might want to benefit from the thought leadership and best practices presented there, so join the discussion.
"There is always an easy solution to every problem - neat, plausible, and wrong." (after H. L. Mencken)
Finally, it's beginning: GDPR gains more and more visibility.
Do you also get more and more GDPR-related marketing communication from IAM and security vendors, consulting firms and, ehm, analyst companies? They all offer some pieces of advice for starting your individual GDPR project/program/initiative. And of course, they want you to register your personal data (name, company, position, company size, country, phone, mail, etc.) so they can send that ultimate info package over to you. And obviously, they want to acquire new customers and provide you, like everyone else, with marketing material.
It usually turns out that the content of these packages is OK, but not really overwhelming: a summary of the main requirements of the GDPR, plus, in the best cases, some templates that can be helpful, if you can find them between the marketing material included in the "GDPR resource kit". But the true irony lies in the fact that under the GDPR it is not allowed to make a service conditional on consent to data that is not needed for the service being offered (remember?… name, company, position, company size, country, phone, mail, etc.).
The truth is that GDPR compliance does not come easily, and the promise of an easy shortcut via any GDPR readiness kit won't work out. Instead, both newly designed and already implemented processes for storing and processing personal and sensitive data will have to undergo profound changes.
Don't get me wrong: Having a template for a data protection impact analysis, a prescanned template for breach notification, a decision tree for deciding whether you need a DPO or not, and some training material for your staff are all surely important. But they are only a small part of the actual solution.
So in the meantime, while others promise to give you simple solutions, the Kantara Initiative is working on various aspects of providing processes and standards for the adequate and, especially, GDPR-compliant management of Personally Identifiable Information. These initiatives include UMA (User-Managed Access), Consent and Information Sharing, OTTO (Open Trust Taxonomy for Federation Operators) and IRM (Identity Relationship Management).
Apart from several other objectives and goals, one main task is to be well-prepared for the requirements of the GDPR (and e.g. eIDAS). The UMA standard is now reaching a mature 2.0 status. Just a few days ago, two closely interrelated documents were made available for public review that make the cross-application implementation of consent-based access possible. "UMA 2.0 Grant for OAuth 2.0 Authorization" enables asynchronous party-to-party authorization (between requesting party/client and resource owner) based on rules and policies. "Federated Authorization for User-Managed Access (UMA) 2.0" defines and implements authorization methods on the server side that interoperate between various trust domains. This, in turn, allows the resource owner to define her/his rules and policies for access to protected resources in one single place.
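From the client side, the UMA 2.0 grant boils down to one token-endpoint exchange: the client presents the permission ticket it received from the resource server at the authorization server's token endpoint, optionally pushing a claim token about the requesting party, and receives a requesting party token (RPT) in return. A minimal sketch of building that request might look as follows; the endpoint URL is hypothetical, while the grant type and parameter names follow the public specification:

```python
from urllib.parse import urlencode

# Hypothetical authorization server endpoint.
TOKEN_ENDPOINT = "https://as.example.com/token"

def build_rpt_request(ticket, claim_token=None):
    """Build the token-endpoint request body that exchanges a
    permission ticket for a requesting party token (RPT)."""
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": ticket,
    }
    if claim_token:
        # pushed claims about the requesting party, here an OIDC ID token
        params["claim_token"] = claim_token
        params["claim_token_format"] = (
            "http://openid.net/specs/openid-connect-core-1_0.html#IDToken")
    return TOKEN_ENDPOINT, urlencode(params)

endpoint, body = build_rpt_request("sample-permission-ticket")
```

If the authorization server needs more claims before it can decide, it responds with a fresh ticket and the client repeats the exchange, which is what makes the asynchronous, party-to-party flow possible.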
These methods and technologies serve two major purposes: they enable the resource owner (you and me) to securely and conveniently define consent and to implement and enforce it through technology. And they enable requesting parties (companies, governments and people, again you and me) to have reliable and well-defined access in highly distributed environments.
So, they need to be evaluated as to whether they are adequate methods for getting to GDPR compliance and far beyond: by empowering the individual, enabling compliant business models, providing shared infrastructure and designing means for implementing reliable and user-centric technologies. Following these principles can help achieve compliance. "Beyond" means: take the opportunity of becoming and being a trusted and respected business partner that is known for proactively valuing customer privacy and security. Which is surely much better than only preparing for the first/next data breach.
This surely is not an easy approach, but it goes to the core of the actual challenge. Suggested procedures, standards, guidelines and first implementations are available. They are provided to support organizations in moving towards security and privacy from the ground up. The UMA specifications including the ones described above are important building blocks for those who want to go beyond the simple (and insufficient) toolkit approach.
Big data analytics is becoming more powerful and more affordable at the same time. Probably the most important data within any organisation is knowledge of and insight into its customers' profiles. Many specialized vendors target these organisations. And it is obvious: the identification of customers across devices and accounts, deep insight into their behaviour and the creation of rich customer profiles come with many promises, among them the adjustment, improvement and refinement of existing product and service offerings, as well as the design of new products as customer demand changes.
Dealing with sensitive data is a challenge for any organisation. Dealing with personally identifiable information (PII) of employees or customers is even more challenging.
Recently I have been in touch with several representatives of organisations and industry associations who presented their view on how they plan to handle PII in the future. The potential of leveraging customer identity information today is clearly understood. A hot topic is of course the GDPR, the General Data Protection Regulation as issued by the European Union. While many organisations aim at being compliant from day one (May 25, 2018) onward, it is quite striking that there are still organisations around which don't consider this important. Some consider their pre-GDPR data protection, with a few amendments, as sufficient and subsequently have no strategy for implementing adequate measures to achieve GDPR-compliant processes.
To repeat just a few key requirements: data subject (customer, employee) rights include timely and complete information about the personal data being stored and processed, including a justification for rightfully doing so. Processes for consent management and reliable mechanisms for implementing the right to be forgotten (deletion of PII in case it is no longer required) need to be integrated into new and existing systems.
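A consent lifecycle with support for the right to be forgotten can be sketched as a small data model. This is a deliberately minimal illustration with invented names; real systems also need audit trails, proof of consent and retention rules:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                       # e.g. "newsletter", "profiling"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentStore:
    """Per-subject consent records plus an erasure operation
    supporting the right to be forgotten."""

    def __init__(self):
        self.records = {}  # subject_id -> list of ConsentRecord

    def grant(self, subject_id, purpose):
        rec = ConsentRecord(subject_id, purpose, datetime.now(timezone.utc))
        self.records.setdefault(subject_id, []).append(rec)

    def has_consent(self, subject_id, purpose):
        return any(r.purpose == purpose and r.withdrawn_at is None
                   for r in self.records.get(subject_id, []))

    def erase(self, subject_id):
        """Right to be forgotten: drop all records for the subject."""
        self.records.pop(subject_id, None)

store = ConsentStore()
store.grant("cust-1", "newsletter")
granted = store.has_consent("cust-1", "newsletter")
store.erase("cust-1")
```

The key design point is that consent is always bound to a purpose, so that the "if and only if permission was given" rule can be checked per processing activity, not just per customer.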
It is true: in Europe, and especially in Germany, data protection legislation and regulations have always been challenging. But with the upcoming GDPR, things are changing dramatically. And they are also changing for organisations outside the EU in case they process data of European citizens.
National legislation will fill in details for some aspects deliberately left open within the GDPR. Right now, this seems to weaken or “verschlimmbessern” (make worse by improving, as we say in German) several practical aspects of it throughout the EU member states. Considerable political lobbying is currently going on, and criticism grows, e.g. over the German plans. Nevertheless, at its core the GDPR is a regulation that will apply directly to all European member states (and quite logically also beyond). It will apply to personal data of EU citizens and to data processed by organisations within the EU.
Some organisations fear that compliance with the GDPR is a major competitive drawback compared to organisations, e.g. in the US, which deal with PII under presumably lesser restrictions. But this is not necessarily true, and it is changing as well, as this example shows: the collection of viewing data through software installed on 11 million "smart" consumer TVs, without their owners' consent or even their knowledge, led to a payment of $2.2 million by the manufacturer of these devices to the (American!) Federal Trade Commission.
Personal data (and the term is defined very broadly in the GDPR) is processed in many places, e.g. in IoT devices or in the smart home, in mobile phones, in cloud services or connected desktop applications. Getting to privacy by design and security by design as core principles should be considered as a prerequisite for building future-proof systems managing PII. User consent for the purposes of personal data usage while managing and documenting proof of consent are major elements for such systems.
GDPR and data protection do not mean the end of Customer Identity Management. On the contrary, the GDPR needs to be understood as an opportunity to build trusted relationships with consumers. The benefits and promises described above can still be achieved, but they come at quite a price and with substantial effort, as all of this must be well-executed (i.e. compliant). But this is the real business opportunity as well.
Being a leader, a forerunner and the number one in identifying business opportunities, implementing new business models and occupying new market segments is surely something worth striving for. But being the first to fail visibly and obviously in implementing adequate measures, e.g. for maintaining the newly defined data subject rights, should be considered something that needs to be avoided.
KuppingerCole will cover this topic extensively in the next months with webinars and seminars. And, one year before it comes into effect, the GDPR will be a major focus at the upcoming EIC 2017 in May in Munich as well.
Providing a corporate IT infrastructure is a strategic challenge. Delivering all services needed and fulfilling all requirements raised by all stakeholders is surely one side of the coin. Understanding which services customers and users in general are using, and what they are doing within the organisation’s infrastructure, no matter whether it is on premises, hybrid or in the cloud, is an important requirement. And it is more and more built into the process framework within customer-facing organisations.
The main drivers behind this are typically business-oriented, such as customer relationship management (CRM) processes for the digital business and, increasingly, compliance purposes. So we see many organisations currently learning a great deal about their customers and site visitors, their detailed behaviour and their individual needs. They do this to improve their products, their service offerings and their overall efficiency, which is of course directly business-driven. Understanding your customers comes with the immediate promise of improved business and increased current and future revenue.
But the other side of the coin is often ignored: while customers and consumers are typically kept within clearly defined network areas and online business processes, there are other areas within the corporate network (on-premises and distributed) where different types of users often act far more freely and are far less monitored.
Surprisingly enough, a growing number of organisations know more about their customers than about their employees. But this is destined to prove short-sighted: maintaining compliance with legal and regulatory requirements is only possible when all-embracing and robust processes for managing and controlling access to corporate resources by employees, partners and the external workforce are established as well. Preventing, detecting and responding to threats from inside and outside attackers alike is a constant technological and organisational challenge.
So, do you really know your employees? Most organisations stop once they have scheduled recertification campaigns and implemented some basic SoD (Segregation of Duties) rules. But that does not really help when, for example, a privileged user with rightfully assigned, critical access abuses that access for illegitimate purposes, or a business user account has been hacked.
KYE (Know Your Employee, an acronym that has yet to reach general use) needs to go far beyond traditional access governance. Identifying undesirable behaviour, and ideally preventing it as it happens, requires technologies and processes that can review current events and activities within the enterprise network. Unexpected changes in user behaviour and modified access patterns are indicators of inappropriate behaviour, whether by insiders or by intruders within the corporate network.
Adequate technologies are making their way into organisations, although it has to be admitted that “User Activity Monitoring” is a downright inadequate name for such an essential security mechanism. Contrary to what the name suggests, it is not meant to implement a fully comprehensive, corporate-wide, personalised user surveillance layer. Every solution that aims at identifying undesirable behaviour in real time needs to satisfy the stringent requirements imposed by applicable laws and standards, including data protection regulations, labour law and the general respect for user privacy.
Nevertheless, the deployment of such a solution is possible and often necessary. To achieve this, the solution needs to be strategically well designed from a technical, legal and organisational point of view. All relevant stakeholders, from business to IT and from the legal department to the workers’ council, need to be involved from day one of such a project. A typical approach is that all users are pseudonymized and all information is processed on the basis of data that cannot be traced back to actual user IDs. Outlier behaviour and inadequate changes in access patterns can then be identified without looking at an individual user. The outbreak of a malware infection or the takeover of a privileged account can likewise usually be identified without looking at the individual user. And in the rare case that de-pseudonymization of a user is required, adequate processes have to be in place. These might include a four-eyes principle for the actual de-cloaking and the involvement of the legal department, the workers’ council and/or a lawyer.
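As a rough illustration of that pseudonymization step, the sketch below (in Python, with an entirely hypothetical key and user IDs) maps each user ID to a stable pseudonym via keyed hashing, so a monitoring pipeline can correlate events per account without ever seeing the real ID; reversal is only possible for whoever controls the key:

```python
import hmac
import hashlib

# Hypothetical secret, assumed to live in a separate, access-controlled
# key store; de-pseudonymization is only possible with this key (e.g.
# under a four-eyes process involving the workers' council).
PSEUDONYMIZATION_KEY = b"replace-with-secret-from-key-store"

def pseudonymize(user_id: str) -> str:
    """Map a user ID to a stable pseudonym that cannot be reversed
    without the key. Keyed hashing (HMAC) also prevents the dictionary
    attacks that a plain hash of the user ID would allow."""
    digest = hmac.new(PSEUDONYMIZATION_KEY,
                      user_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # shortened for readability in logs

# The monitoring pipeline only ever stores pseudonyms:
event = {"user": pseudonymize("jdoe"), "action": "login", "host": "srv-042"}
```

The same user ID always yields the same pseudonym, which is exactly what behaviour analytics needs to build per-account baselines without identifying the person behind the account.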
Targeted access analytics algorithms can nowadays assist in the identification of security issues. Thus they can help organisations get to know their employees, especially their privileged business users and administrators. By correlating this information with other data sources, for example threat intelligence data and real-time security intelligence (RTSI), it can form the basis for identifying Advanced Persistent Threats (APTs) that traverse a corporate network infrastructure from the perimeter by using account information and actual access to applications.
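The kind of shift in access patterns mentioned above can be illustrated with a deliberately simple z-score check (account names, counts and the threshold are invented for this sketch; commercial products use far richer behavioural models):

```python
from statistics import mean, stdev

def access_outliers(daily_counts, history, threshold=3.0):
    """Flag accounts whose access volume today deviates by more than
    `threshold` standard deviations from their own past behaviour."""
    flagged = []
    for account, today in daily_counts.items():
        past = history.get(account, [])
        if len(past) < 2:
            continue  # not enough baseline to judge this account
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            if today != mu:  # perfectly constant history, any change is odd
                flagged.append(account)
        elif abs(today - mu) / sigma > threshold:
            flagged.append(account)
    return flagged

# Per-account access counts over the last five days vs. today:
history = {"acct-a": [10, 12, 11, 9, 10], "acct-b": [100, 98, 102, 101, 99]}
today = {"acct-a": 11, "acct-b": 450}  # acct-b suddenly accesses far more
print(access_outliers(today, history))  # ['acct-b']
```

Note that the keys can be the pseudonyms described above: the anomaly is visible per account without knowing which person is behind it.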
KYE will become as important as KYC, though for different reasons. Both rely on intelligent analytics algorithms and a clever design of infrastructure, technology and processes. Both turn big data technology, automation and a well-executed approach to business and security into essential solutions for sustainability, improved business processes and adequate compliance. We expect that organisations which operationalise existing information and modern technology for the constant improvement of both security and the core business can draw substantial competitive advantages from doing so.
Whether public, private or hybrid clouds, whether SaaS, IaaS or PaaS: all these cloud computing approaches differ in particular with respect to whether the processing sites and parties can be determined, and whether the user has influence on the geographical, qualitative and infrastructural conditions of the services provided. This makes it difficult to meet all compliance requirements, particularly in the fields of data protection and data security. The decisive factors are the transparency, controllability and influenceability of the service provider and his [...]