Identity isn't hard when you don't always use it. For example, here in the natural world we are anonymous—literally, nameless—in most of our public life, and this is a handy thing. Think about it: none of us walks down the street wearing a name badge, and it would be strange to do so. A feature of civilization is not needing to know everyone's name, or details about their lives, and giving others information about ourselves only on a need-to-know basis.
To be anonymous, however, does not mean to lack distinction. In fact, to be human is to be distinctive: designed by nature to look and sound different from other people, so we can tell each other apart. We also add to our distinctions through clothing, jewelry, haircuts, mannerisms and body art. Our souls are also profoundly original in ways that transcend our genetic portfolio. For example, television star Laverne Cox has an identical twin brother. So does transgender activist Nicole Maines. Being distinctive relieves us of the need to disclose our names all the time, because in most cases all we need is to be recognizable, or familiar, not identified by name. This too is a grace of civilization.
Our identities are also profoundly personal, and often complex. We start with the names given to us by our parents or our tribe. After that we add abbreviations and nicknames, which have conditional uses and conventions. For example, my father was named Allen, but most people called him Al. He and my mother, who was named Eleanor and sometimes went by El, named me David Allen. Mostly they called me Dave. My son Peter's middle name is also Allen, and that's the name he mostly goes by, while family members call him Pete. When I worked in radio, somebody called my on-air persona "Doctor Dave." Then, after I started a business with one of my listeners whose name was also David (and who didn't like being called Dave), he and our co-workers called me Doc to avoid confusion. As my social network expanded through our growing business, the nickname stuck, and I've been mostly called Doc ever since. (By the way, years after we went into business, I found out David's first name was Paul. David was his middle name. Nobody, even in his family, called him Paul.)
Everything I just described falls under what Devon Loffreto was the first to call self-sovereign identity: the kind fundamentally under the control of a single (and in that sense sovereign) individual. All the systems by which organizations give us identifiers he calls administrative.
From their start, administrative identity systems have had a hard time coping with the simple fact that identifiers are optional among human beings having human interactions in the natural world, that our default state within those interactions is to be anonymous yet distinctive—and that we especially value anonymity. Proof of how much we value anonymity is the exception to it we call celebrity. Ask any famous person about the cost of their fame and they'll tell you it's anonymity. The bargain is Faustian: while there are many benefits to celebrity, it is also a curse to be recognized by everyone everywhere, and known by name.
The world's administrative systems have little use for anonymity. After all, they require identifiers for people so they can know whom they are serving, arresting, or sending messages to. Knowing people by name has many advantages for administrative systems, but it also presents problems in the networked world for both those systems and human beings. Requiring "an ID" for every person puts operational and cognitive overhead on both sides. In the natural world, a boundless variety of business interactions require only that the business know the person it encounters is human, trustworthy, and worth the time and effort.
In the networked world, however, we are still stuck with systems composed of “identity providers” and “relying parties” that reduce individuals to mere “users” burdened with logins and passwords—or convenienced by the Faustian bargain of "federated" identities that let them log in with Facebook, LinkedIn or Twitter. In these systems, who we are as individuals is secondary to the needs of identity providers and relying parties and the transactions their systems perform, most of which eliminate anonymity. This is dehumanizing. Even the GDPR, which was created to protect personal privacy and compel respect for it, reduces us in compliance considerations to mere “data subjects”: a label that is barely less demeaning than “user” and “consumer.”
While these systems are digital, their legacy designs are industrial: top-down and one-to-many. They also grew into their current forms within the architecture of the client-server Web, rather than atop the peer-to-peer (aka end-to-end) Internet beneath the Web (and everything else). This made sense in the early days of dial-up and asymmetrical provisioning of bandwidth, but is a stale legacy in a time when everyone has ample bandwidth in both directions, most commonly on a mobile device that works as an extension of one's body and mind.
In today's networked world, we need approaches to identity that start with human agency, and are modeled on the way each of us operates in the natural world. We should be able to disclose and express our distinctions, choices, requirements and existing relationships with ease—and with anonymity as the default social state until we decide otherwise.
These are the base requirements addressed by many of today's pioneering self-sovereign identity systems and approaches. Here's the key thing to bear in mind: while self-sovereign identity needs to work with existing administrative identity systems, self-sovereign identity cannot be fully understood or explained in terms of those systems—any more than personal computing can be explained in terms of a mainframe, or the distributed Internet can be explained in terms of a centralized LAN.
When each of us has full control of our naturally self-sovereign identity in the networked world, there is no limit to what we can do—while the limits of administrative systems are painfully apparent. (Example: logins and passwords, which everyone hates.)
This doesn't mean, by the way, that we should throw out the great work that has been done with administrative systems, especially those that have obeyed Kim Cameron's Seven Laws of Identity, which he first wrote in 2004. Here they are:
1. User control and consent
2. Minimum disclosure for a constrained use
3. Justifiable parties
4. Directed identity
5. Pluralism of operators and technologies
6. Human integration
7. Consistent experience across contexts
Today those laws apply to both self-sovereign and administrative identity, and remain an especially helpful guide if we change the first word in that list from “User” to “Personal.”
The time has come to humanize identity in the networked world by making it as personal as it has been all along in the natural one. We can also make progress a lot faster if veterans of administrative systems try to understand self-sovereign approaches from the perspective of how they, as naturally sovereign human beings, choose to be known.
The Equifax data breach saga continues to unfold. In late 2017, the company admitted it had suffered significant data loss starting in March 2017, with likely multiple data theft events over a number of months. At some point in May, it notified a small group of customers but otherwise kept quiet. Months later the story went public, after Equifax contacted government officials at the US federal and state level. The number and locations of consumers affected by the breach keep growing. As of March 1, 2018, Equifax is reported to have lost control of personally identifiable information on roughly 147 million consumers. Though most of the victims are in the US, Equifax had and lost data on consumers from Argentina to the UK.
Perpetrators made off with data such as names, addresses, Social Security numbers, and in some cases, driver’s license numbers, credit card numbers, and credit dispute files. Much of this is considered highly sensitive PII.
The breach and its effects on consumers are only part of the story. Equifax faces 240 class action lawsuits and legal action from every US state. However, the US Consumer Financial Protection Bureau is not investigating, has issued no subpoenas, and has essentially “put the brakes” on any punitive actions. The US Federal Trade Commission (FTC) can investigate, but its ability to levy fines is limited. On March 14, 2018, the US Securities and Exchange Commission (SEC) brought insider trading charges against one of Equifax's executives, who exercised his share options and then sold before news of the breach was made public.
Given that Equifax is still profiting, and the stock price seems to have suffered no lasting effects (some financial analysts are predicting the stock price will reach pre-breach levels in a few months), fines are one of the few means of incentivizing good cybersecurity and privacy practices. Aiming for regulatory compliance is considered by most in the field to be the bare minimum that enterprises should strive for with regard to security. A failure to strictly enforce consumer data protection laws, as in the Equifax case so far, may set a precedent, and may allow other custodians of consumers’ personal data to believe that they won’t be prosecuted if they cut corners on cybersecurity and privacy. Weak security and increasing fraud are not good for business in general.
At the end of May 2018, the General Data Protection Regulation (GDPR) comes into effect in the EU. GDPR requires 72-hour breach notification and gives officials the ability to fine companies that fail to protect the personal data of EU persons up to 4% of global annual revenue or €20M, whichever is greater, per instance. If an Equifax-like data breach happens in the EU after GDPR takes hold, the results will likely be very different.
Regulators in all jurisdictions must enforce the rules on the books for the good of consumers.
Traditional endpoint and infrastructure security approaches track changes to the OS, applications and communication by monitoring them through dedicated solutions installed as agents on the actual system. These solutions often search for specific violations and act on predefined whitelists of applications and processes or blacklists of identified threats.
Due to their architecture, virtualization platforms and cloud infrastructures have completely different access to security-relevant information. Intelligently executed, they can correlate real-time data with current threats. But much more is possible from the central and unique vantage point these virtualized architectures allow. Observing the behavior of components in the software-defined network, comparing it with their expected behavior, and identifying unexpected deviations allows the detection and treatment of previously unknown threats, up to and including zero-day attacks.
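The core idea behind this behavior-based detection can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any vendor's actual implementation: during a learning phase, record which (process, destination) pairs each workload normally exhibits; afterwards, flag anything outside that learned baseline.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy sketch of behavior-based detection: learn which
    (process, destination) pairs each workload normally exhibits,
    then flag any event that falls outside the learned baseline."""

    def __init__(self):
        self.learning = True
        self.baseline = defaultdict(set)  # workload -> set of (process, dest)

    def observe(self, workload, process, dest):
        event = (process, dest)
        if self.learning:
            # Learning phase: record the event as expected behavior.
            self.baseline[workload].add(event)
            return None
        if event not in self.baseline[workload]:
            return f"ALERT: {workload} ran {process} -> {dest}, outside baseline"
        return None  # matches the baseline, nothing to report

monitor = BehaviorBaseline()
monitor.observe("web-01", "nginx", "10.0.0.5:5432")  # learned as normal
monitor.learning = False                             # switch to enforcement
print(monitor.observe("web-01", "nginx", "10.0.0.5:5432"))  # None
print(monitor.observe("web-01", "nc", "203.0.113.9:4444"))  # alert
```

Real products add statistical models and threat-intelligence correlation on top, but the learn-then-compare loop is the essence of detecting previously unknown threats.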
Manufacturers such as Citrix and VMware are working at full speed to provide high-performance, integrated security infrastructures as part of their platforms. These may be delivered, for example, not only as a component of the hypervisor, but also as a component of a hybrid security architecture spanning cloud, virtualization and bare metal.
By going beyond traditional “known good” and “known bad” approaches based on blacklisting and whitelisting, such solutions provide an intelligent approach to infrastructure security. Capturing the actual runtime behavior of existing software systems to learn expected and appropriate behavior, then applying algorithmic control and monitoring in later phases, has the potential to cover a vast number of systems, including homegrown and enterprise-critical ones. Earlier this year, KuppingerCole published an Executive View research document on VMware AppDefense as a representative of this innovative security approach. And just this week VMware announced the availability of AppDefense in EMEA as well as extended capabilities to protect containerized workloads.
When legal laypersons (like me) read legal texts and regulations, they often miss clear, binding guidelines on how to implement them in practice. This is largely because laws are designed to last and are not directly geared to concrete measures. Such texts and provisions instead contain regular references to the respective "state of the art".
For example, it is obvious that detailed requirements on how companies should implement the protection of the privacy of customers and employees cannot necessarily be found in the EU General Data Protection Regulation (GDPR). The appropriate implementation of such requirements is a considerable challenge and offers substantial scope for interpretation, not least when having to decide between "commercially sensible" and "necessary".
While many organizations currently focus on the implementation of the GDPR, BaFin (the German Federal Financial Supervisory Authority, "Bundesanstalt für Finanzdienstleistungsaufsicht") published a revised version of its "Minimum Requirements for Risk Management" ("Mindestanforderungen an das Risikomanagement", MaRisk). Often unknown outside of the financial sector, this regulatory document provides a core framework for the overall conduct of financial business in Germany and, by extension, worldwide. MaRisk concretizes § 25a Paragraph 1 of the German Banking Act ("Kreditwesengesetz", KWG) and is therefore its legally binding interpretation.
The new version of MaRisk has been extended with a requirements document that deals with its concrete implementation in banking IT, a concretisation of MaRisk itself, so to speak. This gives financial institutions clear and binding guidelines that become valid without a long-term implementation period. This document, entitled "Supervisory Requirements for IT in Financial Institutions" ("Bankaufsichtliche Anforderungen an die IT", BAIT), covers a large number of important topics in the implementation of measures to meet the IT security requirements for banks.
It does this by describing (and calling for) an appropriate technical and organizational design of IT systems for financial services. Particular attention has to be paid to information security requirements. It aims at improving IT service continuity management and information risk management and defines how new media should be handled appropriately. Beyond pure technology, a variety of measures are designed to create an enterprise risk culture and to increase employee awareness for IT security and risk management. And it includes specific requirements for modernizing and optimizing the bank's own IT infrastructure, but gives clear advice also with regard to the aspect of outsourcing IT (think: cloud).
Financial institutions must define and implement an information security organization, in particular by appointing an information security officer. Adequate resource planning in support of the defined information security requirements must ensure that the agreed security level can actually be achieved.
For national and international banks, meeting these requirements is a substantial challenge, in particular due to their immediate applicability. But should you be interested in these requirements if you are not active in Germany, or perhaps not a bank at all?
From my point of view: Yes! Because it is not easy to find such clear and practice-oriented guidelines for an appropriate handling of IT security within the framework of regulatory requirements. And it is to be expected that similar requirements will become increasingly relevant in other regions and sectors in the future.
KuppingerCole will continue to monitor this topic in the future and integrate the criteria of the BAIT as a relevant module for requirements definitions in the area of enterprise IT security.
The Facebook data privacy story continues to be in the headlines this week. For many of us in IT, this event is not really a surprise. The sharing of data from social media is not a data breach, it’s a business model. Social media developers make apps (often as quizzes and games) that harvest data in alignment with social networks’ terms of service. By default, these apps can get profile information about the app users and their friends/contacts. There are no granular consent options for users. What gives this story its outrage factor is the onward sharing of Facebook user data from one organization to another, and the political purposes for which the data was used. Facebook now admits that the data of up to 87 million users was used by Cambridge Analytica. If you are a US-based Facebook user, and are curious about how they have categorized your politics, go to Settings | Ads | Your Information | Your Categories | US Politics.
But data made available through unsecured APIs, usually exported in unprotected file formats without fine-grained access controls or DRM, cannot be assumed to be secure in any way. Moreover, the Facebook - Cambridge Analytica incident is probably just the first of many that are as yet unreported. There are thousands of apps and hundreds of thousands of app developers that have had similar access to Facebook and other social media platforms for years.
CNBC reports that Facebook was attempting to acquire health record data from hospitals, but that those plans are on “hiatus” for the moment. Though the story says the data would be anonymized, there is no doubt that unmasked health care records plus social media profile information would be incredibly lucrative for Facebook, health care service providers, pharmaceutical companies, and insurance companies. But again, according to this report, there was no notion of user consent considered.
It is clear that Facebook users across the globe are dissatisfied with the paucity of privacy controls. In many cases, users are opting out by deleting their accounts, since that seems to be the only way at present to limit data sharing. However, the problem of data sharing without user consent is endemic to most social networks, telecommunications networks, ISPs, smartphone OSes and app developers, free email providers, online retailers, and consumer-facing identity providers. They collect information on users and sell it. This is how these “free” services pay for themselves and make a profit. The details of such arrangements are hidden in plain sight in the incomprehensible click-through terms of service and privacy policies that everyone must agree to in order to use the services.
This is certainly not meant to blame the victim. At present, users of most of these services have few if any controls over how their data is used. Even deleting one’s account doesn’t work entirely: a Belgian court ruled against Facebook for collecting information on Belgian citizens who were not even Facebook users.
The rapidly approaching May 25th GDPR effective date will certainly necessitate changes in the data sharing models of social media and all organizations hosting and processing consumer data for EU persons. Many have wondered if GDPR will be aggressively enforced. As a result of this Facebook – Cambridge Analytica incident, EU Justice Commissioner Vera Jourova said “I will take all possible legal measures including the stricter #dataProtection rules and stronger enforcement granted by #GDPR. I expect the companies to take more responsibility when handling our personal data.” We now have the answer to the “Will the EU enforce GDPR?” question.
It is important to note that GDPR does not aim to put a damper on commerce. It only aims to empower consumers by giving them control over what data they share and how it can be used. GDPR requires explicit consent per purpose (with some exceptions for other legitimate processing of personal data). This consent per purpose stipulation will require processors of personal data to clearly ask and get permission from users.
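What consent per purpose implies for processors of personal data can be illustrated with a minimal Python sketch. The `ConsentLedger` class and its method names are hypothetical, invented here for illustration, not part of any GDPR tooling: processing is permitted only when an explicit grant exists for that exact (subject, purpose) pair, and withdrawal takes effect immediately.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Hypothetical sketch: record and check explicit consent
    per purpose, as GDPR's purpose limitation implies."""

    def __init__(self):
        self._grants = {}  # (subject, purpose) -> timestamp of grant

    def grant(self, subject, purpose):
        # Record explicit consent for one specific purpose only.
        self._grants[(subject, purpose)] = datetime.now(timezone.utc)

    def withdraw(self, subject, purpose):
        # Withdrawal must be honored immediately.
        self._grants.pop((subject, purpose), None)

    def may_process(self, subject, purpose):
        # No grant for this exact purpose means no processing.
        return (subject, purpose) in self._grants

ledger = ConsentLedger()
ledger.grant("alice", "newsletter")
print(ledger.may_process("alice", "newsletter"))    # True
print(ledger.may_process("alice", "ad_targeting"))  # False: separate purpose
```

The key design point is that consent attaches to a purpose, not to the service as a whole: a grant for the newsletter says nothing about ad targeting.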
Other countries are looking to the GDPR model for revamping their own consumer privacy regulations. We predict that in many jurisdictions, similar laws will come into effect, forcing social networks and consumer-facing companies to change how they do business in more locations.
Even before the Cambridge Analytica story broke, Facebook, Google, and Twitter were under fire for allowing their networks to spread “fake news” in the run-up to the US election cycle. Disengagement was growing, with some outlets reporting 18-24% less time spent on site per user. Users are quickly losing trust in social media platforms for multiple reasons. This impacts commerce as well, in that many businesses such as online retailers rely on “social logins” such as Facebook, Twitter, Google, etc.
To counter their growing trust problems, social network providers must build in better privacy notifications and consent mechanisms. They must increase the integrity of content without compromising free speech.
Facebook and other social media outlets must also communicate these intentions to improve privacy controls and content integrity monitoring to their users. In the Facebook case, it is absolutely paramount to winning back trust. CEO Mark Zuckerberg announced that Facebook is working on GDPR compliance but provided no details. Furthermore, he has agreed to testify before the US Congress, but his unwillingness to personally appear in the UK strengthens a perception that complying with EU data protection regulations is not a top priority for Facebook.
If social network operators cannot adapt in time, they will almost certainly face large fines under GDPR. It is quite possible that the social media industry may be disrupted by new privacy-protecting alternatives, funded by paid subscriptions rather than advertising. The current business model of collecting and selling user data without explicit consent will not last. Time is running out for Facebook and other social network providers to make needed changes.
Just when you thought we had enough variations of IAM, along comes FIAM: fake identity and access management. Fake digital identities are not new, but they are getting a lot of attention in the press these days. Some fake accounts are very sophisticated and are difficult for automated methods to recognize. Some are built using real photos and stolen identifiers, such as Social Security numbers or driver’s license numbers. Many of these accounts look like they belong to real people, making it difficult for social media security analysts to flag them for investigation and remove them. With millions of user credentials, passwords, and other PII available on the dark web as a result of the hundreds of publicly acknowledged data breaches, it’s easy for bad actors to create new email addresses, digital identities, and social media profiles.
As we might guess, fake identities are commonly used for fraud and other types of cybercrime. There are many different types of fraudulent use cases, ranging from building impostor identities and attaching them to legitimate user assets, to impersonating users to spread disinformation, to defamation, extortion, catfishing, stalking, trolling, etc. Fake social media accounts were used by the St. Petersburg-based Internet Research Agency to disseminate election-influencing propaganda. Individuals associated with these events have been indicted by the US, but won’t face extradition.
Are there legitimate uses for fake accounts? In many cases, social network sites and digital identity providers have policies and terms of service that prohibit the creation of fake accounts. In the US, violating websites’ terms of service also violates the 1986 Computer Fraud and Abuse Act. Technically then, in certain jurisdictions, creating and using fake accounts is illegal. It is hard to enforce, and sometimes gets in the way of legitimate activities, such as academic research.
However, it is well-known that law enforcement authorities routinely and extensively use fake digital identities to look for criminals. Police have great success with these methods, but also scoop up data on innocent online bystanders as well. National security and intelligence operatives also employ fake accounts to monitor the activities of individuals and groups they suspect might do something illegal and/or harmful. It’s unlikely that cops and spies have to worry much about being prosecuted for using fake accounts.
A common approach, documented in Frederick Forsyth's 1971 novel "The Day of the Jackal", is to use the names and details of dead children. This creates a persona that is very difficult to identify as fraudulent. It is still reported to be in use, and when discovered it causes immense distress to the relatives.
In the private sector, employees of asset repossession companies also use fake accounts to get close to their targets, making it easier to repo their cars and other possessions. Wells Fargo has had an ongoing fake account creation scandal, in which up to 3.5 million fake accounts were created so that the bank could charge customers more fees. The former case is sneaky and technically illegal, while the latter case is clearly illegal. What are the consequences for Wells Fargo? It may have suffered a temporary stock price setback and a credit downgrade, but its CEO got a raise.
FIAM may sound like a joke, but it is a real thing, complete with technical solutions (using above-board IDaaS and social networks), as well as laws and regulations sort of prohibiting the use of fake accounts. FIAM is at once a regular means of doing business, a means for spying, and an essential technique for executing fraud and other illegal activities. It is a growing concern for those who suffer loss, particularly in the financial sector. It is also now a serious threat to social networks, whose analysts must remove fake accounts as quickly as they pop up, lest they be used to promote disinformation.
GDPR comes into force on May 25th this year. Its obligations are stringent, the penalties for non-compliance are severe, and yet many organizations are not fully prepared. There has been much discussion in the press around the penalties under GDPR for data breaches. KuppingerCole’s advice is that preparation based on six key activities is the best way to avoid these penalties. The first two of those activities are finding the personal data and controlling access to it.
While most organizations will be aware of where personal data is used as part of their normal business operations, many use this data indirectly, for example as part of test and development activities. Because of the wide definition of processing given in GDPR, this use is also covered by the regulation. The Data Controller is responsible for demonstrating that this use of personal data is fair and lawful. If this can be shown, the Data Controller will also need to show that this processing complies with all the other data protection requirements.
While the costs and complexities of compliance with GDPR may be justified by the benefits of using personal data for normal business processes, this is unlikely to be the case for its non-production use. However, the GDPR provides a way to legitimately avoid the need for compliance. According to GDPR (Recital 26), the principles of data protection should not apply to anonymous information, that is, information which does not relate to an identified or identifiable natural person, or to personal data rendered anonymous in such a manner that the data subject is not identifiable.
One approach is known as pseudonymisation, and GDPR accepts the use of pseudonymisation as an approach to data protection by design and data protection by default. (Recital 78). Pseudonymisation is defined in Article 4 as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information...” with the additional proviso that the additional information is kept separate and well protected.
In addition, Under Article 6 (4)(e), the Data Controller can take account of the existence of appropriate safeguards, which may include encryption or pseudonymisation, when considering whether processing for another purpose is compatible with the purpose for which the personal data were initially collected and the processing for another purpose. However, the provisos introduce an element of risk for the Data Controller relating to the reversibility of the process and protection of any additional information that could be used identify individuals from the pseudonymized data.
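The Article 4 definition can be made concrete with a minimal Python sketch of keyed pseudonymization. The key name and its storage here are placeholders: in a real deployment the key is the "additional information" and must be held separately and well protected, for example in an HSM or vault, away from the pseudonymized data set.

```python
import hashlib
import hmac

# Placeholder for the separately held "additional information":
# whoever controls this key can re-identify the data subjects.
SECRET_KEY = b"store-me-in-a-separate-hsm-or-vault"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA256).
    Without the key, the pseudonym cannot be attributed back to the
    person; with it, the same input always yields the same pseudonym,
    so records stay linkable for testing or analytics."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

p1 = pseudonymize("jane.doe@example.com")
p2 = pseudonymize("jane.doe@example.com")
assert p1 == p2          # deterministic: records remain linkable
assert "jane" not in p1  # the direct identifier itself is gone
```

An HMAC rather than a plain hash matters here: a plain SHA-256 of an email address can be reversed by brute-forcing candidate addresses, whereas the keyed construction ties reversibility to possession of the separately stored key.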
However, not all approaches to anonymization and pseudonymisation are equal. In 2014, the EU Article 29 Working Party produced a report giving its opinion on anonymization techniques as applied to EU privacy. Although it is written with reference to the previous Directive 95/46/EC, it is still very relevant. It identifies three tests which should be used to judge an anonymization technique:
- is it still possible to single out an individual?
- is it still possible to link records relating to an individual?
- can information be inferred concerning an individual?
It also provides examples of where anonymization techniques have failed. For example, in 2006, AOL publicly released a database containing twenty million search keywords for over 650,000 users over a 3-month period. The only privacy preserving measure consisted of replacing the AOL user ID by a numerical attribute. This led to the public identification and location of some of the users by the NY Times and other researchers.
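A tiny Python sketch shows why numeric pseudonyms alone fail all three of the Working Party's tests: the queries themselves still allow singling out, linking, and inference. The log lines below echo the widely reported AOL example, trimmed and simplified for illustration.

```python
from collections import defaultdict

# Simplified AOL-style log: user IDs replaced by numbers,
# but the search queries themselves left intact.
log = [
    (4417749, "landscapers in Lilburn, GA"),
    (4417749, "homes sold in shadow lake subdivision"),
    (4417749, "dog that urinates on everything"),
    (998311, "cheap flights to Berlin"),
]

# Linking (test 2) is trivially possible: one dict lookup joins
# every query by the same person under a single key.
profiles = defaultdict(list)
for pseudonym, query in log:
    profiles[pseudonym].append(query)

# Singling out (test 1): pseudonym 4417749 isolates one person's
# entire search history. Inference (test 3): the location and
# topics in those queries narrow that person down enormously,
# which is exactly how reporters re-identified AOL users.
print(len(profiles[4417749]))  # 3 queries attributable to one individual
```

The lesson is that replacing the identifier changes nothing about the quasi-identifiers left in the payload; genuine anonymization has to address the content, not just the key.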
Pseudonymization provides a useful control over the privacy of personal data and is recognized by GDPR as a component of privacy by design. However, it is vital that you choose the appropriate pseudonymization techniques for your use case and apply them correctly. For more information on this subject, attend KuppingerCole’s webinar “Acing the Upcoming GDPR Exam”. There will also be a stream of sessions on GDPR at KuppingerCole’s European Identity & Cloud Conference in Munich, May 15-18, 2018.
Microsoft recently announced general availability for another addition to its cybersecurity portfolio: Azure Advanced Threat Protection (Azure ATP for short), a cloud-based service for monitoring and protecting hybrid IT infrastructures against targeted cyberattacks and malicious insider activities.
The technology behind this service is actually not new. Microsoft acquired it back in 2014 with the purchase of Aorato, an Israel-based startup specializing in hybrid cloud security solutions. Aorato’s behavior detection methodology, named Organizational Security Graph, enables non-intrusive collection of network traffic, event logs and other data sources in an enterprise network and then, using behavior analysis and machine learning algorithms, detects suspicious activities, security issues and cyberattacks against corporate Active Directory servers.
Although this may sound like an overly specialized tool, in reality solutions like this can be a very useful addition to any company’s security infrastructure. After all, according to statistics, the vast majority of security breaches leverage compromised credentials, and close monitoring of the heart of nearly every company’s identity management – the Active Directory servers – allows for quicker identification of both known malicious attacks and traces of unknown but suspicious activities. And since practically every cyberattack involves manipulating stolen credentials at some stage of the kill chain, identifying them early allows security experts to discover these attacks much earlier than the typical 99+ days.
Back in 2016, we reviewed Microsoft Advanced Threat Analytics (ATA), the first product Microsoft released with the Security Graph technology. KuppingerCole’s verdict at the time was that the product was easy to deploy, transparent and non-intrusive, with an innovative and intuitive user interface, yet powerful enough to identify a wide range of security issues, malicious attacks and suspicious activities in corporate networks. However, the product was only intended for on-premises deployment and provided very limited forensic and mitigation capabilities due to a lack of integration with other security tools.
Well, with the new solution, Microsoft has successfully addressed both of these challenges. Azure ATP, as evident from its name, is a cloud-based service. Although you obviously still need to deploy sensors within your network to capture network traffic and other security events, the collected data is sent directly to the Azure cloud, and all the correlation magic happens there. This makes the product substantially more scalable and a fit even for the largest corporate networks. In addition, it can directly consume the latest threat intelligence data collected by Microsoft across its cloud infrastructure.
On top of that, Azure ATP integrates with Windows Defender ATP – Microsoft’s endpoint protection platform. If you’re using both platforms, you can seamlessly switch between them for additional forensic information or direct remediation of malware threats on managed endpoints. In fact, the company’s Advanced Threat Protection brand now also includes Office 365 ATP, which provides protection against malicious emails and URLs, as well as secures files in Office 365 applications.
With all three platforms combined, Microsoft can now offer seamless protection against malicious attacks across the most critical attack surfaces as a fully managed cloud-based solution.
CyberArk, an overall leader in privilege management according to the KuppingerCole Leadership Compass on Privilege Management, announced yesterday that it has acquired certain assets of Vaultive, a privately held, US-based Israeli cloud security provider.
Concerns around data encryption have emerged as a key inhibitor for organizations seeking to adopt cloud services. Most cloud providers today offer their own encryption to ensure that data in transit and at rest remains unreadable if a breach occurs. However, as organizations adopt multiple SaaS applications, the varied encryption standards and inconsistent key management practices of cloud providers can quickly lead to a complex environment with little visibility into, and control over, the keys.
While most privilege management products today can help with credential vaulting and monitoring of shared administrative access to cloud platforms (including SaaS, IaaS and PaaS), they are largely ineffective against the risk of privileged credentials being compromised directly at the cloud provider’s end. Some cloud access security brokers (CASBs) can mitigate this risk by offering data encryption capabilities that keep the encryption of data at rest, and the management of the keys, separate from the cloud providers. However, CASBs lack privileged account management capabilities and usually do not support on-premises systems. Therefore, organizations requiring complete control of privileged access across cloud platforms have no option but to integrate a CASB’s capabilities with their privilege management solution. CyberArk’s acquisition of Vaultive is primarily aimed at solving this challenge for its customers.
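The underlying idea of separating encryption and key management from the provider can be illustrated with a toy sketch: data is encrypted client-side with a key the customer controls, so the provider only ever stores ciphertext. The "cipher" below is a throwaway SHA-256 counter-mode keystream – deliberately not a production algorithm – used purely to show the key-separation pattern.

```python
# Toy illustration of customer-held key management: the cloud provider
# stores only ciphertext; decryption requires a key that never leaves
# the organization. The keystream cipher here is NOT secure -- a real
# deployment would use a vetted AEAD primitive such as AES-GCM.
import hashlib
import secrets


def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy stream cipher).

    The same call both encrypts and decrypts, since XOR is its own inverse.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))


# The customer-held key never leaves the organization.
customer_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)

record = b"employee=jane.doe; role=finance-admin"
ciphertext = keystream_xor(customer_key, nonce, record)   # what the provider stores
restored = keystream_xor(customer_key, nonce, ciphertext)  # only the key holder can do this
```

Even a provider-side compromise then exposes only ciphertext, which is exactly the gap in provider-managed encryption that the CASB approach closes.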
Vaultive is a cloud data encryption platform that helps organizations retain control of their encryption keys, providing end-to-end encryption of data across cloud platforms. CyberArk, with its existing capabilities to manage privileged access in cloud platforms, can benefit from Vaultive's data encryption capabilities to:
- assure its customers of exclusive administrative access to the cloud while retaining control over the entire data lifecycle
- extend its privilege management capabilities beyond administrative access to cover privileged business users of SaaS applications
- build finer-grained privileged access control for cloud environments using context-aware access policies from Vaultive
While only time will tell how well CyberArk is able to integrate and promote Vaultive's Cloud Data Security platform within its privileged account and session management capabilities for the cloud, this acquisition comes in the wake of a conscious and well-thought-out decision to offer a one-stop cloud security solution for its customers.
In May 2017, my fellow KuppingerCole analyst Mike Small published the Executive Brief research document entitled “Six Key Actions to Prepare for GDPR” (then and now free to download). This was published almost exactly one year before the GDPR takes full effect and outlines six simple steps needed to adequately prepare for this regulation. “Simple” here means “simple to describe”, but not necessarily “simple to implement”. However, while time has passed since then, and further regulations and laws are gradually gaining additional importance, properly ensuring consumers’ privacy remains a key challenge today.
An even briefer summary of the recommendations provided by Mike is: (1) find personal data in your organization; (2) control access to it; (3) store and process it legally and fairly, e.g. by obtaining and managing consent; (4) do all this in the cloud as well; (5) work to prevent a data breach, but be properly prepared for what to do should one occur; and finally (6) implement privacy engineering, so that IT systems are designed and built from the ground up to ensure data privacy.
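Step (1), finding personal data, is where most organizations start, and also where automation helps most. As a minimal sketch of the idea – real discovery tools combine many detectors for names, identifiers, health data and more – a toy scanner might flag e-mail addresses and international phone numbers as stand-ins for personal data:

```python
# Minimal sketch of personal-data discovery via pattern matching.
# The two patterns below are illustrative stand-ins; production tools
# use far richer detector sets and contextual classification.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\+\d{2}[\s\d]{8,14}\d"),
}


def scan_for_personal_data(text: str) -> dict:
    """Return a mapping of detector name -> list of matches found."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}


doc = "Contact Jane at jane.doe@example.com or +49 170 1234567."
hits = scan_for_personal_data(doc)
```

Every hit then feeds the later steps: the locations found are the ones whose access must be controlled and whose processing must have a legal basis.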
While tool support for these steps was not overwhelming back then, things have changed in the meantime. Vendors inside and outside the EU have understood the key role they can play in supporting and guiding their customers on their path to compliance by providing built-in and additional controls in their systems and platforms. Compliance and governance are no longer just ex-post reports and dashboards (although these are still essential for providing adequate evidence). Applications and platforms in daily use now provide actionable tools and services to support privacy, data classification, access control, consent management, and data leakage prevention.
One example: Microsoft’s Office and software platforms continue to be an essential set of applications for almost all organizations, especially in their highly collaborative and cloud-based incarnations with the suffix 365. Just recently, Microsoft announced the availability of a set of additional tools to help organizations implement an information protection strategy with a focus on regulatory and legal requirements (including EU GDPR, ISO 27001, ISO 27018, NIST 800-53, NIST 800-171, and HIPAA) across the Microsoft 365 platforms.
For data, processes and applications running within their ecosystems, these tools support the implementation of many of the steps described above. By automatically or semi-automatically detecting and classifying personal data relevant to the GDPR, the process of identifying where this kind of data is stored and processed can be simplified. Data protection across established client platforms as well as on-premises is supported through labeling and access control. This labeling mechanism, together with Azure Information Protection and Microsoft Cloud App Security, extends the reach of stronger data protection into the cloud.
An important component at the enterprise level is Compliance Manager, which is available to Azure, Dynamics 365, and Office 365 Business and Enterprise customers in public clouds. It enables continuous risk assessment processes across these platforms, deriving individual and specific compliance scores from weighted risk scores and the controls and measures already implemented.
In your organization’s ongoing journey to achieve and maintain compliance with the GDPR as well as with other regulations, you need your suppliers to become your partners. In this respect, other vendors have announced tools and strategies for several other applications, as well as virtualization and infrastructure platforms, ranging from VMware to Oracle and from SAP to Amazon. Leveraging their efforts and tools can greatly improve your strategy for implementing continuous privacy and security controls.
So, if you are using platforms that provide such tools and services, you should evaluate their use and benefit to you and your organization. Where appropriate, embed them into your processes and workflows as fundamental building blocks as part of your individual strategy for compliance. There is not a single day to waste, as the clock is ticking.
Companies continue spending millions of dollars on their cybersecurity. With the increasing complexity and variety of cyber-attacks, it is important for CISOs to set correct defense priorities and be aware of state-of-the-art cybersecurity mechanisms. [...]