The dream of creating systems that can simulate human thought and behaviour is not new. Now that this dream appears to be coming closer to reality, there is both excitement and alarm. Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race". Should we be alarmed by these developments, and what in practice does this mean today?
The origins of today's AI (Artificial Intelligence) can be traced back to the seminal work on computers by Dr Alan Turing. He proposed an experiment that became known as the "Turing Test" to define the standard for a machine to be called "intelligent": a computer could only be said to "think" if a human could not distinguish it from another human through a conversation with it.
The theoretical work that underpins today's AI and ML (Machine Learning) was developed in the 1940s and 1950s. The early computers of that era were slow and could store only limited amounts of data, which restricted what could practically be implemented. This has now changed: the cloud provides the storage for vast amounts of data and the computing power needed for ML.
The theoretical basis for ML stems from work published in 1943 by Warren McCulloch and Walter Pitts on a computational model for neural networks based on mathematics and algorithms called threshold logic. Artificial neural networks provide a framework for machine learning algorithms to learn based on examples without being formally programmed. This learning needs large amounts of data and the significant computing power which the cloud can provide.
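To make the idea of threshold logic concrete, here is a minimal sketch in Python of a McCulloch-Pitts-style threshold neuron. The function names are illustrative; the point is simply that a unit which fires when a weighted sum of inputs reaches a threshold can already express Boolean logic, the building block from which artificial neural networks grew.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Threshold logic can express Boolean functions, e.g. simple logic gates:
AND = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1))  # 1
print(AND(1, 0))  # 0
print(OR(1, 0))   # 1
```

In modern neural networks the weights are not hand-set as here but learned from examples, which is exactly the shift from programming to training described above.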
Analysing the vast amount of data now available in the cloud creates its own challenges, and ML provides a potential solution to these. Conventional statistical approaches may not be capable of spotting patterns that a human would see, and programming individual analyses is laborious and slow. ML provides a way to supercharge the human ability to analyse data. However, it changes the development cycle from programming to training based on curated examples overseen by human trainers. Self-learning systems may provide a way around the programming bottleneck, but the training-based development cycle creates new challenges around testing, auditing and assurance.
ML has also provided a way to enhance algorithmic approaches to understanding visual and auditory data. It has, for example, enabled facial recognition systems as well as chatbots for voice-based user interactions. However, ML is only as good as its training and is not able to provide explanations for the conclusions that it reaches. This leads to the risk of adversarial attacks, where a third party spots a weakness in the training and exploits it to subvert the system. Nevertheless, ML has been applied very successfully to visual component inspection in manufacturing, where it is faster and more accurate than a human.
One significant challenge is how to avoid bias – there are several reported examples of bias in facial recognition systems. Bias can come from several sources. There may be insufficient data to provide a representative sample. The data may have been consciously or unconsciously chosen in a way that introduces bias. This latter is difficult to avoid since every human is part of a culture which is inherently founded on a set of shared beliefs and behaviours which may not be the same as in other cultures.
Another problem is one of explanation – ML systems are not usually capable of providing an explanation for their conclusions. This makes training ML doubly difficult because when the system being trained gets the wrong answer it is hard to figure out why. The trainer needs to know this to correct the error. In use, an explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify why a decision was taken in a court of law.
A third problem is that ML systems do not have what most people would call "common sense". This is because each is currently narrowly focussed on one specialized problem. Common sense comes from a much wider understanding of the world and allows a human to recognize and discard what may appear to be a logical conclusion because, in the wider context, it is clearly stupid. This was apparent when Microsoft released a chatbot that was supposed to train itself but did not recognize mischievous behaviour.
Figure: AI, Myths, Reality and Challenges
In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular science fiction. ML is ready for practical application, and major vendors offer tools to support this. The problems to which AI can be applied today can be described in two dimensions: the scope of knowledge required and the need for explanation. Note that the need for explanation is greatest where legal justification is required or where the potential consequences of mistakes are high.
Organizations are recommended to look for applications that fit the green area in the diagram and to use caution when considering those that would lie in the amber areas. The red area is still experimental and should only be considered for research.
For more information on this subject attend the AI track at EIC in Munich in May 2019.
Hype topics are important. They are important for vendors, startups, journalists, consultants, analysts, IT architects and many more. The problem with hype topics is that they have an expiration date. Who remembers 4GL or CASE tools as an exciting discussion topic in IT departments? Well, exactly, that's the point...
From that expiration date on, they either have to be used for some very good purposes within a reasonable period of time, or they turn out to be hot air. There have been quite a few hype topics lately. Think, for example, of DevOps, Machine Learning, Artificial Intelligence, IoT, Containers and Microservices, Serverless Computing, and the Blockchain. All of these will be evaluated against their impact in the real world. The Blockchain can even be called a prototype for hype topics. The basic concept of trust in hostile environments through technology and the implementation of cryptocurrencies laid the groundwork for an unparalleled hype. However, there are still no compelling new implementations of solutions using this technology to which any IT-savvy hype expert could refer immediately.
This week I attended the Berlin AWS Summit as an analyst for KuppingerCole. Many important (including many hype) topics, which have now arrived in reality, were looked at in the keynotes, combined with exciting success stories and AWS product and service offerings. These included migration to the cloud, big data, AI and ML, noSQL databases, more AI and ML, containers and microservices, data lakes and analytics, even more AI and ML and much more that is available for immediate use in the cloud and "as a service" to today's architects, developers and creators of new business models.
But if you weren't attentive just for a short moment, you could have missed the first appearance of the Blockchain topic: at the bottom of the presentation slide about databases, in the column "Purpose-Built", you could find "Document", "Key-Value", "In-Memory", "Time Series" and "Graph" databases as well as "Ledger: Amazon QLDB".
Even the word "Blockchain" was missing: a clear technological and conceptual categorization.
Behind this first dry mention is the concept of QLDB as a fully managed ledger solution in the AWS cloud, announced on the next presentation slide as "a transparent, immutable, cryptographically verifiable transaction log owned by a central trusted authority", which many purists will not even think of as a Blockchain. Apart from that, AWS also provides a preview of a fully managed Blockchain based on Hyperledger Fabric or Ethereum.
This development, which has of course already manifested itself in several comparable offerings from competitors, is not the end, but probably only the beginning of the real Blockchain hype. It proves that there is demand for these conceptual and technological building blocks and that this technology is here to stay.
This corresponds directly, and with stunning accuracy, to the development depicted in the trend compass for Blockchain and Blockchain Identity that Martin Kuppinger presented in this video blog post: less hype, less investment volume, but much better understood.
Figure: The Trend Compass - Blockchain Hype
Like every good hype topic that is getting on in years, it has lost a bit of its striking attractiveness to laymen, but gained in maturity for IT, security and governance professionals. In practice, however, it can now play a central role in the choice of adequate tools for the right areas of application. And we will certainly need trust in hostile environments through software, technology and processes in the future.
The QLDB product offered by AWS and the underlying concept cited above is certainly not the only possible and meaningful form of Blockchain, or of a decentralized, distributed and public digital ledger in general. But for an important class of applications of this still disruptive technology, another efficient and cost-effective implementation for real life (beyond the hype) becomes available. Having the Blockchain available in such an accessible form will potentially drive Blockchain in a maturing market to the upper right sector of the trend compass, as an established technology with substantial market volume, even if it might not always be explicitly called "Blockchain" in every context.
An organization's need to support communication and collaboration with external parties such as business partners and customers is just as essential a technical foundation today as it has been in the past. Web Access Management and Identity Federation are two vital and inseparable technologies that organizations can use to manage access to and from external systems, including cloud services, consistently. While the core Web Access Management and Identity Federation technologies have been well established for years, organizations still need a strategic approach to address the growing list of requirements that can support a Connected and Intelligent Enterprise.
New IT challenges are driving the shift in IT from a traditional, internal-facing approach towards an open IT infrastructure supporting this Connected and Intelligent Enterprise. At the core of these changes is the need to become more agile in an increasingly complex and competitive business environment. Because of this, business models have to adapt more rapidly, and organizations need to react more quickly to new attack vectors that are continually changing. Having a Connected Enterprise means that organizations have to deal with more and larger user populations than ever before. Given these new challenges, the technologies that help to support this complex and changing landscape include Cloud, Mobile, Social and Intelligent Computing.
As the changing workforce looks to work from anywhere on any device, the need to manage mobile devices is being placed on organizations. Among these other technologies are new types of cloud-based directory services, as well as various other kinds of Cloud services, including Cloud Identity Services that give flexibility and control for both internal and external identities. Support for social logins such as Facebook, Google+, etc. is also needed and is now considered standard for established Cloud Service Providers. In addition to the foundational Access Management and Identity Federation capabilities, improvements to authentication and authorization technologies such as risk- and context-based Access Management, sometimes called "adaptive" authentication and authorization, are needed too.
Figure: Overall Leadership rating for the Access Management and Federation market segment
In the market segment of Web Access Management and Identity Federation, KuppingerCole is seeing an evolutionary shift in vendor solutions towards support for the Connected and Intelligent Enterprise, to various degrees. In the latest Web Access Management and Identity Federation Leadership Compass, we evaluated 15 vendors in this market segment, as depicted here in this overall leadership chart. So, when considering your organizational requirements for Web Access Management and Identity Federation, you should also think about how your IT infrastructure is connecting and intelligently adapting on-premises IT to the outer world in its many different and changing ways.
To get the latest information on the market, that includes detailed technical descriptions of the leading solutions, see our most recent Web Access Management and Identity Federation Leadership Compass.
Beyond the new data privacy regulations: how to improve customer understanding and the customer experience?
When it comes to state-of-the-art sales and marketing, customer experience (CX) is a highly important topic. Creating and analyzing outstanding customer journeys while considering attractive and suitable marketing touchpoints are seen as key to success when it comes to omnichannel marketing.
The customer experience depends on many factors, all of which have to be considered in terms of strategic and operational marketing. A key topic is the individualization of various marketing touchpoints. Individual content, recommendations, user interfaces, and product offers can lead to a win-win situation: resulting in improved customer satisfaction and marketing success.
Artificial intelligence is a recent megatrend facilitating the customer experience—enabling advanced profiling, predictions, and continuous improvements. The emerging Internet of Things (IoT) is opening the door to consumers' living rooms, enabling "everywhere marketing."
But what about privacy? Undoubtedly, data protection must be part of the story—whether you like it or not. Individual customer journeys depend on the processing of personally identifiable information (PII), which is restricted by regulations in various countries, such as GDPR.
In any case, complying with the relevant legislation is mandatory. Creating an outstanding customer experience while balancing privacy and marketing is a challenge.
Consumers are well aware of the issue of privacy, e.g., due to the latest news about social network data leaks. Furthermore, new technologies such as AI and IoT are seen as critical points of concern by many consumers, as they are not yet fully aware of how these technologies handle their privacy.
Nevertheless, as long as customers are convinced that their data is in safe hands and they see an added value of providing it, they will give consent to process their PII. Transparency and customer understanding are thus essential in this context; this can be achieved e.g., by providing context-oriented information in addition to mandatory privacy policies, or by providing easily configurable privacy centers.
In the end, it can even be a marketing opportunity to convince consumers and customers to provide PII: your company not only needs a high reputation related to its core business, it also helps if it is seen as a champion of privacy. This can lead to sustainable trust, a typical marketing goal.
KuppingerCole provides extensive research and advisory helping privacy and marketing stakeholders to combine the best of their two worlds.
Our research documents give insights into the many topics to be considered and balanced when it comes to creating trustful customer experiences, such as marketing automation, consumer identity management, artificial intelligence, privacy, security, and governance.
We would be pleased to welcome you to experience a customer journey with KuppingerCole—offering many fresh insights in terms of data privacy and CX.
According to the Ponemon Institute, cyber incidents that take over 30 days to contain cost $1m more than those contained within 30 days. However, fewer than 25% of organizations surveyed globally say that their organization has a coordinated incident response plan in place. In the UK, only 13% of businesses have an incident management process in place, according to a government report. This shows a shocking lack of preparedness, since it is a question of when, not if, your organization will be the target of a cyber-attack.
Last week on January 24th I attended a demonstration of IBM’s new C-TOC (Cyber Tactical Operations Centre) in London. The C-TOC is an incident response centre housed in an 18-wheel truck. It can be deployed in a wide range of environments, with self-sustaining power, an on-board data centre and cellular communications to provide a sterile environment for cyber incident response. It is designed to provide companies with immersion training in high pressure cyber-attack simulations to help them to prepare for and to improve their response to these kinds of incidents.
The key to managing incidents is preparation. There are three phases to a cyber incident: the events that lead up to it, the incident itself, and what happens afterwards. Prior to the incident, the victim may have missed opportunities to prevent it. When the incident occurs, the victim needs to detect what is happening and to manage and contain its effects. After the incident, the victim needs to respond in a way that not only manages the cyber-related aspects but also deals with potential customer issues as well as reputational damage.
Prevention is always better than cure, so it is important to continuously improve your organization's security posture, but you still need to be prepared to deal with an incident when it occurs.
The so-called Y2K (Millennium) bug is an example of an incident that was so well managed that some people believe it was a myth. In fact, I, like many other IT professionals, spent the turn of the century in a bunker ready to help any organization experiencing this problem. However, I am glad to say that the biggest problem I met was when I returned to my hotel the next morning: I had to climb six flights of stairs because the lifts had been disabled as a precaution. Many pieces of software contained the error, and it was only through recognition of the problem, rigorous preparation to remove the bug, and planning to deal with it where it arose that major problems were averted.
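For readers who never met it, the root cause of the Y2K bug is easy to sketch. Many legacy systems stored only the last two digits of the year to save space, so date arithmetic broke at the century boundary. The Python below is an illustrative reconstruction, not code from any real system:

```python
def years_elapsed_two_digit(start_yy, end_yy):
    # Legacy style: only the last two digits of each year are stored,
    # so arithmetic across the year-2000 boundary goes wrong.
    return end_yy - start_yy

def years_elapsed_four_digit(start_year, end_year):
    # Remediated style: full four-digit years behave correctly.
    return end_year - start_year

# A contract starting in 1985 ('85') evaluated in 2000 ('00'):
print(years_elapsed_two_digit(85, 0))        # -85, instead of 15
print(years_elapsed_four_digit(1985, 2000))  # 15
```

Finding and fixing every such calculation across decades of accumulated software is what made the remediation effort so large, and its quiet success so easy to mistake for a non-event.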
In the IBM C-TOC I participated in a cyber response challenge involving a fictitious international financial services organization called "Bane and Ox". This organization has a cyber security team and a so-called "Fusion Centre" to manage cyber security incident response. The exercise started with an HR onboarding briefing welcoming me into the team.
We were then taken through an unfolding cyber incident and asked to respond to the events as they occurred: phone calls from the press, attempts to steal money via emails exploiting the situation, a ransom demand, physical danger to employees, customers claiming that their money was being stolen, a data leak, and an attack on the bank's ATMs. I then underwent a TV interview about the bank's response to the event, with hostile questioning by the news reporter. Not a pleasant experience!
According to IBM, organizations need a clear statement of the “Commander’s Intent”. This is needed to ensure that everyone works together towards a common goal that everyone can understand when under pressure and making difficult decisions. IBM gave the example that the D Day Commander’s Intent statement was “Take the beach”.
The next priority is to collect information. “The first call is the most important”. Whether it is from the press, a customer or an employee. You need to get the details, check the details and determine the credibility of the source.
You then need to implement a process to resolve where the problems lie and to take corrective action as well as to inform regulators and other people as necessary. This is not easy unless you have planned and prepared in advance. Everyone needs to know what they must do, and management cover is essential to ensure that resources and budget are available as needed. It may also be necessary to enable deviation from normal business processes.
Given the previously mentioned statistics on organizational preparedness for cyber incidents, many organizations need to take urgent action. The preparation needed involves many parts of the organization, not just IT; it must be supported at board level and involve senior management. Sometimes the response will require faster decision making with the ability to bypass normal processes, and only senior management can ensure that this is possible. An effective response needs planning, preparation and, above all, practice.
- Obtain board level sponsorship for your incident response approach;
- Identify the team of people / roles that must be involved in responding to an incident;
- Ensure that it is clear what constitutes an incident and who can invoke the response plan;
- Make sure that you can contact the people involved when you need to;
- You will need external help – set up the agreement for this before you need it;
- Planning, preparation and practice can avoid pain and prosecution;
- Practice, practice and practice again.
KuppingerCole Advisory Note: GRC Reference Architecture – 72582 provides some advice on this area.
This week I had an opportunity to visit the city of Tel Aviv, Israel to attend one of the Microsoft Ignite | The Tour events the company is organizing to bring the latest information about their new products and technologies closer to IT professionals around the world. Granted, the Tour includes other cities closer to home as well, but the one in Tel Aviv was supposed to have an especially strong focus on security and the weather in January is so warm, so here I was!
I do have to confess, however, that the first day was somewhat boring. Although I could imagine that around 2000 visitors were enjoying the show, for me as an analyst most of the information presented in the sessions wasn't really that new. But on the second day, we visited the Microsoft Israel Development Center in nearby Herzliya and had a chance to talk directly to the people leading the development of some of the most interesting products from Microsoft's security portfolio.
At this point some readers will probably ask: wait a minute, are you suggesting that Microsoft is really a security vendor, let alone the best one? Well, that's where it starts getting interesting! In one of the sessions, the speaker made a strong case for the notion of "good enough security", explaining that most end-user companies do not really need best-of-breed security products, because they'll eventually end up with a massive number of disjointed tools that need to be managed separately.
Not only do these disconnected tools further increase the complexity of a corporate IT infrastructure that is already complex enough without security; they also fail to deliver a unified view of everything happening within it and are thus unable to detect the most advanced cyber threats. Instead, he argued, a well-integrated solution covering multiple areas of cybersecurity would be more beneficial for most, even if it is not best of breed in individual areas. And who has the best opportunity to offer such an integrated solution? Well, Microsoft of course, given its leading positions in several key markets: on endpoints with Windows, in the cloud with Azure and, of course, in the workplace with Office 365.
Now, I’m not sure I like the term “good enough security” and I definitely do not believe that market domination in one area automatically translates into better opportunities in others, but there is actually a grain of truth behind this bold claim. First of all, being present on so many endpoints, cloud computers, mail servers, and other connected systems, Microsoft is able to collect vast amounts of telemetry data that end up in their Intelligent Security Graph – a single database of security events that can provide security insights and threat intelligence.
Second, even though many people still do not realize it, Microsoft has been a proper security vendor for quite some time already. Even though the company was a late starter in many areas, they are quickly closing the gaps in areas like Endpoint Protection or Cloud Security and in others, like Information Protection, they are already ahead of competitors. In recent years, the company has acquired a number of security startups, primarily here in Israel, and making these new products work together seamlessly has been one of their top priorities. This will certainly not happen overnight but talking to the actual developers gave me a strong impression of their motivation and commitment.
Now, Microsoft has an interesting history of working hard for years to win completely new markets, with impressive successes (like Azure or Xbox) and spectacular failures (remember Windows Mobile?). It also seems that technological excellence plays less of a role here than quality marketing. Unfortunately, this is where the company is still falling short. For example, how many potential customers even consider Windows Defender Advanced Threat Protection for a shortlist of EDR solutions? Do they even know that Windows Defender is a full-featured EPP/EDR solution and not just the basic antivirus it used to be?
It seems to me that the company is still exploring their marketing strategy, judging by the number of new product names and licensing changes I’ve seen during the last year. We’re down to 4 product lines now, but I really wish they’d choose one name and stick to it. In the end, do I think that Microsoft is the best security vendor of them all? Of course not, they still have a very long way to go towards that, and there is no such thing as the single “best” security vendor anyway. But they are definitely already beyond the “good enough” stage.
Last week I attended Oracle Open World Europe 2019 in London. At this event, Andrew Sutherland, VP of Technology, told us that security was one of the main reasons why customers were choosing the Oracle autonomous database. This is interesting for two reasons: firstly, it shows that security is now top of mind amongst buyers of IT systems; secondly, it shows that buyers have more faith in technology than in their own efforts.
The first of these reasons is not surprising. The number of large data breaches disclosed by organizations continues to grow and enterprise databases contain the most valuable data. The emerging laws mandating data breach disclosure, such as GDPR, have made it more difficult for organizations to withhold information when they are breached. Being in a position of responsibility when your organization suffers a data breach is not a career enhancing event.
The second reason takes me back to the 1980s, when I was working in the Knowledge Engineering Group at ICL. This group was working as part of the UK Government-sponsored Alvey Programme, which was created in response to the Japanese report on 5th generation computing. This programme had a focus on Intelligent Knowledge Based Systems (IKBS) and Artificial Intelligence (AI). One of the most successful products from this group was VCMS, a performance advisor for the ICL mainframe. This was commercially very successful, with a large uptake in the VME customer base. However, one interesting observation was that it prompted customers to upgrade their mainframes. It became apparent that the buyers of these systems were more ready to accept advice from VCMS than from the customer service representatives.
From an AI point of view managing computer systems is relatively easy. This is because computer systems are deterministic, and their behaviour can be described using rules. These rules may be complex and there may be a wide range of circumstances that need to be considered. However, given the volume of metered data available and the rules an AI system can usually make a good job of optimizing performance. Computer systems manufacturers also have the knowledge needed.
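The rule-based style of systems like VCMS can be sketched in a few lines. This is a minimal illustration of the idea, not a reconstruction of any real product; the metric names and thresholds are invented for the example:

```python
def advise(metrics):
    """Apply simple if-then rules to metered data and return recommendations."""
    advice = []
    if metrics["cpu_utilisation"] > 0.9:
        advice.append("CPU is saturated: consider upgrading or rebalancing workload")
    if metrics["page_faults_per_sec"] > 1000 and metrics["memory_free_mb"] < 256:
        advice.append("Heavy paging with little free memory: add memory")
    if metrics["disk_queue_length"] > 4:
        advice.append("Disk queue is long: spread I/O across more devices")
    return advice or ["No action needed"]

print(advise({"cpu_utilisation": 0.95,
              "page_faults_per_sec": 1500,
              "memory_free_mb": 128,
              "disk_queue_length": 2}))
```

Because the system's behaviour is deterministic and fully metered, each rule encodes knowledge the manufacturer already has, which is why this domain suited expert systems so well.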
That is not to diminish the remarkable achievements by Oracle in making their database systems autonomous. Mark Hurd, CEO of Oracle (who had to present via a satellite link because he had been unable to renew his passport during the US government shutdown), described how the NetSuite team had spent 15 years creating 9,000 specialized table indexes for performance. The autonomous database created 6,000 indexes in a short time, which improved performance by 7%.
Oracle's strategy for AI extends beyond the autonomous database. Melissa Boxer, VP Fusion Adaptive Intelligence, described how Oracle Adaptive Intelligent Apps (Oracle AI Apps) provide a suite of prebuilt AI- and data-driven applications across Customer Experience (CX), Human Capital Management (HCM), Enterprise Resource Planning (ERP), and Manufacturing. These use a shared data model to provide Adaptive Intelligence and Machine Learning across all the different pillars. They include a personalized user experience (OAUX) and intelligent chatbots.
So, to return to the original question – can Autonomous Systems (as defined by Oracle) improve security posture? In my opinion the answer is both yes and no. The benefits of autonomous systems are that they can automatically incorporate patches and implement best practice configuration. Since many data breaches stem from these areas this is a good thing. They can also automatically tune themselves to optimize performance for an individual customer’s workload and, since they discover what is normal, they can potentially identify abnormal activity.
The challenge is that AI technology is not yet able to defend against adversarial activity. Machine Learning is only as good as its training and, at the current state of the art, it does not provide good explanations for the conclusions reached. This means that sometimes the learnt behaviour contains flaws that can be exploited by a malicious agent to trick the system into the wrong conclusions. Given that cybercrime is an arms race, we must assume that cyber adversaries are working with the same technology and that it will be deployed against us as well as for us.
For security, AI can help to avoid the simple mistakes; defending against a concerted attack still needs a human in the loop.
2019 started off with a very noteworthy acquisition in the identity and security space: the purchase of Janrain by Akamai. Janrain is a top vendor in the Consumer Identity market, as recognized in our recent Leadership Compass: https://www.kuppingercole.com/report/lc79059. Portland, OR-based Janrain provides strong CIAM functionality delivered as SaaS for a large number of Global 2000 clients. Boston-based Akamai has a long history of providing web acceleration and content delivery services. Last year, they entered into a partnership whereby Akamai provided network layer protection for Janrain assets.
Akamai has lately been focusing on increasing its market share of web security services in order to grow revenue. This acquisition will add identity layer functionality and increase visibility for the infrastructure company.
New account fraud and account takeover fraud are two of the chief concerns that companies in many industries, particularly finance and retail, must guard against. Bot management has been one of Akamai’s fastest growing services. The further integration of Akamai’s threat intelligence capabilities with Janrain’s CIAM solution has the potential to enhance consumer security for their clients.
As with all such acquisitions, there are two major possible routes their combined service roadmap can take:
- Integrate Janrain's CIAM functionality into Akamai services in a purely supportive way, or
- Integrate Janrain's CIAM functionality into Akamai services while continuing to promote and sell the CIAM services as a standalone solution
In many cases, purchasers in the IT business take the first option. The second option is more difficult to execute, but often offers a better long-term investment for both the purchaser and their clients. Akamai has a defined, well thought-out plan to pursue option 2, to extend the Janrain solution and continue to market it as a CIAM SaaS branded under Akamai.
Given the size of the CIAM market, KuppingerCole expects to see additional M&A activity as well as new entrants in this space in the next 12-18 months. Keep up to date with the latest developments and research in cybersecurity and identity management by watching our blog: https://www.kuppingercole.com/blog.
It wasn't too long ago that discussions and meetings on digitization and consumer identity and access management (CIAM) in an international environment grew increasingly controversial when it came to privacy and the personal rights of customers, employees and users. Back then, the regulations and legal requirements in Europe were difficult to communicate, and the former German data protection law in particular was often belittled as exaggerated or unrealistic.
However, in the past three years, during which I have given many talks, workshops and advisory sessions on the subject of the European General Data Protection Regulation (EU-GDPR), perception has shifted. Many companies, especially large ones, have adopted the concepts of privacy, data security and data protection and have embraced the principles behind them.
Of course, this is especially true for European and German companies, as the implementation phase of the GDPR has been over since the end of May 2018 and the GDPR and its obligations are now fully effective and enforceable. This includes applicability to all companies processing the data of European citizens. Thus, this important milestone of data protection regulation has had considerable effects on international enterprises as well, in particular on large US companies.
I myself, as a consumer, an online services user and a customer, have in the meantime perceived the first positive changes toward a new appreciation of trust and respect as the basis of a customer-supplier relationship (instead of “Hands up, give me all your personal data” as before). That went hand-in-hand with the desire and the expectation that the GDPR as a precedent could also act as a role model.
This is exactly what's happening right now. The first important example is the California Consumer Privacy Act (CCPA). The CCPA was passed at the end of June 2018 and will come into force on January 1, 2020, with actual implementation scheduled to begin sometime between January 1, 2020 and July 1, 2020.
CCPA is surely no 1:1 copy of the GDPR: it is considerably slimmer, a little more readable, leaves out some central demands of the GDPR, and surely benefits from the experience that has already been gained elsewhere.
One thing is obvious: this puts companies in California and the US in a situation comparable to that of EU companies at the beginning of the implementation period, May 25, 2015. Those who have already adjusted their business to accommodate the GDPR will probably be better off, because they only have to deal with the differences between the requirements of GDPR and CCPA. Those enterprises to which the GDPR was perhaps too "far away" must now deal with the requirements of their national legislation and initiate profound changes in their systems, processes and organization...
If CCPA is relevant for you, right now is exactly the right time to embark on this journey.
Beware, this is where the promotional section of this blog post kicks in: Wouldn't it be good if you were able to draw on the experience of an international analyst company with extensive experience in this area? With a local team in the US that has international experience in handling personally identifiable information (PII) from customers, consumers, employees and citizens? That has been incorporating privacy, security and trust into the design of complex (C)IAM systems for years? Do you want to be prepared for the implementation of the CCPA? Do you want to meet the GDPR and CCPA requirements in equal measure and define a strategic path for implementation? Then get in touch with us to have a first chat with our US team.
Register now for KuppingerCole Select and get your free 30-day access to a great selection of KuppingerCole research materials and to live trainings.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love, need to survive, and that brings us further in creating a better future with a human face is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential to lead us into a new era of prosperity like we have not seen before, if we succeed in keeping AI safe, secure and ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that makes it accessible for [...]