KuppingerCole Blog

The Ethics of Artificial Intelligence

Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." The ethical questions around Artificial Intelligence were discussed at a meeting led by the BCS President Chris Rees in London on October 2nd. This is also an area covered by KuppingerCole under the heading of Cognitive Technologies, and this post summarizes some of the issues that need to be considered.

Firstly, AI is a generic term, and it is important to understand precisely what it means. Currently, the state of the art can be described as Narrow AI. This is where techniques such as ML (machine learning), combined with massive amounts of data, provide useful results in narrow fields, for example the diagnosis of certain diseases and predictive marketing. There are now many tools available to help organizations exploit and industrialise Narrow AI.

At the other extreme is what is called General AI where the systems are autonomous and can decide for themselves what actions to take. This is exemplified by the fictional Skynet that features in the Terminator games and movies. In these stories this system has spread to millions of computers and seeks to exterminate humanity in order to fulfil the mandates of its original coding. In reality, the widespread availability of General AI is still many years away.

In the short term, Narrow AI can be expected to evolve into Broad AI, where a system will be able to support or perform multiple tasks by applying what is learnt in one domain to another. Broad AI will evolve to use multiple approaches to solve problems, for example by linking neural networks with other forms of reasoning. It will be able to work with limited amounts of data, or at least data which is not well tagged or curated; in the cyber-security space, for example, it could identify a threat pattern that has not been seen before.

What is ethics and why is it relevant to AI? The term is derived from the Greek word “ethos”, which can mean custom, habit, character or disposition. Ethics is a set of moral principles that govern behaviour or the conduct of an activity; it is also the branch of philosophy that studies these principles. Ethics is important to AI because of the potential for these systems to cause harm to individuals as well as to society in general. Ethical considerations can help to identify beneficial applications while avoiding harmful ones. In addition, new technologies are often viewed with suspicion and mistrust, which can unreasonably inhibit the development of technologies with significant beneficial potential. Ethics provides a framework that can be used to understand and address these concerns at an early stage.

Chris Rees identified 5 major ethical issues that need to be addressed in relation to AI:

  • Bias;
  • Explainability;
  • Harmlessness;
  • Economic Impact;
  • Responsibility.

Bias is a very current issue, with bias related to gender and race among the top concerns. AI systems are likely to be biased because people are biased, and AI systems amplify human capabilities. Social media provides an example of this kind of amplification, where uncontrolled channels provide the means to share beliefs that may be popular but have no foundation in fact – “fake news”. The training of AI systems depends upon data which may include inherent bias, even though this may not be intentional. The training process and the trainers may pass on their own unconscious bias to the systems they train. Allowing systems to train themselves can lead to unexpected outcomes, since the systems do not have the common sense to recognize mischievous behaviour. There are also reported examples of bias in facial recognition systems.
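To make the point about biased training data concrete, here is a minimal, purely illustrative sketch (hypothetical data and column names, not drawn from the meeting): before training a model, one can compare outcome rates across a protected attribute; a large gap is a warning sign that a model trained on this data will simply reproduce the historical bias.

```python
from collections import defaultdict

def selection_rates(records, group_key, label_key):
    """Share of positive outcomes per group - a simple check for skewed training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[label_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical hiring data that might be used to train a model:
training_data = [
    {"gender": "f", "hired": 1}, {"gender": "f", "hired": 0}, {"gender": "f", "hired": 0},
    {"gender": "m", "hired": 1}, {"gender": "m", "hired": 1}, {"gender": "m", "hired": 0},
]
print(selection_rates(training_data, "gender", "hired"))
# A gap between groups like the one printed here suggests the data carries bias
# that a trained model will reproduce and potentially amplify.
```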

Explainability – it is very important in many applications that AI systems can explain themselves. An explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify in a court of law why a decision was taken. While rule-based systems can provide a form of explanation based on the logical rules that were fired to arrive at a particular conclusion, neural networks are much more opaque. This poses a problem not only for explaining to the end user why a conclusion was reached, but also for the developer or trainer trying to understand what needs to be changed to correct the behaviour of the system.
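A toy sketch of that contrast (the rules and weights are invented for illustration): a rule engine can return the rules that fired as its explanation, whereas a trained model typically returns only a score with no inherent account of why.

```python
def rule_based_decision(applicant):
    """Rule engine: the fired rules double as the explanation."""
    fired = []
    if applicant["income"] < 20000:
        fired.append("R1: income below threshold")
    if applicant["defaults"] > 0:
        fired.append("R2: previous defaults on record")
    return ("reject" if fired else "approve"), fired

def opaque_model_decision(applicant):
    """Stand-in for a trained neural network: returns only a score, no reasons."""
    score = 0.3 * (applicant["income"] / 100000) - 0.4 * applicant["defaults"] + 0.5
    return ("approve" if score > 0.5 else "reject"), score

print(rule_based_decision({"income": 15000, "defaults": 1}))   # decision plus rules fired
print(opaque_model_decision({"income": 15000, "defaults": 1})) # decision plus a bare score
```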

Harmlessness – the three laws of robotics devised by Isaac Asimov in the 1940s, and subsequently extended to include a zeroth law, apply equally to AI systems. However, the use or abuse of the systems could breach these laws, and special care is needed to ensure that this does not happen. For example, the hacking of an autonomous car could turn it into a weapon, which emphasizes the need for strong inbuilt security controls. AI systems can be applied to cyber security to accelerate the development of both defence and offence; they could be used by cyber adversaries as well as by the good guys. It is therefore essential that this aspect is considered and that countermeasures are developed to cover the malicious use of this technology.

Economic impact – new technologies have both destructive and constructive impacts. In the short term, the use of AI is likely to lead to the destruction of certain kinds of jobs. However, in the long term it may lead to the creation of new forms of employment as well as unforeseen social benefits. While the short-term losses are concrete, the longer-term benefits are harder to see and may take generations to materialize. This makes it essential to create protection for those affected by the expected downsides, both to improve acceptability and to avoid social unrest.

Responsibility – AI is just an artefact, so if something bad happens, who is responsible morally and in law? The AI system itself cannot be prosecuted, but the designer, the manufacturer or the user could be. The designer may claim that the system was not manufactured to the design specification. The manufacturer may claim that the system was not used or maintained correctly (for example, patches not applied). This is an area where there will need to be debate, and it should take place before these systems cause actual harm.

In conclusion, AI systems are evolving, but they have not yet reached the state portrayed in popular fiction. However, the ethical aspects of this technology need to be considered, and this should be done sooner rather than later. In the same way that privacy by design has become an important consideration, we should now be working to develop “Ethical by Design”. GDPR allows people to take back control over how their data is collected and used. We need controls over AI before the problems arise.

Making Sense of the Top Cybersecurity Trends

With each passing year, the CISO’s job is not becoming any easier. As companies continue embracing the Digital Transformation, the growing complexity and openness of their IT infrastructures mean that the attack surface for hackers and malicious insiders is increasing as well. Combined with the recent political developments such as the rise of state-sponsored attacks, new surveillance laws, and harsh privacy regulations, security professionals now have way too many things on their hands that sometimes keep them awake at night. What’s more important – protecting your systems from ransomware or securing your cloud infrastructure? Should you invest in CEO fraud protection or work harder to prepare for a media fallout after a data breach? Decisions, decisions…

The skills gap problem is often discussed by the press, but journalists usually focus more on the lack of IT experts who are needed to operate complex and sprawling cybersecurity infrastructures. Alas, the related problem of making wrong strategic decisions about the technologies and tools to purchase and deploy is not mentioned as often, yet it is precisely the reason for the “cargo cult of cybersecurity”. Educating the public about modern IT security trends and technologies will be a big part of our upcoming Cybersecurity Leadership Summit, which will be held in Berlin this November, and last week my fellow analyst John Tolbert and I presented a sneak peek into this topic by dispelling several popular misconceptions.

After a lengthy discussion about choosing just five out of the multitude of topics we’ll be covering at the summit, we came up with a list of things that, on one hand, are generating enough buzz in the media and vendors’ marketing materials and, on the other hand, are actually relevant and complex enough to warrant a need to dig into them. That’s why we didn’t mention ransomware, for example, which is actually declining along with the devaluation of popular cryptocurrencies…

Artificial Intelligence in Cybersecurity

Perhaps the biggest myth about Artificial Intelligence / Machine Learning (which, incidentally, are not the same, even though both terms are often used interchangeably) is that it is a cutting-edge technology that has arrived to solve all our cybersecurity woes. This could not be further from the truth: the origins of machine learning predate digital computers. Neural networks were invented back in the 1950s, and some of their applications are just as old. It is only the recent surge in available computing power, thanks to commodity hardware and cloud computing, that has driven the triumphant entry of machine learning into so many areas of our daily lives.

In his recent blog post, our fellow analyst Mike Small has provided a concise overview of various terms and methods related to AI and ML. To his post, I can only add that applications of these methods to cybersecurity are still very much a field of academic research that is yet to mature into advanced off-the-shelf security solutions. Most products that are currently sold with “AI/ML inside” stickers on their boxes are in reality limited to the most basic ML methods that enable faster pattern or anomaly detection in log files. Only some of the more advanced ones offer higher-level functionality like actionable recommendations and improved forensic analysis. Finally, true cognitive technologies like natural language processing and AI-powered reasoning are just beginning to be adapted towards cybersecurity applications by a few visionary vendors.
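To illustrate what "basic ML for pattern or anomaly detection in log files" typically amounts to, here is a minimal sketch using scikit-learn's IsolationForest. The features, numbers and threshold are invented for illustration and do not represent any specific product's method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one host's behaviour in a time window:
# [events_per_minute, distinct_destinations, bytes_out_mb, failed_logins]
baseline = np.random.default_rng(0).normal(
    loc=[60, 5, 2, 1], scale=[10, 2, 1, 1], size=(500, 4)
)

# Unsupervised anomaly detector fitted on "normal" activity only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspect = np.array([[300, 120, 450, 40]])   # e.g. a possible exfiltration burst
print(detector.predict(suspect))            # -1 flags the sample as anomalous
```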

It’s worth stressing, however, that such solutions will probably never completely replace human analysts, if only because of the numerous legal and ethical problems associated with decisions made by an “autonomous AI”. If anything, it is the cybercriminals without moral inhibitions that we will see among the earliest adopters…

Zero Trust Security

The Zero Trust paradigm is rapidly gaining popularity as a modern alternative to the traditional perimeter-based security, which can no longer provide sufficient protection against external and internal advanced cyberthreats. An IT infrastructure designed around this model treats every user, application or data source as untrusted and enforces strict security, access control, and comprehensive auditing to ensure visibility and accountability of all user activities.

However, just like with any other hyped trend, there is a lot of confusion about what Zero Trust actually is. Fueled by massive marketing campaigns from vendors trying to get into this lucrative new market, a popular misconception is that Zero Trust is some kind of “next-generation perimeter” that is supposed to replace the outdated firewalls and VPNs of old.

Again, this could not be further from the truth. Zero Trust is above all a new architectural model, a combination of multiple processes and technologies. And although adopting a Zero Trust approach promises a massive reduction of the attack surface, reduced IT complexity, and productivity improvements, there is definitely no off-the-shelf solution that magically transforms your existing IT infrastructure.

Going Zero Trust always starts with a strategy, which must be heterogeneous and hybrid by design. It involves discovering, classifying and protecting sensitive data; redefining identities for each user and device; establishing and enforcing strict access controls to each resource; and finally, continuously monitoring and auditing every activity. And remember: you should trust no one. Especially not vendor marketing!
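A minimal sketch of the "never trust, always verify" idea behind per-request access decisions; the attributes, rules and thresholds below are hypothetical placeholders, not a reference architecture.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool
    mfa_passed: bool
    resource_sensitivity: str   # "public", "internal", "confidential"
    network: str                # "corporate", "internet"

def authorize(req: AccessRequest) -> bool:
    """No implicit trust: every check must pass; network location grants nothing."""
    if not req.device_compliant:
        return False
    if req.resource_sensitivity == "confidential" and not req.mfa_passed:
        return False
    # Note: req.network == "corporate" does not bypass any of the checks above.
    return True

print(authorize(AccessRequest("alice", True, True, "confidential", "internet")))   # True
print(authorize(AccessRequest("bob", True, False, "confidential", "corporate")))   # False
```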

Insider Threat Management

Ten years ago, the riskiest users in every company were undoubtedly the system administrators. Protecting the infrastructure and sensitive data from them potentially misusing their privileged access was the top priority. Nowadays, the situation has changed dramatically: every business user with access to sensitive corporate data can, either inadvertently or with malicious intent, cause substantial damage to your business by leaking confidential information, disrupting access to a critical system or simply draining your bank account. The most privileged users in that regard are the CEO and CFO, and the number of new cyberattacks targeting them specifically is on the rise.

Studies show that cyberattacks focusing on infrastructure are becoming too complex and costly for hackers, so they are turning to social engineering methods instead. One carefully crafted phishing mail can thus cause more damage than an APT attack that takes months of planning… And the best part is that the victims do all the work themselves!

Unfortunately, traditional security tools and even specialized Privileged Access Management solutions aren’t suitable for solving this new challenge. Again, the only viable strategy is to combine changes in existing business processes (especially those related to financial transactions) and a multi-layered deployment of different security technologies ranging from endpoint detection and response to email security to data loss prevention and even brand reputation management.

Continuous Authentication

Passwords are dead, biometric methods are easily circumvented, account hijacking is rampant… How can we still be sure that users are who they claim to be when they access a system or an application, from anywhere in the world and from a large variety of platforms?

One of the approaches that has been growing in popularity in recent years is adaptive authentication – the process of gathering additional context information about users, their devices and other environmental factors and evaluating it according to risk-based policies. Such solutions usually combine multiple strong authentication methods and present the most appropriate challenge to the user based on their current risk level. However, even this quite complex approach is often not sufficient to combat advanced cyberattacks.

The continuous authentication paradigm takes this to the next level. By combining dynamic context-based authentication with real-time behavioral biometrics, it turns authentication from a single event into a seamless ongoing process and thus promises to reduce the impact of a credential compromise. This way, the user’s risk score is not calculated just once during initial authentication but is constantly reevaluated over time, changing as the user moves into a different environment or reacting to anomalies in their behavior.
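The following sketch illustrates the idea of re-scoring risk on every event rather than only at login; the signals, weights and thresholds are invented for illustration and would differ in any real product.

```python
def risk_score(signals):
    """Combine context and behavioral signals into a single risk value."""
    score = 0.0
    score += 0.4 if signals["new_location"] else 0.0
    score += 0.3 if signals["typing_pattern_mismatch"] else 0.0
    score += 0.2 if signals["unusual_hours"] else 0.0
    score += 0.3 if signals["impossible_travel"] else 0.0
    return score

def on_user_event(signals):
    """Called on every user action, not just at login."""
    score = risk_score(signals)
    if score >= 0.7:
        return "terminate session"
    if score >= 0.4:
        return "step-up authentication"
    return "allow"

print(on_user_event({"new_location": True, "typing_pattern_mismatch": True,
                     "unusual_hours": False, "impossible_travel": False}))
```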

Unfortunately, this approach requires major changes in the way applications are designed, and modernizing legacy systems can be a major challenge. Another problem is the perceived invasiveness of continuous authentication – many users do not feel comfortable being constantly monitored, and in many cases such monitoring may even be illegal. Thus, although promising solutions are starting to appear on the market, continuous authentication is still far from mainstream adoption.

Embedding a Cybersecurity Culture

Perhaps the biggest myth about cybersecurity is that it takes care of itself. Unfortunately, the history of the recent large-scale cybersecurity incidents clearly demonstrates that even the largest companies with massive budgets for security tools are not immune to attacks. Also, many employees and whole business units often see security as a nuisance that hurts their productivity and would sometimes go as far as to actively sabotage it, maintaining their own “shadow IT” tools and services.

However, the most common cause of security breaches is simple negligence stemming primarily from insufficient awareness, lack of established processes and general reluctance to be a part of corporate cybersecurity culture. Unfortunately, there is no technology that can fix these problems, and companies must invest more resources into employee training, teaching them the cybersecurity hygiene basics, explaining the risks of handling personal information and preparing them for the inevitable response to a security incident.

Even more important is for CISOs and other high-level executives to continuously improve their own awareness of the latest trends and developments in cybersecurity. And what better way to do that than meeting the leading experts at KuppingerCole’s Cybersecurity Leadership Summit next month? See you in Berlin!

Artificial Intelligence and Cyber Security

As organizations go through digital transformation, the cyber challenges they face become more important. Their IT systems and applications become more critical and at the same time more open. The recent data breach suffered by British Airways illustrates the sophistication of the cyber adversaries and the difficulties organizations face in preventing, detecting, and responding to these challenges. One approach that is gaining ground is the application of AI technologies to cyber security, and at an event in London on September 24th, IBM described how IBM Watson is being integrated with other IBM security products to meet these challenges.

The current approaches to cyber defence include multiple layers of protection, such as firewalls and identity and access management as well as event monitoring (SIEM). While these remain necessary, they have not significantly reduced the time to detect breaches. For example, the IBM-sponsored 2018 Cost of a Data Breach Study by Ponemon showed that the mean time for organizations to identify a breach was 197 days. This figure has hardly improved over many years. The reasons for this long delay are many and include the complexity of the IT infrastructure, the sophistication of the techniques used by cyber adversaries to hide their activities, and the sheer volume of data available.

So, what is AI and how can it help to mitigate this problem?

AI is a generic term that covers a range of technologies. In general, the term AI refers to systems that “simulate thought processes to assist in finding solutions to complex problems through augmentation and enhancement of human capabilities”. KuppingerCole has analysed in detail what this really means in practice, and this is summarized in the following slide from the EIC 2017 Opening Keynote by Martin Kuppinger.

At the lower layer, improved algorithms enable the transformation of Big Data into “Smart Information” (see KuppingerCole Advisory Note: Big Data Security, Governance, Stewardship - 72565). This is augmented by Machine Learning, where human reinforcement is used to tune the algorithms to identify those patterns that are of interest and to ignore those that are not. Cognitive technologies add an important element to this mix through their capability to include speech, vision and unstructured data in the analysis. Today, this represents the state of the art for the practical application of AI to cyber security.

The challenges of AI at the state of the art are threefold:

  • The application of common sense – a human applies a very wide context to decision making whereas AI systems tend to be very narrowly focussed and so sometimes reach what the human would consider to be a stupid conclusion.
  • Explanation – of how the conclusions were reached by the AI system to demonstrate that they are valid and can be trusted.
  • Responsibility – for actions taken based on the conclusions from the system.

Cyber security products collect vast amounts of data – the cyber security analyst is literally drowning in data. The challenge is to find the so-called IoCs (Indicators of Compromise) that show the existence of a real threat amongst this enormous amount of data. The problem is not just to find what is abnormal, but to filter out the many false positives that obscure the real threats.

There are several vendors that have incorporated Machine Learning (ML) systems into their products to tune the identification of important anomalies. This is useful to reduce false positives, but it is not enough. To be really useful to a security analyst, an abnormal pattern needs to be related to known or emerging threats. While there have been several attempts to standardize the way information on threats is described and shared, most of this information is still held in unstructured form in documents, blogs and Twitter feeds. It is essential to take account of these sources.
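As an illustration of that enrichment step, here is a minimal sketch (the feed entries, hosts and indicators are hypothetical) that only escalates anomalies which can be related to known threat intelligence, treating the rest as candidates for false-positive review.

```python
# Hypothetical threat-intelligence feed: indicator -> context.
threat_intel = {
    "185.220.101.7": "known C2 infrastructure (public feed)",
    "evil-update.example.com": "phishing campaign reported 2018-09",
}

# Hypothetical anomalies produced by an ML-based detector.
anomalies = [
    {"host": "srv-042", "indicator": "185.220.101.7", "detail": "outbound beaconing"},
    {"host": "wks-113", "indicator": "10.0.3.17", "detail": "large internal transfer"},
]

for a in anomalies:
    context = threat_intel.get(a["indicator"])
    if context:
        print(f"ESCALATE {a['host']}: {a['detail']} -> {context}")
    else:
        print(f"REVIEW   {a['host']}: anomaly without matching intel (possible false positive)")
```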

This is where IBM QRadar Advisor with Watson is different. A Machine Learning system is only as good as its training – training is the key to its effectiveness. IBM says that it has been trained through the ingestion of over 10 billion pieces of structured data and 1.24 million unstructured documents to assist with the investigation of security incidents. This training involved IBM X-Force experts as well as IBM customers. Because of this training, it can now identify patterns that represent potential threats and provide links to the relevant sources that were used to reach these conclusions. However, while this helps security analysts do their job more efficiently and more effectively, it does not yet replace the human.

Organizations now need to assume that cyber adversaries have access to their systems and to constantly monitor for this activity in a way that enables them to take action before damage is done. AI offers great potential to help with this challenge and to evolve to help organizations improve their cyber security posture through intelligent code analysis and configuration scanning as well as activity monitoring. For more information on the future of cyber security, attend KuppingerCole’s Cybersecurity Leadership Summit 2018 Europe.

Consumer Identity World (CIW) USA 2018 - Report

Fall is Consumer Identity Season at KuppingerCole, just in time for holiday shopping. Last week we kicked off our 2018 tour in Seattle. The number of attendees and sponsors was well up over last year, indicating the significant increase in interest in the Consumer Identity and Access Management (CIAM) subject. CIAM is one of the fastest growing market segments under IAM, and with good reason. Companies that deploy CIAM solutions find that they can connect with their consumers better, delivering a more positive experience, and generating additional revenue. CIAM can also aid with regulatory compliance, such as those for privacy (GDPR, CCPA, etc.) and finance (AML, KYC, PSD2, etc.).

Some of the big topics last week were authentication methods for CIAM, particularly biometrics, GDPR and privacy regulations around the world, consumer preferences for identity, and blockchain identity. 

CIAM requires thinking “outside-in” about authentication. The FIDO Alliance held a workshop on Wednesday. FIDO was a particularly relevant topic for CIW, as there were many discussions on the latest authentication methods and techniques. The turnout was excellent, and attendees heard from some of the leaders and active members of the organization. I believe that FIDO will play a key role in modernizing authentication technology, especially for consumer-facing applications. FIDO specifications have been maturing rapidly: version 2.0, with the W3C WebAuthn and CTAP protocols, is exactly what has been needed to speed adoption. Expect to see FIDO deployments increasing as the major browsers fully support the standard. We can also expect to see higher consumer satisfaction as FIDO rolls out widely, due to ease of use and better security and privacy. For an overview of how FIDO works, see Alex Takakuwa’s presentation.

Mobile biometric solutions are enjoying popularity, as many companies want to find out how to reduce friction for consumers in the authentication process. We considered risk-adaptive and continuous authentication as means to right-size authentication for specific use cases, such as finance and health care.

I noted that the “C” in CIAM can also apply to “citizens” as well as customers and consumers. State and local government agencies are exploring Government-to-Citizen (G2C) identity paradigms, and in some cases CIAM solutions are a good fit.

Privacy is an ever-present concern for consumer-facing systems. GDPR is in effect in Europe, and companies around the world must now abide by it when processing personal data of European persons. Tim Maiorino gave an update on the state of GDPR. The subject of California’s upcoming privacy law arose in some panels. Will the California model be adopted across the US? Probably not at the federal level, at least not in the foreseeable future. However, other states are likely to enact similar privacy laws, leading to discrepancies and possible difficulties in complying with similar but different regulations. We learned from Marisa Rogers that there is a call for participation for an ISO group on privacy by design for consumer services.

There were several speakers and panels addressing consumer wants and preferences with regard to CIAM. We had a few sessions on blockchain and identity. Didier Collin de Causabon gave a good example of how blockchain may be able to aid with KYC. Sarah Squire, co-founder and vice-chair of IDPro, gave a great talk on the role of identity professionals in business. Her keynote also contains a lot of practical advice on IAM/CIAM implementations and where we as an industry can go from here.

Our European CIW event will take place on October 29-31 in Amsterdam, followed by our Asia-Pacific CIW in Singapore on November 20-22.

We are already actively planning on CIW for 2019. Join us at the Motif Hotel in Seattle next September 25-27 for the next edition.

Thanks to all of our speakers and panelists for sharing their knowledge. Also thanks to our event sponsors Gigya – SAP Customer Data Cloud, WSO2, Radiant Logic, Nok Nok Labs, Trusted Key, iWelcome, Auth0 and Uniken.

 


Intelligent Governance Beyond Auditors and Regulatory Requirements

There can be many reasons why a company launches an initiative to improve its information security. However, there is one specific reason that repeats itself time and again: "Because the auditors say so, we have to..."

The reality, and the logic resulting from it, has often been as follows: the enforcement of regulatory or legal requirements includes sanctions for non-compliance, and these had to be avoided. This led to a box-ticking approach to regulatory compliance. If this was done with the absolute minimum of cost and effort needed to avoid non-compliance and thus the fine, the "most advantageous" approach for the company was considered found. This could not and cannot be regarded as a well-thought-out strategic view of governance and compliance.

But over time the requirements change; they become more numerous and more specific. The latest example from the insurance industry is the document "Versicherungsaufsichtliche Anforderungen an die IT" (VAIT), finalised in July 2018 and published by BaFin (Bundesanstalt für Finanzdienstleistungsaufsicht - the German Federal Financial Supervisory Authority), which provides insurance companies with more tangible requirements for implementing their business processes with IT.

The similarity of the name to BAIT, the banking supervisory requirements for IT, is by no means a coincidence: both documents originate from BaFin and also show strong parallels in terms of content. Both documents therefore represent challenges that the affected companies must meet in an appropriate, transparent and well-documented manner. And since these are only refinements, they apply immediately, because the underlying regulations they refine are already in force.

However, it is not only the external requirements that are changing. Companies also understand that IT today is a central component of their core business - or is the core business itself. Backup, contingency management, security, audit and governance are therefore increasingly demanded by a growing number of internal stakeholders in order to maintain and improve the basis of the business. IT risk management means that meaningful key figures such as "key risk indicators" lead to clear requirements for acceptable downtime and recovery times, but also to statements on SoD, privilege management, rights assignment and access governance.

It is also clear that, with BAIT having been published somewhat earlier, banks may have a certain head start in implementing effective measures. Conversely, it can make great sense for insurance companies to benefit, directly or through consolidated best practices, from the experience of this closely related industry.

Proactive companies that demonstrably have to meet a large number of requirements (both external and internal) through policies, controls, documentation and reporting will want to cover VAIT as part of an efficient "control once, comply to many" strategy. And with the much more specific (but still interpretable) requirements of VAIT, some insurers will have a concrete need for action, be it in establishing a reliable view of the status quo or in identifying and executing concrete implementation projects.

Put as a challenge: the VAIT are published on the Internet and openly available to everyone, with an English version expected soon. Truly proactive CISOs in companies beyond the financial sector will take them as a starting point and as a benchmark for the quality of their own, appropriate security and compliance - not because of concrete regulatory requirements, but to protect their own company.

Managing the Hybrid Multi Cloud

The primary factor that most organizations consider when choosing a cloud service is how well the service meets their functional needs.  However, this must be balanced against the non-functional aspects such as compliance, security and manageability. These aspects are increasingly becoming a challenge in the hybrid multi-cloud IT environment found in most organizations. This point was emphasized by Virtustream during their briefing in London on September 6th, 2018. 

Virtustream was founded in 2009 with a focus on providing cloud services for mission-critical applications like SAP. To achieve this, Virtustream developed its xStream cloud management platform to meet the requirements of complex production applications in the private, public and hybrid cloud. This uses patented xStream cloud resource management technology (μVM) to deliver assured SLA levels for business-critical applications and services. Through a series of acquisitions, Virtustream is now a Dell Technologies business.

The hybrid multi-cloud IT environment has made the challenges of governance, compliance and security even more complex. There is currently no single complete solution to this problem on the market.

Typically, organizations use multiple cloud services including office productivity tools from one CSP (Cloud Service Provider), a CRM system from another CSP, and a test and development service from yet another one. At the same time, legacy applications and business critical data may be retained on-premises or in managed hosting. This hybrid multi-cloud environment creates significant challenges relating to the governance, management, security and compliance of the whole system.


What is needed is a consistent approach with common processes supported by a single platform that provides all the necessary functions across all the various components involved in delivering all the services.   

Most CSPs offer their own proprietary management portal, which may in some cases extend to cover some on-premises use cases. This makes it important, when choosing a cloud service, to evaluate how the needs for management, security and compliance will be integrated with the existing processes and components that make up the enterprise IT architecture. The hybrid IT service model requires an overall IT governance approach, as described in KuppingerCole Advisory Note: Security Organization Governance and the Cloud - 72564.

An added complexity is that the division of responsibility for the different layers of the service depends upon how the service is delivered. There are five layers (see the sketch after this list):

  • The lowest layer is the physical service infrastructure, which includes the data center, the physical network, the physical servers and the storage devices. In the case of IaaS this is the responsibility of the CSP. 
  • Above this sit the operating systems, basic storage services and the logical network. For IaaS, the management of this layer is the responsibility of the customer. 
  • The next plane includes the tools and middleware needed to build and deploy business applications. For PaaS (Platform as a Service) these are the responsibility of the CSP. 
  • Above the middleware are the business applications and for SaaS (Software as a Service) these are the responsibility of the CSP. 
  • The highest plane is the governance of business data and control of access to the data and applications. This is always the responsibility of the customer.
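To make the shared-responsibility split easier to scan, the five layers above can be expressed as a simple lookup table. The assignments explicitly stated in the list are reflected here; the remaining cells follow the usual shared-responsibility pattern and should be read as an illustrative assumption, not a statement from the original text.

```python
# Responsibility per layer and delivery model ("csp" or "customer").
# Cells not explicitly stated in the list above follow the usual pattern (assumption).
RESPONSIBILITY = {
    # layer:                          IaaS        PaaS        SaaS
    "physical infrastructure":      ("csp",      "csp",      "csp"),
    "OS, storage, logical network": ("customer", "csp",      "csp"),
    "tools and middleware":         ("customer", "csp",      "csp"),
    "business applications":        ("customer", "customer", "csp"),
    "data governance and access":   ("customer", "customer", "customer"),
}
MODEL_INDEX = {"IaaS": 0, "PaaS": 1, "SaaS": 2}

def who_is_responsible(layer, model):
    return RESPONSIBILITY[layer][MODEL_INDEX[model]]

print(who_is_responsible("business applications", "PaaS"))       # customer
print(who_is_responsible("data governance and access", "SaaS"))  # customer
```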

An ideal solution would be a common management platform that covers all the cloud and on-premises services and components. However, most cloud services only offer a proprietary management portal that covers the management of their service.    

So, does Virtustream provide a solution that completely meets these requirements? The answer is: Not yet.  However, there are two important points in its favour:

  • Firstly, Virtustream have highlighted that the problem exists. Acceptance is the first step on the road to providing a solution.
  • Secondly, Virtustream is part of Dell, and Dell also owns VMware. VMware provides a solution to this problem, but only where VMware is used across the different IT service delivery models. VMware is used by Virtustream and is also supported by several other CSPs.

In conclusion, the hybrid multi-cloud environment presents a complex management challenge, particularly in the areas of security and compliance. There are five layers, each with six dimensions, that need to be managed, and the responsibilities are shared between the CSP and the customer. It is vital that organizations consider this when selecting cloud services and that they implement a governance-based approach. Look for the emergence of tools to help with this challenge. There was a workshop on this subject at KuppingerCole’s EIC earlier this year.

Decentralized Identity 101: What It Is and Why It Matters

Guest Author: Vinny Lingham, CEO, Civic Technologies

Bitcoin. Blockchain. Crypto. Decentralization. Tokens. A lot of buzzwords have emerged alongside the rise of blockchain technology. Yet, there is often a lack of context about what those terms actually mean and the impact they will have.

Decentralized identity re-envisions the way people access, control, and share their personal information. It gives people power back over their identity.

Current identity challenges all tie back to the way we collect and store data. The world has evolved from floppy disks to the Cloud, but now, every single time that data is collected, processed, or stored, security and privacy concerns emerge. With the rise of the digital economy, consumers have unintentionally turned banks, governments, and stores into identity management organizations, responsible for the storage and protection of an unprecedented amount of personal data. Unfortunately, as recent hacks have shown, not all of them were ready to deal with this new role.

Decentralized identity puts that power and responsibility back in the hands of the individual, giving them the ability to control and protect their own personal information. This concept is made possible by the decentralized nature of blockchain and the trust created by consensus algorithms.

How Blockchain Creates Trust

The most prominent blockchain application to date is Bitcoin, a technology that emerged following the U.S. financial crisis of 2008 when trust in institutions was at an all-time low. Blockchain technology, specifically the public blockchain, has several unique characteristics that solve problems of trust and make it a great fit for identity solutions.

First, blockchain is immutable, or unchangeable. Blockchain transactions are processed by a network: computers work together to confirm a transaction, and every computer in the network must eventually confirm every transaction in the chain. These transactions are processed in blocks, and each block is linked to the preceding block. This structure makes it practically impossible to go back and alter a transaction. Additionally, blockchain is transparent: every computer in the network has a record of every transaction that occurred.

Decentralization is the essence of blockchain: no single party controls the data, so there is no single point of failure and no one who can override a transaction. Combined with the fact that transactions are practically impossible to alter, this is how blockchain builds trust: when data cannot be modified and can be independently verified, it can be trusted.
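A toy sketch of the hash-linking just described (not a real blockchain implementation): each block records the hash of the previous block, so altering an old transaction invalidates the recorded hashes and is immediately detectable.

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Each block commits to its predecessor's hash and its own transactions."""
    return hashlib.sha256((prev_hash + "|".join(transactions)).encode()).hexdigest()

chain, prev = [], "0" * 64                                  # genesis reference
for txs in (["alice->bob:5"], ["bob->carol:2"], ["carol->dave:1"]):
    h = block_hash(prev, txs)
    chain.append({"prev": prev, "txs": txs, "hash": h})
    prev = h

chain[0]["txs"] = ["alice->mallory:5"]                      # tamper with history
valid = all(block_hash(b["prev"], b["txs"]) == b["hash"] for b in chain)
print(valid)  # False - the altered block no longer matches the recorded hash
```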

How Blockchain Helps Decentralized Identity

Currently, there is a presumption that knowledge of information is identity. If a person knows a social security number or password, they are presumed to be the person who that information represents. And if a person knows your personal information, they can impersonate you.

Using blockchain technology to decentralize identity is about digital validation and keys - for example, a digital wallet with cryptographic keys that cannot be recreated. You must have physical access to a device to validate identity. With a decentralized identity system, a remote hacker might have access to pieces of personal information, but proving an actual identity would require physical possession of that person’s device. Decentralized identity literally puts the power back in the hands of the people.
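A simplified sketch of that possession-based proof, using the Python `cryptography` package: the verifier sends a random challenge, and only the holder of the device-bound private key can produce a valid signature. Key registration, wallets and any DID plumbing are omitted; this is an illustration of the principle, not of any specific product.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the device: key pair generated once; the private key never leaves the device.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()        # registered with the verifier beforehand

# Verifier issues a fresh random challenge; only the device holder can sign it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# Knowing someone's personal data is not enough; verification requires this signature.
public_key.verify(signature, challenge)     # raises InvalidSignature if forged
print("identity proven by possession of the device-bound key")
```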

Why It Matters

In 2017, Equifax suffered one of the worst data breaches in corporate history, exposing the personal information of over 147 million people, including Social Security numbers, dates of birth, home addresses, driver’s license numbers, and credit card numbers.

In 2018, the Cambridge Analytica scandal about the misuse of user data has continued to unfold, as the F.B.I. and the Justice Department investigate Facebook for failing to safeguard 87 million user profiles.

Equifax and Cambridge Analytica are two prime examples of how current systems for sharing and storing personal information have proven to be not as safe, secure, or trustworthy as previously thought.

And everyone feels this impact.

Governments are implementing more stringent laws and regulations for consumer protection. In May, the General Data Protection Regulation (GDPR), a standard for data collection and storage, went into effect. In July, California passed the California Consumer Privacy Act, enacting similar standards. And these are probably just the first in a wave of consumer protection and privacy policies that will come to life.

Consumers are concerned as well. In a recent Deloitte study, 81 percent of U.S. respondents feel they have lost control over the way their personal data are collected and used. 

The ability to prove you are who you say you are is critical to engaging with the world and being a part of the economy. Decentralized identity gives that control back to people. 

Get to know more about Blockchain and listen to my Keynote "Practical Examples of Decentralized ID's in the Real World" at the Consumer Identity World USA in Seattle in September.


Entrust Datacard Acquisition

Entrust Datacard, founded in 1969 and headquartered in Minnesota, announced today that it is making a strategic investment in CensorNet and acquiring the SMS Passcode business from CensorNet (originally a Danish company). Entrust Datacard is a strong brand in IAM, with card and certificate issuance, and financial and government sector business.

CensorNet was founded in 2007 in the UK. Their original product was a secure web gateway; it now includes a multi-mode, in-line and API-based CASB service. It also has an email security service, which utilizes machine learning algorithms to scan email for potentially malicious payloads. Entrust Datacard already has substantial capabilities in the adaptive and multi-factor authentication areas, and the SMS Passcode product line will add to that. With this investment and acquisition, Entrust Datacard plans to move beyond digital transformation to realize continuous authentication and enhance its e-government offerings.

The results of the acquisition will be reflected in product roadmaps, likely starting in 2019. Entrust Datacard products and services will continue to handle initial authentication, and CensorNet’s capabilities will add user activity monitoring through the CASB piece. The integration of identity-linked event data from CensorNet CASB will help security analysts know, for example, which files users are moving around, and who and what users are emailing. This functionality will help administrators reduce the possibility of fraud and data loss.

Broadcom acquires CA Technologies in its broadest-ever shift of acquisition strategy

Broadcom, after its attempted acquisition of Qualcomm was blocked earlier this year by the Trump administration on national security grounds, has decided to acquire CA Technologies, marking one of the greatest shifts in acquisition strategy from a semiconductor business to an IT software and solutions business. The proposed Qualcomm acquisition by the then Singapore-based Broadcom raised the likelihood of several 5G patents passing beyond US control.

The CA Technologies acquisition still puts over 1,200 patents and mission-critical software deployed by CA Technologies at US government sites into Broadcom’s hands, and yet it appears to be getting a green light from the Trump administration. Although it negates the basics of acquisition, with absolutely no or very little commercial synergy, the move fully satisfies Broadcom’s objective of acquiring ‘established mission-critical technology businesses’ and could be considered one of the most ambitious acquisitions of this size and scale in recent times. Not to forget Intel’s acquisition of McAfee, which did not work out well for the company due to the limited synergies between McAfee’s endpoint protection business and Intel’s core hardware strategy, and finally resulted in the divestment of McAfee after seven years of a rough marriage.

CA Technologies itself is built on a series of smaller acquisitions made in almost every segment of IT software – ranging from IT operations management, application performance, mainframes, DevOps, IT security and automation to analytics. CA Technologies has, however, had a good overall success rate in driving product and roadmap integrations to achieve the expected synergies from its past acquisitions. Broadcom must consider using some of the CA management’s expertise, gathered over more than a decade, to drive this acquisition towards a successful business integration. There is no similar business unit at Broadcom that delivers IT software or services, which should make it even easier for CA Technologies to continue operating under the larger umbrella without the need for any immediate shift in operating strategy.

The dissimilarity of the businesses and customer bases will offer only limited cross-sell opportunities from this acquisition in the short to mid-term. However, CA Technologies’ recurring, profitable bookings are set to bring stability through increased future cash flow for Broadcom in the short term, helping to accommodate the fluctuations expected in its business from the uncertainties arising from the recent (though still proposed and under review) US trade tariffs against semiconductor goods manufactured in China.

Besides mainframes, which remain its majority revenue stream, and some other areas such as IT project & portfolio management, CA Technologies has invested significantly in building its IT security portfolio over the last decade, starting with Netegrity and continuing with IDFocus, Eurekify, Arcot, Layer 7, Xceedium, IdMLogic and Veracode – all within the Identity and Access Management (IAM) domain alone. CA’s aggressive acquisition strategy has kept innovation outside the company’s door for a long time, and now, with Broadcom’s acquisition of CA Technologies, there is little hope that innovation will be the key to revenue generation for the new entity anytime in the near future. With numerous acquisitions, CA’s Identity and Access Management portfolio has taken a bumpy ride over the past decade, but despite all the challenges and long-term ramifications, its excellent IAM product and engineering team has ensured a seamless absorption of acquired products into its IAM and broader security software portfolio.

While uncertainties will continue to loom over the acquisition’s objectives and the alignment of synergies for some more time, it will be interesting to see how Broadcom decides to nurture CA’s enterprise software and services business and where that will lead its still very well-positioned IAM product line.
