KuppingerCole Blog

How to Implement IT Governance Requirements Relating to Information Security and IT for Insurers and Beyond: VAIT Now Available in English

A short update blog post:

Earlier this year, in September, I wrote a blog post about the VAIT. This BaFin document explains the challenges for IT in insurance companies much more clearly than the original regulatory documents. The VAIT ("Versicherungsaufsichtliche Anforderungen an die IT", i.e. supervisory requirements for IT in insurance undertakings) maps BaFin's requirements to more tangible guidance.

A few days ago, the English translation of this document was made available. Its announcement page describes it as follows: "The VAIT aims at clarifying BaFin's expectations with regard to governance requirements relating to information security and information technology. These requirements are a core supervisory component in the insurance and occupational pension sector in Germany."

This makes the audience of potential readers for this helpful guide much larger, and my call for intelligent governance across a multitude of industries all the more relevant: "Truly proactive CISOs in companies beyond the financial sector will take these as a starting point and challenge to the quality of their own, appropriate security and compliance. Beyond concrete regulatory requirements, but to secure their own company."

AWS re:Invent Impressions

This year’s flagship conference for AWS – the re:Invent 2018 in Las Vegas – has just officially wrapped. Continuing the tradition, it has been bigger than ever – with more than 50 thousand attendees, over 2000 sessions, workshops, hackathons, certification courses, a huge expo area, and, of course, tons of entertainment programs. Kudos to the organizers for pulling off an event of this scale – I can only imagine the amount of effort that went into it.

I have to confess, however: maybe it’s just me getting older and grumpier, but at times I couldn’t stop thinking that this event is a bit too big for its own good. With the premises spanning no less than 7 resorts along the Las Vegas Boulevard, the simple task of getting to your next session becomes a time-consuming challenge. I have no doubt however that most of the attendees have enjoyed the event program immensely because application development is supposed to be fun – at least according to the developers themselves!

Apparently, this approach is deeply rooted in the AWS corporate culture as well – their core target audience is still “the builders” – people who already have the goals, skills and desire to create new cloud-native apps and services and the only thing they need are the necessary tools and building blocks. And that’s exactly what the company is striving to offer – the broadest choice of tools and technologies at the most competitive prices.

Looking at the business stats, it’s obvious that the company remains quite a distant leader when it comes to Infrastructure-as-a-Service (IaaS) – with such a huge scale advantage over its competitors, the company can still outpace them for years even if its relative growth slows down. Although there have been discussions in the past about whether AWS has a substantial Platform-as-a-Service (PaaS) offering, these can easily be dismissed now – in a sense, “traditional PaaS” is no longer that relevant, giving way to modern technology stacks like serverless and containers. Both are strategic for AWS, and, with the latest announcements about expanding the footprint of the Lambda platform, one can say that the competition in the “next-gen PaaS” field will be even tougher.

Perhaps the only part of the cloud playing field where AWS continues to be notoriously absent is Software-as-a-Service (SaaS) and more specifically enterprise application suites. The company’s own rare forays into this field are unimpressive at best, and the general strategy seems to be “leave it to the partners and let them run their services on AWS infrastructure”. In a way, this reflects the approach Microsoft has been following for decades with Windows. Whether this approach is sustainable in the long term or whether cloud service providers should rather look at Apple as their inspiration – that’s a topic that can be debated for hours… In my opinion, this situation leaves a substantial opening in the cloud market for competitors to catch up and overtake the current leader eventually.

The window of opportunity is already shrinking, however, as AWS is aiming at expanding into new markets and doing just about anything technology-related better (or at least bigger and cheaper) than their competitors, as the astonishing number of new product and service announcements during the event shows. They span from the low-level infrastructure improvements (faster hardware, better elasticity, further cost reductions) to catching up with competitors on things like managed Blockchain to all-new almost science fiction-looking stuff like design of robots and satellite management.

However, to me as an analyst, the most important change in the company’s strategy has been their somewhat belated realization that not all their users are “passionate builders”. And even those who are, are not necessarily considering the wide choice of available tools a blessing. Instead, many are looking at the cloud as a means to solve their business problems and the first thing they need is guidance. And then security and compliance. Services like AWS Well-Architected Tool, AWS Control Tower and AWS Security Hub are the first step in the right direction.

Still, the star topic of the whole event was undoubtedly AI/ML. With a massive number of new announcements, AWS clearly indicates that its goal is to make machine learning accessible not just for hardcore experts and data scientists, but to everyone, no ML expertise required. With their own machine learning inference chips along with the most powerful hardware to run model training and a number of significant optimizations in frameworks running on them, AWS promises to become the platform for the most cutting-edge ML applications. However, on the other end, the ability to package machine learning models and offer them on the AWS Marketplace almost as commodity products makes these applications accessible to a much broader audience – another step towards “AI-as-a-Service”.

Another major announcement is the company’s answer to their competitors’ hybrid cloud developments – AWS Outposts. Here, the company’s approach is radically different from offerings like Microsoft’s Azure Stack or Oracle Cloud at Customer: AWS has decided not to try and package their whole public cloud “in a box” for on-premises applications. Instead, only key services like storage and compute instances (the ones that really have to remain on-premises, for example because of compliance or latency considerations) are brought to your data center, while the whole control plane remains in the cloud, and these local services appear as a seamless extension of the customer’s existing virtual private cloud in their region of choice. The idea is that customers will be able to launch additional services on top of this basic foundation locally – for example, for databases, machine learning or container management. To manage Outposts, AWS offers two choices of a control plane: either the company’s native management console or VMware Cloud management tools and APIs.

Of course, this approach won’t be able to address certain use cases like occasionally-connected remote locations (on ships, for example), but for a large number of customers, AWS Outposts promises significantly reduced complexity and better manageability of their hybrid solutions. Unfortunately, not many technical details have been revealed yet, so I’m looking forward to further updates.

There were a number of announcements regarding AWS’s database portfolio, meaning that customers now have an even bigger number of available database engines to choose from. Here, however, I’m not necessarily buying into the notion that more choice translates into more possibilities. Surely, managed MySQL, Memcached or any other open source database will be “good enough” for a vast number of use cases, but meeting the demands of large enterprises is a different story. Perhaps a topic for an entirely separate blog post.

Oh, and although I absolutely recognize the value of a “cryptographically verifiable ledger with centralized trust” for many use cases which people currently are trying (and failing) to implement with Blockchains, I cannot but note that “Quantum Ledger Database” is a really odd choice of a name for one. What does it have to do with quantum computing anyway?

After databases, the expansion of the company’s serverless compute portfolio was the second biggest part of AWS CTO Werner Vogels’ keynote. Launched four years ago, AWS Lambda has proven to be immensely successful with developers as a concept, but the methods of integrating this radically different way of developing and running code in the cloud into traditional development workflows were not particularly easy. This year the company has announced multiple enhancements both to the Lambda engine itself – you can now use programming languages like C++, PHP or Cobol to write Lambda functions or even bring your own custom runtime – and to the developer toolkit around it including integrations with several popular integrated development environments.

Notably, the whole serverless computing platform has been re-engineered to run on top of AWS’s own lightweight virtualization technology called Firecracker, which ensures more efficient resource utilization and stronger tenant isolation; this translates into better security for customers and even further potential for cost savings.

These were the announcements that have especially caught my attention during the event. I’m pretty sure that you’ll find other interesting things among all the re:Invent 2018 product announcements. Is more always better? You decide. But it sure is more fun!

Another Astounding Data Breach Hits the Confidence of Customers

The dust is still settling, but the information currently available on this case, which also includes the official press release, is worrying: just this Friday, November 30, the hotel chain Marriott International announced that it has become the target of a hacker attack. Marriott's brand names include W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, and Le Meridien Hotels & Resorts. The compromised database contains personal information about customers, in particular reservations made in the chain's hotels before September 10, 2018.

Even more worrying are the sheer numbers and the nature and extent of the stored and leaked data. Allegedly it took 4 years for Marriott to discover the problem, which would mean continuous access to this data for that period. The breach covers more than half a billion booking records (>500,000,000 written out in full, roughly the total number of EU citizens), although it is conceivable that individual persons appear several times.

According to the press release, the data contained per record includes ‘combinations of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest ("SPG") account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences’.

For a still unclear portion of these records, the per-person record is said to also include payment card numbers and payment card expiration dates, although the payment card numbers were encrypted using the Advanced Encryption Standard (AES-128). This is a symmetric encryption method in which the same key is used for encryption and decryption. If this still sounds reassuring for these particularly critical attributes: the company has determined that both components required for decrypting the payment card numbers may also have been stolen. This suggests that an unknown percentage of the overall (equally unknown) data pool might be affected. Given the scale of the leak, a significant absolute number of personal profiles with credit card data "in the wild" must be expected.
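To illustrate why the possible theft of the key components matters: with any symmetric scheme, one key both encrypts and decrypts, so whoever obtains it can read every record. The sketch below uses a deliberately trivial XOR stream cipher (not AES, and not safe for any real use) purely to demonstrate that property; all names and values are made up.

```python
# Toy demonstration of the symmetric-encryption property: the SAME key
# both encrypts and decrypts. This is a trivial XOR stream, NOT AES-128,
# but the key-handling lesson is identical: lose the key (or everything
# needed to derive it) and every stored ciphertext becomes readable.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Applying this twice with the same key returns the original data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"16-byte-demo-key"             # stands in for an AES-128 key
card_number = b"4111 1111 1111 1111"  # illustrative payment card number

ciphertext = xor_cipher(card_number, key)
assert ciphertext != card_number                   # stored form is unreadable...
assert xor_cipher(ciphertext, key) == card_number  # ...unless you hold the key
```

In other words, encrypting the card numbers only helps as long as the key material is managed and stored separately from the data; if both are exfiltrated together, the encryption is effectively void.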

It is still unclear what role the cut-off date of September 10, 2018 plays in this context, but at this point the leak seems to have been closed. The press release reads as follows: "On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database in the United States. Marriott quickly engaged leading security experts to help determine what occurred. Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014."

Building trust must be the foundation of any business strategy. The starting point is to design corporate strategies in such a way that they acknowledge the importance of customer data and the protection of privacy. This involves both well-thought-out business processes and suitable technologies, and of course includes trustworthy storage and processing of personal data. Evidence of this must be provided to many stakeholders, including the relevant data protection authorities and the users themselves.

So first and foremost, this is about trust as a central concept in the relationship between companies and their customers. However, the trust of Starwood/Marriott customers may have been fundamentally and lastingly destroyed.

The problem with trust is that it needs to be grown strategically over long periods of time, but as it is highly fragile, it can be destroyed within a very short one – through a data breach just like in the current case, through not building adequate solutions, or through not communicating adequately. The real question is why many organizations have not yet started actively building this trusted relationship with their users, customers, consumers and employees. Awareness is rising, and security and privacy are moving increasingly into the focus not only of tech-savvy users but of everyday customers as well.

Last but not least, as both a European and a customer of this hotel chain (and as a layman, not a lawyer), I really would like to ask the following question: the GDPR requires a data breach to be reported at the latest 72 hours after it becomes known. With what we know so far, shouldn’t we have heard from Marriott much earlier, and in a somewhat different form?
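Some back-of-the-envelope arithmetic on the timeline, using the dates from the press release (when Marriott legally "became aware" under Art. 33 GDPR is a question for lawyers, so treat this purely as an illustration):

```python
# Dates taken from the Marriott press release quoted above.
from datetime import datetime, timedelta

alert = datetime(2018, 9, 8)         # internal security tool raises an alert
disclosure = datetime(2018, 11, 30)  # public announcement of the breach

# Art. 33 GDPR: notify the supervisory authority within 72 hours
# of becoming aware of the breach.
notification_deadline = alert + timedelta(hours=72)

print(notification_deadline.date())       # 2018-09-11
print((disclosure - alert).days, "days")  # 83 days between alert and announcement
```

Whether the regulator was in fact notified earlier than the public is not stated in the press release; the 83-day gap only concerns the public disclosure.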

Cybersecurity Leadership Summit Berlin 2018 - Review

This month we launched our Cybersecurity Leadership Summit in Berlin. A pre-conference workshop entitled “Focusing Your Cybersecurity Investments: What Do You Really Need for Mitigating Your Cyber-risks?” was held on Monday. The workshop was both business-oriented and technical in nature. Contemporary CISOs and CIOs must apply risk management strategies, and it can be difficult to determine which cybersecurity projects should be prioritized. Leaders in attendance heard the latest applied research from Martin Kuppinger, Matthias Reinwarth, and Paul Simmonds.

Tuesday’s opening keynote was delivered by Martin Kuppinger on the topic of User Behavioral Analytics (UBA). UBA has become both the successor and an adjunct to SIEM, and as such forms a link between traditional network-centric cybersecurity and identity management. Torsten George of Centrify pitched the importance of zero-trust concepts. Zero trust can be seen as improving security by requiring risk-adaptive and continuous authentication. But trust is also a key component of things like federation architectures, so it won’t be going away altogether.

Innovation Night was held on Tuesday. In this event, a number of different speakers competed by describing how their products successfully incorporated Artificial Intelligence / Machine Learning techniques. The winner was Frederic Stallaert, Machine Learning Engineer/ Data Scientist at ML6. His topic was the adversarial uses of AI, and how to defend against them.

Here are some of the highlights. In the social engineering track, Enrico Frumento discussed the DOGANA project. This is the Advanced Social Engineering and Vulnerability Analysis Framework. They have been performing Social Driven Vulnerability Assessments and have interesting but discouraging results. In a recent study, 59% of users tested in an energy sector organization fell prey to a phishing training email. Malicious actors use every bit of information about targets available to them, regardless of legality. Organizations providing anti-phishing training are encumbered by GDPR.

In Threat intelligence, we had a number of good speakers and panelists. Ammi Virk presented on Contextualizing Threat Intelligence. One of his excellent points was recognizing the “con in context”, or guarding against bias, assumptions, and omissions. Context is essential in turning information into intelligence. This point was also made strongly by John Bryk in his session.

JC Gaillard posed a controversial question in his session, “Is the role of CISO outdated?”. He looked at some of the common problems CISOs face, such as being buried in an org chart, inadequate funding, and lack of authority to solve problems. His recommendations were to 1) elevate the CISO role and give it political power, 2) move the purely technical IT Security functions under the CIO or CTO, and 3) put CISOs on a level with newer positions such as CDOs and DPOs.

Internet Balkanization was a topic in the GDPR and Cybersecurity session. Daniel Schnok gave a thought-provoking presentation on the various political, economic, and technological factors that are putting up barriers and fragmenting the Internet today. For example, we know that countries such as China, Iran, and Russia have politically imposed barriers and content restrictions. GDPR is limiting the flow of personal information in Europe, and in some cases, overreaction to GDPR is impairing the flow of other types of data as well. The increasing consolidation of data under the large, US-based tech firms is also another example of balkanization.

In my final keynote I described the role that AI and ML are playing in cybersecurity today. These technologies are not merely nice-to-haves but are essential components, particularly for anti-malware, EDR/MDR, traffic analysis, etc. Nascent work on using ML techniques to facilitate understanding of access control patterns is underway by some vendors. These techniques may lead to a breakthrough in data governance in the mid-term. AI and ML based solutions are subject to attack (or “gaming”). Determined attackers can fool ML enhanced tools into missing detection of malware, for example. Lastly, Generative Adversarial Networks (GANs) serve as an example of how bad actors can use AI technologies as a means to advance attacks. GAN-based tools exist for password-cracking, steganography, and creating fake fingerprints for fooling biometric readers. In short, ML can help, but it can also be attacked and used to create more powerful cyber attacks.

We would like to thank our sponsors: iC Consult, Centrify, Cisco, One Identity, Palo Alto Networks, Airlock, Axiomatics, BigID, ForgeRock, Nexis, Ping Identity, SailPoint, MinerEye, PlainID, FireEye, Varonis, Thycotic, and Kaspersky Lab.

We will return to Berlin for CSLS 2019 on 12-14 November of next year.

IBM Acquires Red Hat: The AI potential

On October 28th IBM announced its intention to acquire Red Hat. At $34 Billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for an Open Source software company? I believe this acquisition needs to be seen not just in terms of DevOps and Hybrid Cloud, but in the context of IBM’s view of where the business value of IT services will come from in the future. The acquisition provides near-term tactical benefits from Red Hat’s OpenShift platform and its participation in the Kubeflow project, and it strengthens IBM’s capabilities to deliver the foundation for digital business transformation. However, digital business is increasingly based on AI delivered through the cloud. IBM recently announced a $240M investment in a 10-year research collaboration on AI with MIT, which illustrates this strategy. It adds to the significant investments IBM has already made in Watson, including setting up a dedicated division in 2016, as well as in cloud services.

Red Hat was founded in 1993 and in 1994 released the Red Hat version of Linux. This evolved into a complete development stack (JBoss) and recently released Red Hat OpenShift - a container- and microservices-based (Kubernetes) DevOps platform. Red Hat operates on a business model based on open-source software development within a community, professional quality assurance, and subscription-based customer support.

The synergy between IBM and Red Hat is clear. IBM has worked with Red Hat on Linux for many years, and both have a commitment to Open Source software development. Both companies have a business model in which services are the key element. Although these are two fairly different types of services – Red Hat’s being service fees for software, IBM’s being all types of services including consultancy and development – they both fit well into IBM’s overall business.

One critical factor is the need for tools to accelerate the development lifecycle for ML projects, which can be much less predictable than for conventional software projects. In the non-ML DevOps world, microservices and containers are the key technologies that have helped here. How can these technologies help with ML projects?

There are several differences between developing ML and coding applications. Specifically, ML uses training rather than coding and, in principle, this in itself should accelerate the development of much more sophisticated ways to use data. The ML Development lifecycle can be summarized as:

  • Obtain, prepare and label the data
  • Train the model
  • Test and refine the model
  • Deploy the model
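The four steps above can be sketched in a few lines. The example below uses a toy nearest-centroid classifier so that it runs with the Python standard library alone; real pipelines would of course use a proper ML framework and far more data, and all names here are illustrative.

```python
import json

# 1. Obtain, prepare and label the data (tiny, hand-labeled toy set)
samples = [((1.0, 1.2), "low"), ((0.9, 1.1), "low"),
           ((4.0, 4.2), "high"), ((4.1, 3.9), "high")]

# 2. Train the model: compute one centroid per label
def train(data):
    grouped = {}
    for features, label in data:
        grouped.setdefault(label, []).append(features)
    return {label: tuple(sum(col) / len(col) for col in zip(*points))
            for label, points in grouped.items()}

# 3. Test and refine: predict by nearest centroid, check training accuracy
def predict(model, features):
    distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: distance(model[label], features))

model = train(samples)
assert all(predict(model, f) == label for f, label in samples)  # refine if this fails

# 4. Deploy: serialize the trained model for a serving process
artifact = json.dumps(model)
```

Each step maps naturally onto a separate service in a microservices architecture: data preparation, training, evaluation and serving can each run in their own container, which is exactly the kind of orchestration discussed below.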

While the processes involved in ML development are different to conventional DevOps, a microservices-based approach is potentially very helpful. ML Training involves multiple parties working together and microservices provide a way to orchestrate various types of functions, so that data scientists, experts and business users can just use the capabilities without caring about coding etc. A common platform based on microservices could also provide automated tracing of the data used and training results to improve traceability and auditing. It is here that there is a great potential for IBM/Red Hat to deliver better solutions.

Red Hat OpenShift provides a DevOps environment to orchestrate the development to deployment workflow for Kubernetes based software. OpenShift is, therefore, a potential solution to some of the complexities of ML development. Red Hat OpenShift with Kubernetes has the potential to enable a data scientist to train and query models as well as to deploy a containerized ML stack on-premises or in the cloud.

In addition, Red Hat is a participant in the Kubeflow project. This is an Open Source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Their goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.

In conclusion, the acquisition has strengthened IBM’s capabilities to deliver ML applications in the near term. These capabilities complement and extend IBM’s Watson and improve and accelerate their ability and the ability of their joint customers to create, test and deploy ML-based applications. They should be seen as part of a strategy towards a future where more and more value is delivered through AI-based solutions.

Read as well: IBM & Red Hat – and now?

IBM & Red Hat – And Now?

On October 28th IBM announced its intention to acquire Red Hat. At $34 Billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for Red Hat? Not surprisingly, there were quite a few negative comments from parts of the Open Source community. However, there is logic behind the intended acquisition.

Aside from the potential it holds for some of IBM’s strategic fields such as AI (Artificial Intelligence) and even security (which is among the IBM divisions showing the biggest growth), there is an obvious potential in the field of Hybrid Cloud as well as for DevOps.

Red Hat has long been much more than just a Linux company. When you look at their portfolio, Red Hat is strong in middleware and in technologies supporting hybrid cloud environments. Technology stacks like JBoss, Ansible, OpenShift or OpenStack are well-established.

Red Hat has also long been a supplier preferred by enterprises. They have a strong position in growth markets that play an important role for businesses, Cloud Service Providers (CSPs), and obviously for IBM itself. Red Hat empowers IBM to deliver better and broader services to its customers and to strengthen its role as a provider for Hybrid Cloud and DevOps, and thus its competitive position in the battle with companies such as AWS, Microsoft, or Oracle. On the other hand, IBM allows Red Hat to scale its business by delivering both the organizational structure to grow and a global services team and infrastructure.

From our perspective, there is little risk that Red Hat will lose a significant share of its current business – they are already an enterprise player and selling to enterprise customers, and IBM will strengthen not weaken them.

As with every acquisition, this one also brings some risk for customers. There is some overlap in certain parts of the portfolio, particularly around managing hybrid cloud environments, e.g. Cloud Foundry and OpenShift. While this might affect some customers, the overall risk for customers appears to be limited. On the other hand, the joint potential to support businesses in their Digital Transformation is significant. IBM can increase its offerings and attractiveness for Hybrid Cloud and DevOps, fostered by strong security and with interesting potential for new fields such as AI.

The only question is whether the price tag for Red Hat is too high. While there is huge potential, the combined IBM and Red Hat will still need to monetize it.

Read as well: IBM Acquires Red Hat: The AI potential

Impressions from the Oracle OpenWorld

Recently I was in San Francisco again, attending Oracle OpenWorld for the second time. Just like last year, I cannot but commend the organizers for making the event even bigger, more informative and more convenient to attend – by all means not a small feat when you consider the crowd of over 60,000 attendees from 175 countries. By setting up a separate press and analyst workspace in an isolated corner of the convention center, the company gave us the opportunity to work more productively and to avoid the noisy exposition floor environment, thus effectively eliminating the last bit of critique I had for the event back in 2017.

Thematically, things have definitely improved as well, at least for me as an analyst focusing on cybersecurity. Surely, the Autonomous Database continued to dominate, but as opposed to last year’s mostly theoretical talks about a future roadmap, this time visitors had ample opportunity to see the products in action (remember, there are two different editions already available: one optimized for data warehouses, the other for transactional processing and mixed loads), to talk directly to the technical staff, and to learn about the next phase of Oracle’s roadmap for 2019. That roadmap includes dedicated Exadata Cloud infrastructure for the most demanding customers as well as Autonomous Database in the Cloud at Customer, the closest thing to running an autonomous database on premises.

Last but not least, we had a chance to talk to real customers sharing their success stories. I have to confess however that I had somewhat mixed feelings about those stories. On one hand, I can absolutely understand Oracle’s desire to showcase their most successful customer projects, where things “just worked” after migrating a database to the Oracle Cloud and there were no challenges whatsoever. But for us analysts (and even more so for our customers that are not necessarily already heavily invested into Oracle databases) stories like this sound a bit trivial. What about migrating from a different platform? What about struggles to overcome unexpected obstacles? We need more drama, Oracle!

Another major topic – this year’s big reveal – was however not about databases at all. During his keynote, Larry Ellison announced Oracle’s Generation 2 Cloud, which has been completely redesigned with the “security first” principle in mind. His traditionally dramatic presentation mentioned lasers and threat-fighting robots, but regardless of all that, the idea of a “secure by design” cloud is a pretty big deal. The two big components of Oracle’s next-generation cloud infrastructure are “dedicated Cloud Control Computers” and “intelligent analytics”.

The former provide a completely separate control plane and security perimeter of the cloud infrastructure, ensuring that every customer’s resources are not only protected by an isolation barrier from external threats but also that other tenants’ rogue admins cannot somehow access and exploit them. At the same time, this isolation barrier prevents Oracle’s own engineers from ever having unauthorized access to customer data. In addition to that, machine learning is supposed to provide additional capabilities not just for finding and mitigating security threats but also to reduce administration costs by means of intelligent automation.

Combined with the brand-new bare-metal computing infrastructure and fast, low-latency RDMA network, this forms the foundation of Oracle’s new cloud, which the company promises to be not just faster than any competitor, but also substantially cheaper (yes, no Oracle keynote can dispense with poking fun at AWS).

However, the biggest and most omnipresent topic of this year’s OpenWorld was undoubtedly Artificial Intelligence. And although I really hate the term and the ways it has been used and abused in marketing, I cannot but appreciate Oracle’s strategy around it. In his second keynote, Larry Ellison talked about how machine learning and AI should not be seen as standalone tools, but as new business enablers, which will eventually find their way into every business application, enabling not just productivity improvements but completely new ways of doing business that were simply impossible before. In a business application, machine learning should not just assist you with getting better answers to your questions but offer you better questions as well (finding hidden correlations in business data, optimizing costs, fighting fraud and so on). Needless to say, if you want to use such intelligent business apps today, you’re expected to look no further than the Oracle cloud :)

The Ethics of Artificial Intelligence

Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." The ethical questions around Artificial Intelligence were discussed at a meeting led by the BCS President Chris Rees in London on October 2nd. This is also an area covered by KuppingerCole under the heading of Cognitive Technologies and this blog provides a summary of some of the issues that need to be considered.

Firstly, AI is a generic term and it is important to understand precisely what this means. Currently the state of the art can be described as Narrow AI. This is where techniques such as ML (machine learning) combined with massive amounts of data are providing useful results in narrow fields. For example, the diagnosis of certain diseases and predictive marketing. There are now many tools available to help organizations exploit and industrialise Narrow AI.

At the other extreme is what is called General AI where the systems are autonomous and can decide for themselves what actions to take. This is exemplified by the fictional Skynet that features in the Terminator games and movies. In these stories this system has spread to millions of computers and seeks to exterminate humanity in order to fulfil the mandates of its original coding. In reality, the widespread availability of General AI is still many years away.

In the short term, Narrow AI can be expected to evolve into Broad AI, where a system will be able to support or perform multiple tasks, applying what is learnt in one domain to another. Broad AI will evolve to use multiple approaches to solve problems, for example by linking neural networks with other forms of reasoning. It will be able to work with limited amounts of data, or at least data which is not well tagged or curated, for example to identify a threat pattern in the cyber-security space that has not been seen before.

What is ethics and why is it relevant to AI? The term is derived from the Greek word “ethos”, which can mean custom, habit, character or disposition. Ethics is a set of moral principles that govern behaviour or the conduct of an activity; it is also a branch of philosophy that studies these principles. Ethics is important to AI because of the potential for these systems to cause harm to individuals as well as to society in general. Ethical considerations can help to better identify beneficial applications while avoiding harmful ones. In addition, new technologies are often viewed with suspicion and mistrust, which can unreasonably inhibit the development of technologies that have significant beneficial potential. Ethics provides a framework that can be used to understand and overcome these concerns at an early stage.

Chris Rees identified 5 major ethical issues that need to be addressed in relation to AI:

  • Bias;
  • Explainability;
  • Harmlessness;
  • Economic Impact;
  • Responsibility.

Bias is a very current issue, with bias related to gender and race as top concerns. AI systems are likely to be biased because people are biased, and AI systems amplify human capabilities. Social media provides an example of this kind of amplification, where uncontrolled channels provide the means to share beliefs that may be popular but have no foundation in fact: “fake news”. The training of AI systems depends upon data which may include inherent bias, even though this may not be intentional. The training process and the trainers may pass on their own unconscious bias to the systems they train. Allowing systems to train themselves can lead to unexpected outcomes, since the systems do not have the common sense to recognize mischievous behaviour. There are also reported examples of bias in facial recognition systems.
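How training data passes its skew on to a model can be shown with a deliberately simple sketch. All data and group names below are hypothetical; the "model" is nothing more than a most-frequent-outcome predictor, yet it faithfully reproduces whatever bias its training history contains:

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: historical decisions
# in which group "A" was favoured regardless of qualification.
training = [("A", "hire")] * 80 + [("A", "reject")] * 20 \
         + [("B", "hire")] * 20 + [("B", "reject")] * 80

def majority_label(data, group):
    """A trivial 'model' that predicts the most frequent historical
    outcome for a group -- it simply reproduces the bias it was fed."""
    labels = Counter(label for g, label in data if g == group)
    return labels.most_common(1)[0][0]

print(majority_label(training, "A"))  # hire
print(majority_label(training, "B"))  # reject
```

A real ML system is far more sophisticated, but the underlying mechanism is the same: if the historical outcomes are biased, a model optimized to reproduce them will be biased too.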

Explainability – it is very important in many applications that AI systems can explain themselves. An explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify in a court of law why a decision was taken. While rule-based systems can provide a form of explanation based on the logical rules that were fired to arrive at a particular conclusion, neural networks are much more opaque. This poses a problem not only in explaining to the end user why a conclusion was reached, but also for the developer or trainer in understanding what needs to be changed to correct the behaviour of the system.

Harmlessness – the three laws of robotics that were devised by Isaac Asimov in the 1940s, and subsequently extended to include a zeroth law, apply equally to AI systems. However, the use or abuse of these systems could breach the laws, and special care is needed to ensure that this does not happen. For example, the hacking of an autonomous car could turn it into a weapon, which emphasizes the need for strong inbuilt security controls. AI can be applied to cyber security to accelerate the development of both defence and offence; it can be used by cyber adversaries as well as by the good guys. It is therefore essential that this aspect is considered and that countermeasures are developed to cover the malicious use of this technology.

Economic impact – new technologies have both destructive and constructive impacts. In the short term the use of AI is likely to lead to the destruction of certain kinds of jobs. However, in the long term it may lead to the creation of new forms of employment as well as unforeseen social benefits. While the short-term losses are concrete, the longer-term benefits are harder to see and may take generations to materialize. This makes it essential to create protection for those affected by the expected downsides, both to improve acceptability and to avoid social unrest.

Responsibility – AI is just an artefact, so if something bad happens, who is responsible morally and in law? The AI system itself cannot be prosecuted, but the designer, the manufacturer or the user could be. The designer may claim that the system was not manufactured to the design specification; the manufacturer may claim that the system was not used or maintained correctly (for example, patches not applied). This is an area where debate is needed, and it should take place before these systems cause actual harm.

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular fiction. However, the ethical aspects of this technology need to be considered and this should be done sooner rather than later. In the same way that privacy by design is an important consideration we should now be working to develop “Ethical by Design”. GDPR allows people to take back control over how their data is collected and used. We need controls over AI before the problems arise.

Making Sense of the Top Cybersecurity Trends

With each passing year, the CISO’s job is not becoming any easier. As companies continue embracing the Digital Transformation, the growing complexity and openness of their IT infrastructures mean that the attack surface for hackers and malicious insiders is increasing as well. Combined with the recent political developments such as the rise of state-sponsored attacks, new surveillance laws, and harsh privacy regulations, security professionals now have way too many things on their hands that sometimes keep them awake at night. What’s more important – protecting your systems from ransomware or securing your cloud infrastructure? Should you invest in CEO fraud protection or work harder to prepare for a media fallout after a data breach? Decisions, decisions…

The skills gap problem is often discussed by the press, but journalists usually focus on the lack of IT experts needed to operate complex and sprawling cybersecurity infrastructures. Alas, the related problem of making wrong strategic decisions about the technologies and tools to purchase and deploy is not mentioned that often, yet it is precisely the reason for the “cargo cult of cybersecurity”. Educating the public about modern IT security trends and technologies will be a big part of our upcoming Cybersecurity Leadership Summit, which will be held in Berlin this November, and last week my fellow analyst John Tolbert and I presented a sneak peek into this topic by dispelling several popular misconceptions.

After a lengthy discussion about choosing just five out of the multitude of topics we’ll be covering at the summit, we came up with a list of things that, on the one hand, are generating enough buzz in the media and vendors’ marketing materials and, on the other hand, are actually relevant and complex enough to warrant digging into them. That’s why we didn’t mention ransomware, for example, which is actually declining along with the devaluation of popular cryptocurrencies…

Artificial Intelligence in Cybersecurity

Perhaps the biggest myth about Artificial Intelligence / Machine Learning (which, incidentally, are not the same, even though the terms are often used interchangeably) is that it’s a cutting-edge technology that has arrived to solve all our cybersecurity woes. This could not be further from the truth: the origins of machine learning predate digital computers. Neural networks were invented back in the 1950s, and some of their applications are just as old. It’s only the recent surge in available computing power, thanks to commodity hardware and cloud computing, that has caused this triumphant entry of machine learning into so many areas of our daily lives.

In his recent blog post, our fellow analyst Mike Small provided a concise overview of various terms and methods related to AI and ML. To his post, I can only add that applications of these methods to cybersecurity are still very much a field of academic research that is yet to mature into advanced off-the-shelf security solutions. Most products currently sold with “AI/ML inside” stickers on their boxes are in reality limited to the most basic ML methods, which enable faster pattern or anomaly detection in log files. Only some of the more advanced ones offer higher-level functionality like actionable recommendations and improved forensic analysis. Finally, true cognitive technologies like natural language processing and AI-powered reasoning are only just beginning to be adapted to cybersecurity applications by a few visionary vendors.
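To make that point concrete, the "basic ML" in question is often little more than statistical outlier detection. Here is a minimal sketch in Python (the log counts are invented for illustration) that flags hours whose failed-login volume deviates sharply from the norm:

```python
from statistics import mean, stdev

# Hourly counts of failed logins, as might be parsed from a log file
# (hypothetical numbers; one hour contains a brute-force spike).
hourly_failures = [3, 5, 4, 6, 2, 4, 5, 3, 4, 97, 5, 4]

def zscore_anomalies(counts, threshold=3.0):
    """Flag hours whose failure count deviates from the mean
    by more than `threshold` standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

print(zscore_anomalies(hourly_failures))  # the hour with 97 failures stands out
```

Useful, certainly, but a far cry from "AI that solves cybersecurity": the detector knows nothing about whether the spike is an attack, a misconfigured script, or a password change gone wrong.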

It’s worth stressing, however, that such solutions will probably never completely replace human analysts, if only because of the numerous legal and ethical problems associated with decisions made by an “autonomous AI”. If anything, it is cybercriminals, unencumbered by moral inhibitions, that we will see among the earliest adopters…

Zero Trust Security

The Zero Trust paradigm is rapidly gaining popularity as a modern alternative to the traditional perimeter-based security, which can no longer provide sufficient protection against external and internal advanced cyberthreats. An IT infrastructure designed around this model treats every user, application or data source as untrusted and enforces strict security, access control, and comprehensive auditing to ensure visibility and accountability of all user activities.

However, just like with any other hyped trend, there is a lot of confusion about what Zero Trust actually is. Fueled by massive marketing campaigns by vendors trying to get into this lucrative new market, a popular misconception is that Zero Trust is some kind of a “next-generation perimeter” that’s supposed to replace outdated firewalls and VPNs of old days.

Again, this could not be further from the truth. Zero Trust is above all a new architectural model, a combination of multiple processes and technologies. And although adopting the Zero Trust approach promises a massive reduction of the attack surface, reduced IT complexity, and productivity improvements, there is definitely no off-the-shelf solution that magically transforms your existing IT infrastructure.

Going Zero Trust always starts with a strategy, which must be heterogeneous and hybrid by design. It involves discovering, classifying and protecting sensitive data; redefining identities for each user and device; establishing and enforcing strict access controls to each resource; and finally, continuously monitoring and auditing every activity. And remember: you should trust no one. Especially not vendor marketing!
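To illustrate the architectural idea rather than any particular product, here is a hypothetical per-request policy check in Python. The essential point is that no request is trusted by default and network location plays no role in the decision; the specific attributes and rules are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool
    resource_sensitivity: str  # "low" or "high"

def allow(req: Request) -> bool:
    """Zero Trust style check: every request is evaluated on its own merits,
    regardless of where on the network it originates."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    # High-sensitivity resources demand stronger proof of identity.
    if req.resource_sensitivity == "high" and not req.mfa_passed:
        return False
    return True

# An authenticated user on a compliant device, but without MFA,
# is still denied access to a high-sensitivity resource.
print(allow(Request(True, False, True, "high")))  # False
print(allow(Request(True, False, True, "low")))   # True
```

In a real deployment the policy engine would draw on far richer signals (risk scores, device posture, data classification), but the pattern of evaluating every single request against explicit policy is the heart of the model.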

Insider Threat Management

Ten years ago, the riskiest users in every company were undoubtedly the system administrators, and protecting the infrastructure and sensitive data from their potential misuse of privileged access was the top priority. Nowadays, the situation has changed dramatically: every business user with access to sensitive corporate data can, either inadvertently or with malicious intent, cause substantial damage to your business by leaking confidential information, disrupting access to a critical system or simply draining your bank account. The most privileged users in that regard are the CEO and CFO, and the number of cyberattacks targeting them specifically is on the rise.

Studies show that cyberattacks focusing on infrastructure are becoming too complex and costly for hackers, so they are turning to social engineering methods instead. One carefully crafted phishing mail can thus cause more damage than an APT attack that takes months of planning… And the best part is that the victims do all the work themselves!

Unfortunately, traditional security tools and even specialized Privileged Access Management solutions aren’t suitable for solving this new challenge. Again, the only viable strategy is to combine changes in existing business processes (especially those related to financial transactions) and a multi-layered deployment of different security technologies ranging from endpoint detection and response to email security to data loss prevention and even brand reputation management.

Continuous Authentication

Passwords are dead, biometric methods are easily circumvented, account hijacking is rampant… How can we still be sure that users are who they claim to be when they access a system or an application from anywhere in the world and from a large variety of platforms?

One of the approaches that has been growing in popularity in recent years is adaptive authentication – the process of gathering additional context information about users, their devices and other environmental factors, and evaluating it according to risk-based policies. Such solutions usually combine multiple strong authentication methods and present the most appropriate challenge to the user based on their current risk level. However, even this quite complex approach is often not sufficient to combat advanced cyberattacks.

The continuous authentication paradigm takes this to the next level. By combining dynamic context-based authentication with real-time behavioral biometrics, it turns authentication from a single event into a seamless ongoing process and thus promises to reduce the impact of a credential compromise. This way, the user’s risk score is not calculated just once during initial authentication but is constantly reevaluated over time, changing as the user moves into a different environment or reacting to anomalies in their behavior.
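One simple way to picture this ongoing reevaluation is an exponentially decaying risk score that rises with every behavioral anomaly. The event weights, decay factor and threshold below are purely illustrative assumptions, not any vendor’s actual algorithm:

```python
def update_risk(score, anomaly_weight, decay=0.9):
    """Decay the previous score, then add the weight of any new anomaly."""
    return score * decay + anomaly_weight

# Anomaly weights observed during a session (0 = normal behavior;
# higher = e.g. unfamiliar location, odd typing cadence).
events = [0.0, 0.0, 0.3, 0.0, 0.6, 0.8]

score, threshold = 0.0, 1.0
for weight in events:
    score = update_risk(score, weight)
    if score > threshold:
        # Rather than terminating the session outright, a typical
        # response is to demand step-up authentication.
        print(f"step-up authentication required, score = {score:.2f}")
```

A single mild anomaly decays away harmlessly; it is the accumulation of anomalies in a short window that pushes the session over the threshold.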

Unfortunately, this approach requires major changes in the way applications are designed, and modernizing legacy systems can be a major challenge. Another problem is the perceived invasiveness of continuous authentication – many users do not feel comfortable being constantly monitored, and in many cases such monitoring may even be illegal. Thus, although promising solutions are starting to appear on the market, continuous authentication is still far from mainstream adoption.

Embedding a Cybersecurity Culture

Perhaps the biggest myth about cybersecurity is that it takes care of itself. Unfortunately, the history of the recent large-scale cybersecurity incidents clearly demonstrates that even the largest companies with massive budgets for security tools are not immune to attacks. Also, many employees and whole business units often see security as a nuisance that hurts their productivity and would sometimes go as far as to actively sabotage it, maintaining their own “shadow IT” tools and services.

However, the most common cause of security breaches is simple negligence stemming primarily from insufficient awareness, lack of established processes and general reluctance to be a part of corporate cybersecurity culture. Unfortunately, there is no technology that can fix these problems, and companies must invest more resources into employee training, teaching them the cybersecurity hygiene basics, explaining the risks of handling personal information and preparing them for the inevitable response to a security incident.

Even more important is for CISOs and other high-level executives to continuously improve their own awareness of the latest trends and developments in cybersecurity. And what better way to do that than to meet the leading experts at KuppingerCole’s Cybersecurity Leadership Summit next month? See you in Berlin!

Artificial Intelligence and Cyber Security

As organizations go through digital transformation, the cyber challenges they face become more important. Their IT systems and applications become more critical and, at the same time, more open. The recent data breach suffered by British Airways illustrates the sophistication of cyber adversaries and the difficulties organizations face in preventing, detecting, and responding to these challenges. One approach that is gaining ground is the application of AI technologies to cyber security, and at an event in London on September 24th, IBM described how IBM Watson is being integrated with other IBM security products to meet these challenges.

The current approaches to cyber defence include multiple layers of protection: firewalls, identity and access management, and security information and event monitoring (SIEM). While these remain necessary, they have not significantly reduced the time to detect breaches. For example, the IBM-sponsored 2018 Cost of a Data Breach Study by Ponemon showed that the mean time for organizations to identify a breach was 197 days, a figure that has hardly improved over many years. The reasons for this long delay are many and include the complexity of the IT infrastructure, the sophistication of the techniques used by cyber adversaries to hide their activities, and the sheer volume of data available.

So, what is AI and how can it help to mitigate this problem?

AI is a generic term that covers a range of technologies. In general, it refers to systems that “simulate thought processes to assist in finding solutions to complex problems through augmentation and enhancement of human capabilities”. KuppingerCole has analysed in detail what this really means in practice, as summarized in the following slide from the EIC 2017 Opening Keynote by Martin Kuppinger.

At the lower layer, improved algorithms enable the transformation of Big Data into “Smart Information”. See KuppingerCole Advisory Note: Big Data Security, Governance, Stewardship - 72565. This is augmented by Machine Learning where human reinforcement is used to tune the algorithms to identify those patterns that are of interest and to ignore those that are not. Cognitive technologies add an important element to this mix through their capability to include speech, vision and unstructured data into the analysis. Today, this represents the state of the art for the practical application of AI to cyber security.

The challenges of AI at the state of the art are threefold:

  • The application of common sense – a human applies a very wide context to decision making, whereas AI systems tend to be very narrowly focussed and so sometimes reach what a human would consider a stupid conclusion.
  • Explanation – of how the conclusions were reached by the AI system to demonstrate that they are valid and can be trusted.
  • Responsibility – for action based on the conclusions from the system.

Cyber security products collect vast amounts of data – the cyber security analyst is literally drowning in it. The challenge is to find the so-called IoCs (Indicators of Compromise) that show the existence of a real threat amongst this enormous amount of data. The problem is not just to find what is abnormal, but to filter out the many false positives that obscure the real threats.

There are several vendors that have incorporated Machine Learning (ML) into their products to tune the identification of important anomalies. This is useful for reducing false positives, but it is not enough. To be really useful to a security analyst, an abnormal pattern needs to be related to known or emerging threats. While there have been several attempts to standardize the way information on threats is described and shared, most of this information is still held in unstructured form in documents, blogs and Twitter feeds. It is essential to take account of these sources.
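The gap between detecting an anomaly and understanding a threat can be pictured with a small sketch. All indicators and descriptions below are hypothetical; the point is that anomalies flagged by an ML detector only become actionable once matched against threat intelligence:

```python
# A hypothetical threat-intelligence feed of known-bad indicators.
threat_intel = {
    "203.0.113.7": "C2 server (campaign X)",
    "evil-updates.example": "malware distribution domain",
}

# Indicators extracted from anomalous events flagged by an ML detector.
observed = ["198.51.100.4", "203.0.113.7", "evil-updates.example"]

def enrich(indicators, intel):
    """Keep only the anomalies that match known threat intelligence --
    turning raw anomalies into actionable IoCs and cutting false positives."""
    return {ioc: intel[ioc] for ioc in indicators if ioc in intel}

matches = enrich(observed, threat_intel)
for ioc, context in matches.items():
    print(ioc, "->", context)
```

The hard part in practice is that much of the intelligence on the right-hand side of that lookup is not a tidy dictionary but unstructured prose in reports and blogs, which is exactly where systems like Watson aim to help.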

This is where IBM QRadar Advisor with Watson is different. A Machine Learning system is only as good as its training – training is the key to its effectiveness. IBM says that it has been trained through the ingestion of over 10 billion pieces of structured data and 1.24 million unstructured documents to assist with the investigation of security incidents. This training involved IBM X-Force experts as well as IBM customers. Because of this training, it can now identify patterns that represent potential threats and provide links to the relevant sources that were used to reach these conclusions. However, while this helps security analysts do their job more efficiently and more effectively, it does not yet replace the human.

Organizations now need to assume that cyber adversaries have access to their systems, and to constantly monitor for this activity in a way that enables them to take action before damage is done. AI has great potential to help with this challenge and to evolve to help organizations improve their cyber security posture through intelligent code analysis and configuration scanning as well as activity monitoring. For more information on the future of cyber security, attend KuppingerCole’s Cybersecurity Leadership Summit 2018 Europe.
