KuppingerCole Blog

BAIT and VAIT as Levers for Improving Security and Compliance (And Your IAM)

When we talk about special compliance and legal requirements in highly regulated industries, one usually thinks immediately of companies in the financial services sector, i.e. banks and insurance companies. This is obvious and certainly correct, because these companies form the commercial basis of all economic activity.

Although regulations and their obligations are often formulated on a relatively abstract level, they must be adapted over time to changing business and technical circumstances. Sometimes they need to be made more concise, more actionable and more specific to improve their effectiveness. BaFin (the German Federal Financial Supervisory Authority, or "Bundesanstalt für Finanzdienstleistungsaufsicht"), as the regulator for financial services businesses in Germany, has recently updated and extended its set of requirements documents. The "Supervisory Requirements for IT in Financial Institutions" ("Bankaufsichtliche Anforderungen an die IT", BAIT), published in 2017, detailed the IT-related requirements of §25 KWG (the German Banking Act, "Kreditwesengesetz") and MaRisk ("Minimum Requirements for Risk Management", "Mindestanforderungen an das Risikomanagement") for the banking sector. The BAIT has subsequently been updated and supplemented with specific requirements for critical infrastructures (KRITIS) in this essential sector.

Quite recently, as a second step, BaFin has provided comparable specific requirements for the insurance industry by publishing the "Versicherungsaufsichtliche Anforderungen an die IT" (VAIT, "Supervisory Requirements for IT in Insurance Undertakings"). Both BAIT and VAIT describe what BaFin considers to be the appropriate technical and organizational resources for IT systems. Ultimately, these requirements are also used as the benchmarks for audits.

Let’s look at VAIT as an example. Eight focus areas require appropriate consideration and the involvement of suitable stakeholders and experts: Specific guidelines for IT strategy and IT governance define minimum requirements for guidance and implementation within the organization’s structure and processes. The concept of information risks and their management is integrated into the overall corporate/business risk management. Information security is strengthened by the demand for a largely independent information security officer. With the demand for uniform authorization management, access management and governance move even more into the focus of the auditors. The focus is also on IT projects in addition to traditional IT operations. Application development must now move towards "security by design" to meet the requirements. Outsourcing, the use of third-party services, and the increasingly relevant cloud services are covered in the "IT services" focus area. Speaking from real-life experience: improvements in identity and access management, privileged account management and access governance have proven to be successful controls for implementing BAIT and VAIT requirements effectively and measurably. In turn, BAIT and VAIT can provide an excellent justification for finally implementing the improvements to IAM/IAG that have long been needed.

 Figure 1: VAIT and related IAM/IAG fields of action

So the obvious question is: who should care about these German regulations for financial services? If you are an insurance company or a bank with a subsidiary in Germany, there is no question about that. Banks and insurance companies face substantial challenges in implementing these very concrete requirements in business practice without delay. They must be implemented appropriately, transparently and in a well-documented way by the companies within their scope. (Talk to us, we can help you.)

But what if your organization is not directly in the scope of these regulations? Why not consider them as a benchmark that could help you to increase your organizational maturity? Both BAIT and VAIT have been published in English and are freely available on the Internet. They provide all organizations, even those outside of the financial sector and outside of Germany, with a set of well-elaborated requirements for trustworthy IT. You can use these as a challenge against which to judge the quality of your own overall security and compliance. Going beyond the regulatory requirements in this way is an opportunity to improve your own policies, organization and processes.

And yes, talk to us, we can help you.

AI in a Nutshell

What AI is and what it is not

Top 5 CISO Topics for 2019

Where to put your focus in 2019

Martin Kuppinger's Top 5 IAM Topics for 2019

Where to put your focus in 2019

How to Implement IT Governance Requirements Relating to Information Security and IT for Insurances and Beyond: VAIT Now Available in English

A short update blog post:

Earlier this year, in September, I wrote a blog post about the VAIT. This BaFin document explains the challenges for IT in companies in the insurance industry much more clearly than the original regulatory documents do. VAIT ("Versicherungsaufsichtliche Anforderungen an die IT") maps BaFin's requirements to more tangible guidance.

A few days ago, the English translation of this document was made available. It is described on its announcement page as follows: "The VAIT aims at clarifying BaFin's expectations with regard to governance requirements relating to information security and information technology. These requirements are a core supervisory component in the insurance and occupational pension sector in Germany."

This makes the audience of potential readers of this helpful guide much larger and my challenge to intelligent governance in a multitude of industries all the more important: "Truly proactive CISOs in companies beyond the financial sector will take these as a starting point and challenge to the quality of their own, appropriate security and compliance. Beyond concrete regulatory requirements, but to secure their own company.”

AWS re:Invent Impressions

This year’s flagship conference for AWS – the re:Invent 2018 in Las Vegas – has just officially wrapped. Continuing the tradition, it has been bigger than ever – with more than 50 thousand attendees, over 2000 sessions, workshops, hackathons, certification courses, a huge expo area, and, of course, tons of entertainment programs. Kudos to the organizers for pulling off an event of this scale – I can only imagine the amount of effort that went into it.

I have to confess, however: maybe it’s just me getting older and grumpier, but at times I couldn’t stop thinking that this event is a bit too big for its own good. With the premises spanning no fewer than 7 resorts along the Las Vegas Boulevard, the simple task of getting to your next session becomes a time-consuming challenge. I have no doubt, however, that most of the attendees enjoyed the event program immensely, because application development is supposed to be fun – at least according to the developers themselves!

Apparently, this approach is deeply rooted in the AWS corporate culture as well – their core target audience is still “the builders”: people who already have the goals, skills and desire to create new cloud-native apps and services, and the only thing they need is the necessary tools and building blocks. And that’s exactly what the company is striving to offer – the broadest choice of tools and technologies at the most competitive prices.

Looking at the business stats, it’s obvious that the company remains the distant leader when it comes to Infrastructure-as-a-Service (IaaS) – having such a huge scale advantage over its competitors, the company can still outpace them for years even if its relative growth slows down. Although there have been discussions in the past about whether AWS has a substantial Platform-as-a-Service (PaaS) offering, they can be easily dismissed now – in a sense, “traditional PaaS” is no longer that relevant, giving way to modern technology stacks like serverless and containers. Both are strategic for AWS, and, with the latest announcements about expanding the footprint of the Lambda platform, one can say that the competition in the “next-gen PaaS” field will be even tougher.

Perhaps the only part of the cloud playing field where AWS continues to be notoriously absent is Software-as-a-Service (SaaS) and more specifically enterprise application suites. The company’s own rare forays into this field are unimpressive at best, and the general strategy seems to be “leave it to the partners and let them run their services on AWS infrastructure”. In a way, this reflects the approach Microsoft has been following for decades with Windows. Whether this approach is sustainable in the long term or whether cloud service providers should rather look at Apple as their inspiration – that’s a topic that can be debated for hours… In my opinion, this situation leaves a substantial opening in the cloud market for competitors to catch up and overtake the current leader eventually.

The window of opportunity is already shrinking, however, as AWS aims to expand into new markets and to do just about anything technology-related better (or at least bigger and cheaper) than its competitors, as the astonishing number of new product and service announcements during the event shows. They span from low-level infrastructure improvements (faster hardware, better elasticity, further cost reductions) to catching up with competitors on things like managed Blockchain, all the way to all-new, almost science-fiction-looking stuff like robot design and satellite management.

However, to me as an analyst, the most important change in the company’s strategy has been their somewhat belated realization that not all their users are “passionate builders”. And even those who are, are not necessarily considering the wide choice of available tools a blessing. Instead, many are looking at the cloud as a means to solve their business problems and the first thing they need is guidance. And then security and compliance. Services like AWS Well-Architected Tool, AWS Control Tower and AWS Security Hub are the first step in the right direction.

Still, the star topic of the whole event was undoubtedly AI/ML. With a massive number of new announcements, AWS clearly indicates that its goal is to make machine learning accessible not just for hardcore experts and data scientists, but to everyone, no ML expertise required. With their own machine learning inference chips along with the most powerful hardware to run model training and a number of significant optimizations in frameworks running on them, AWS promises to become the platform for the most cutting-edge ML applications. However, on the other end, the ability to package machine learning models and offer them on the AWS Marketplace almost as commodity products makes these applications accessible to a much broader audience – another step towards “AI-as-a-Service”.

Another major announcement is the company’s answer to its competitors’ hybrid cloud developments – AWS Outposts. Here, the company’s approach is radically different from offerings like Microsoft’s Azure Stack or Oracle Cloud at Customer: AWS has decided not to try to package its whole public cloud “in a box” for on-premises deployments. Instead, only the key services like storage and compute instances (the ones that really have to remain on-premises because of compliance or latency considerations, for example) are brought to your data center, while the whole control plane remains in the cloud and these local services appear as a seamless extension of the customer’s existing virtual private cloud in their region of choice. The idea is that customers will be able to launch additional services on top of this basic foundation locally – for example, for databases, machine learning or container management. To manage Outposts, AWS offers two choices of control plane: either the company’s native management console or VMware Cloud management tools and APIs.

Of course, this approach won’t be able to address certain use cases like occasionally-connected remote locations (on ships, for example), but for a large number of customers, AWS Outposts promises significantly reduced complexity and better manageability of their hybrid solutions. Unfortunately, not many technical details have been revealed yet, so I’m looking forward to further updates.

There were a number of announcements regarding AWS’s database portfolio, meaning that customers now have an even bigger number of database engines to choose from. Here, however, I’m not necessarily buying into the notion that more choice translates into more possibilities. Surely, managed MySQL, Memcached or any other open source database will be “good enough” for a vast number of use cases, but meeting the demands of large enterprises is a different story. Perhaps a topic for an entirely separate blog post.

Oh, and although I absolutely recognize the value of a “cryptographically verifiable ledger with centralized trust” for many use cases which people currently are trying (and failing) to implement with Blockchains, I cannot but note that “Quantum Ledger Database” is a really odd choice of a name for one. What does it have to do with quantum computing anyway?

After databases, the expansion of the company’s serverless compute portfolio was the second biggest part of AWS CTO Werner Vogels’ keynote. Launched four years ago, AWS Lambda has proven to be immensely successful with developers as a concept, but integrating this radically different way of developing and running code in the cloud into traditional development workflows has not been particularly easy. This year the company announced multiple enhancements both to the Lambda engine itself – you can now use programming languages like C++, PHP or Cobol to write Lambda functions, or even bring your own custom runtime – and to the developer toolkit around it, including integrations with several popular integrated development environments.
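For readers less familiar with the programming model: a Lambda function is essentially just a handler that the platform invokes with an event. Here is a minimal sketch in Python – the handler name, the event field and the API-Gateway-style response shape are illustrative assumptions on my part, not details from the keynote; the announcements extend this same model to further languages and custom runtimes.

```python
import json

# Minimal, illustrative AWS Lambda handler in Python. The platform calls this
# function with the triggering event and a runtime context object; the return
# value is handed back to the caller (here shaped like an API Gateway response).
def lambda_handler(event, context):
    name = event.get("name", "world")  # "name" is a made-up example field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```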

Notably, the whole serverless computing platform has been re-engineered to run on top of AWS’s own lightweight virtualization technology called Firecracker, which ensures more efficient resource utilization and better tenant isolation that translates into better security for customers and even further potential for cost savings.

These were the announcements that have especially caught my attention during the event. I’m pretty sure that you’ll find other interesting things among all the re:Invent 2018 product announcements. Is more always better? You decide. But it sure is more fun!

Another Astounding Data Breach Hits the Confidence of Customers

The dust is still settling, but the information on this case currently available, which includes the official press release, is worrying: Just this Friday, November 30, the hotel chain Marriott International announced that it has become the target of a hacker attack. Marriott's brand names include W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, and Le Meridien Hotels & Resorts. The compromised database contains personal information about customers, in particular, reservations made in the chain's hotels before September 10, 2018.

Even more worrying are the sheer numbers and the nature and extent of the stored and leaked data. Allegedly it took 4 years for Marriott to discover the problem, which would mean continuous access to this data for that entire period. It is data on more than half a billion booking records (>500,000,000, just to spell out the zeros – roughly the total number of EU citizens), whereby it is conceivable that individual persons appear several times.

According to the press release, the data contained per record includes ‘combinations of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest ("SPG") account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences’.

For a still unclear portion of these records, the data per person is said to also include payment card numbers and payment card expiration dates; the payment card numbers, however, were encrypted using the Advanced Encryption Standard (AES-128). This is a symmetric encryption method in which the same key is used for encryption and decryption. If this still sounds reassuring for these particularly critical attributes: the company has determined that both components required for decrypting the payment card numbers may also have been stolen at the same time. This suggests that an unknown percentage of the data might be affected. Given the scale of the leak, a significant absolute number of personal profiles with credit card data "in the wild" must be expected.
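To illustrate why the symmetry matters, here is a minimal sketch in Python using the third-party cryptography package. It is purely illustrative and not Marriott's actual implementation – the GCM mode, the key handling and the sample data are assumptions for the example. The point is simply that the same 128-bit key both encrypts and decrypts, so an attacker who exfiltrates the key material gains full access to the plaintext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-128 is symmetric: one and the same key is used to encrypt and decrypt.
key = AESGCM.generate_key(bit_length=128)   # the single shared secret (128 bits)
nonce = os.urandom(12)                      # per-message nonce; must never repeat per key
aesgcm = AESGCM(key)

card_number = b"4111 1111 1111 1111"        # illustrative sample data
ciphertext = aesgcm.encrypt(nonce, card_number, None)

# Decryption requires exactly the same key used for encryption -- if both the
# ciphertext and the key material are stolen, the encryption offers no protection.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == card_number
```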

It is still unclear what role the date of September 10, 2018 mentioned above plays in this context, but at this point, the leak seems to have been closed. The press release reads as follows: "On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database in the United States. Marriott quickly engaged leading security experts to help determine what occurred. Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014."

Building trust must be the foundation of any business strategy. The first and most important starting point is to design corporate strategies in such a way that they take into account the importance of customer data and the protection of privacy. This involves both well-thought-out business processes and suitable technologies. Of course, this includes trustworthy storage and processing of personal data. Evidence of this must be provided to many stakeholders, including the relevant data protection authorities and the users themselves.

So first and foremost it is about trust as a central concept in the relationship between companies and their customers. However, the trust of Starwood/Marriott customers could be fundamentally and lastingly destroyed.

The problem with trust is that it needs to be grown strategically over long periods of time, but as it is highly fragile, it can be destroyed within a very short period of time. This might be through a data breach, just like in this current case. Or through not building adequate solutions. Or through not communicating adequately. The real question is why many organizations have not yet started actively building this trusted relationship with their users/customers/consumers/employees. Awareness is rising, and security and privacy are increasingly moving into the focus not only of tech-savvy users but also of everyday customers.

Last but not least, as both a European and a customer of this hotel chain (and as a layman, not a lawyer), I really would like to ask the following question: under the GDPR, a data breach must be reported no later than 72 hours after it becomes known. With what we know so far, shouldn’t we have heard from Marriott much earlier and in some different form?

Cybersecurity Leadership Summit Berlin 2018 - Review

This month we launched our Cybersecurity Leadership Summit in Berlin. A pre-conference workshop entitled “Focusing Your Cybersecurity Investments: What Do You Really Need for Mitigating Your Cyber-risks?” was held on Monday. The workshop was both business-oriented and technical in nature. Contemporary CISOs and CIOs must apply risk management strategies, and it can be difficult to determine which cybersecurity projects should be prioritized. Leaders in attendance heard the latest applied research from Martin Kuppinger, Matthias Reinwarth, and Paul Simmonds.

Tuesday’s opening keynote was delivered by Martin Kuppinger on the topic of User Behavioral Analytics (UBA). UBA has become both the successor and an adjunct to SIEM, and as such serves as a link between traditional network-centric cybersecurity and identity management. Torsten George of Centrify pitched the importance of zero-trust concepts. Zero-trust can be seen as improving security by requiring risk-adaptive and continuous authentication. But trust is also a key component of things like federation architectures, so it won’t be going away altogether.

Innovation Night was held on Tuesday. In this event, a number of different speakers competed by describing how their products successfully incorporated Artificial Intelligence / Machine Learning techniques. The winner was Frederic Stallaert, Machine Learning Engineer/ Data Scientist at ML6. His topic was the adversarial uses of AI, and how to defend against them.

Here are some of the highlights. In the social engineering track, Enrico Frumento discussed the DOGANA project, the Advanced Social Engineering and Vulnerability Analysis Framework. They have been performing Social Driven Vulnerability Assessments and have produced interesting but discouraging results: in a recent study, 59% of users tested in an energy sector organization fell prey to a phishing training email. Malicious actors use every bit of information about their targets that is available to them, regardless of legality, while organizations providing anti-phishing training are encumbered by GDPR.

In the threat intelligence track, we had a number of good speakers and panelists. Ammi Virk presented on contextualizing threat intelligence. One of his excellent points was recognizing the “con in context”, i.e. guarding against bias, assumptions, and omissions. Context is essential in turning information into intelligence. This point was also made strongly by John Bryk in his session.

JC Gaillard posed a controversial question in his session, “Is the role of CISO outdated?”. He looked at some of the common problems CISOs face, such as being buried in an org chart, inadequate funding, and lack of authority to solve problems. His recommendations were to 1) elevate the CISO role and give it political power, 2) move the purely technical IT Security functions under the CIO or CTO, and 3) put CISOs on a level with newer positions such as CDOs and DPOs.

Internet Balkanization was a topic in the GDPR and Cybersecurity session. Daniel Schnok gave a thought-provoking presentation on the various political, economic, and technological factors that are putting up barriers and fragmenting the Internet today. For example, we know that countries such as China, Iran, and Russia have politically imposed barriers and content restrictions. GDPR is limiting the flow of personal information in Europe, and in some cases, overreaction to GDPR is impairing the flow of other types of data as well. The increasing consolidation of data under the large, US-based tech firms is also another example of balkanization.

In my final keynote I described the role that AI and ML are playing in cybersecurity today. These technologies are not merely nice-to-haves but are essential components, particularly for anti-malware, EDR/MDR, traffic analysis, etc. Nascent work on using ML techniques to facilitate understanding of access control patterns is underway by some vendors. These techniques may lead to a breakthrough in data governance in the mid-term. AI and ML based solutions are subject to attack (or “gaming”). Determined attackers can fool ML enhanced tools into missing detection of malware, for example. Lastly, Generative Adversarial Networks (GANs) serve as an example of how bad actors can use AI technologies as a means to advance attacks. GAN-based tools exist for password-cracking, steganography, and creating fake fingerprints for fooling biometric readers. In short, ML can help, but it can also be attacked and used to create more powerful cyber attacks.

We would like to thank our sponsors: iC Consult, Centrify, Cisco, One Identity, Palo Alto Networks, Airlock, Axiomatics, BigID, ForgeRock, Nexis, Ping Identity, SailPoint, MinerEye, PlainID, FireEye, Varonis, Thycotic, and Kaspersky Lab.

We will return to Berlin for CSLS 2019 on 12-14 November of next year.

IBM Acquires Red Hat: The AI potential

On October 28th, IBM announced its intention to acquire Red Hat. At $34 billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for an Open Source software company? I believe that this acquisition needs to be seen not just in terms of DevOps and Hybrid Cloud, but in the context of IBM’s view of where the business value from IT services will come from in the future. The acquisition provides near-term tactical benefits from Red Hat’s OpenShift platform and its participation in the Kubeflow project. It strengthens IBM’s capabilities to deliver the foundation for digital business transformation. However, digital business is increasingly based on AI delivered through the cloud. IBM recently announced a $240M investment in a 10-year research collaboration on AI with MIT, and this is representative of that strategy. It adds to the significant investments that IBM has already made in Watson, including setting up a dedicated division in 2016, as well as in cloud services.

Red Hat was founded in 1993 and in 1994 released the Red Hat version of Linux. Its portfolio evolved to include a complete development stack (JBoss) and, more recently, Red Hat OpenShift – a container- and microservices-based (Kubernetes) DevOps platform. Red Hat operates on a business model based on open-source software development within a community, professional quality assurance, and subscription-based customer support.

The synergy between IBM and Red Hat is clear. IBM has worked with Red Hat on Linux for many years and both have a commitment to Open Source software development. Both companies have a business model in which services are the key element. Although these are two fairly different types of services – Red Hat’s being subscription fees for software support, IBM’s being all types of services including consultancy and development – they both fit well into IBM’s overall business.

One critical factor is the need for tools to accelerate the development lifecycle for ML projects, which can be much less predictable than for conventional software projects. In the non-ML DevOps world, microservices and containers are the key technologies that have helped here. How can these technologies help with ML projects?

There are several differences between developing ML and coding applications. Specifically, ML uses training rather than coding, and, in principle, this in itself should accelerate the development of much more sophisticated ways to use data. The ML development lifecycle can be summarized as:

  • Obtain, prepare and label the data
  • Train the model
  • Test and refine the model
  • Deploy the model

While the processes involved in ML development differ from those of conventional DevOps, a microservices-based approach is potentially very helpful. ML training involves multiple parties working together, and microservices provide a way to orchestrate various types of functions, so that data scientists, experts and business users can simply use the capabilities without having to care about coding. A common platform based on microservices could also provide automated tracing of the data used and of training results, to improve traceability and auditing. It is here that there is great potential for IBM/Red Hat to deliver better solutions.
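As a simplified, concrete illustration of the four lifecycle stages listed above, here is a minimal sketch in Python using scikit-learn and joblib (both assumed to be available; the toy dataset and the model choice are purely illustrative and not tied to OpenShift or Kubeflow specifically):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import joblib

# 1. Obtain, prepare and label the data (here: a pre-labelled toy dataset)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Train the model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Test and refine the model (in practice: iterate on data, features, hyperparameters)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 4. Deploy the model (here: serialize it so a serving container can load it)
joblib.dump(model, "model.joblib")
```

In a containerized, microservices-based setup, each of these stages can run as its own orchestrated step, which is exactly where platforms like OpenShift and projects like Kubeflow aim to help.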

Red Hat OpenShift provides a DevOps environment to orchestrate the development to deployment workflow for Kubernetes based software. OpenShift is, therefore, a potential solution to some of the complexities of ML development. Red Hat OpenShift with Kubernetes has the potential to enable a data scientist to train and query models as well as to deploy a containerized ML stack on-premises or in the cloud.

In addition, Red Hat is a participant in the Kubeflow project, an Open Source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. The project's goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures.

In conclusion, the acquisition has strengthened IBM’s capabilities to deliver ML applications in the near term. These capabilities complement and extend IBM’s Watson and improve and accelerate their ability and the ability of their joint customers to create, test and deploy ML-based applications. They should be seen as part of a strategy towards a future where more and more value is delivered through AI-based solutions.

Read as well: IBM & Red Hat – and now?

IBM & Red Hat – And Now?

On October 28th, IBM announced its intention to acquire Red Hat. At $34 billion, this is the largest software acquisition ever. So why would IBM pay such a large amount of money for Red Hat? Not surprisingly, there were quite a few negative comments from parts of the Open Source community. However, there is logic behind the intended acquisition.

Aside from the potential it holds for some of IBM's strategic fields such as AI (Artificial Intelligence) and even security (which is among the IBM divisions showing the biggest growth), there is an obvious potential in the field of Hybrid Cloud as well as DevOps.

Red Hat has long been much more than just a Linux company. When you look at its portfolio, Red Hat is strong in middleware and in technologies supporting hybrid cloud environments. Technology stacks like JBoss, Ansible, OpenShift or OpenStack are well-established.

Red Hat has also long been a supplier preferred by enterprises. It has a strong position in growth markets that play an important role for businesses, for Cloud Service Providers (CSPs), and obviously for IBM itself. Red Hat empowers IBM to deliver better and broader services to its customers and to strengthen its role as a provider for Hybrid Cloud and DevOps, and thus its competitive position in the battle with companies such as AWS, Microsoft, or Oracle. IBM, in turn, allows Red Hat to scale its business by providing both the organizational structure to grow and a global services team and infrastructure.

From our perspective, there is little risk that Red Hat will lose a significant share of its current business – they are already an enterprise player and selling to enterprise customers, and IBM will strengthen not weaken them.

As with every acquisition, this one also brings some risk for customers. There is some overlap in certain parts of the portfolio, particularly around managing hybrid cloud environments (e.g. Cloud Foundry and OpenShift). While this might affect some customers, the overall risk for customers appears to be limited. On the other hand, the joint potential to support businesses in their Digital Transformation is significant. IBM can increase its offerings and attractiveness for Hybrid Cloud and DevOps, fostered by strong security and with interesting potential for new fields such as AI.

The only question will be whether the price tag for Red Hat is too high. While there is huge potential, the combined IBM and Red Hat will still need to monetize it.

Read as well: IBM Acquires Red Hat: The AI potential
