KuppingerCole Blog

Data Security and Governance (DSG) for Big Data and BI

Today, organizations are capturing trillions of bytes of data every day on their employees, consumers, services and operations through multiple sources and data streams. As organizations explore new ways to collect more data, the increased use of a variety of consumer devices and embedded sensors continues to fuel this exponential data growth. Large pools of data, often referred to as data lakes, are created as a result of this massive data aggregation, collection and storage – which remains the easiest of all processes in a Big Data and BI value chain.

What’s concerning is the complete ignorance of data owners, data privacy officers and security leaders towards a defined scope for the collection and use of this data. Very frequently, not only is the scope for use of this data poorly defined, but the legal implications that might arise from non-compliant use of this data remain unknown or are ignored in broad daylight.

An example that recently made it to the news was Facebook’s storage of millions of user passwords in clear text. There was no data breach involved, nor were the passwords abused, but ignoring the fundamentals of data encryption puts Facebook in an undeniably defiant position against cybersecurity basics. The absence of controls restricting users’ access to sensitive customer data further violated data privacy and security norms, leaving the passwords freely accessible, and open to potential abuse, by some 20,000 Facebook employees.

It is important for data owners, privacy officers and security leaders to know what data they have in order to classify, analyze and protect it. Obviously, you can’t protect what you don’t know you have in your possession. Therefore, data leaders need a continually updated catalogue of data assets, data sources and the data privacy and residency regulations that the data elements in their possession directly attract.
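
As a purely illustrative sketch (the field names and regulation tags below are assumptions, not a standard schema), such a catalogue can start as a simple structured record per data asset that makes basic governance questions answerable:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataAsset:
        """One entry in a (hypothetical) data catalogue."""
        name: str              # e.g. "customer_profiles"
        source: str            # originating system or data stream
        classification: str    # e.g. "public", "internal", "PII"
        residency: str         # where the data is physically stored
        regulations: List[str] = field(default_factory=list)  # regulations this data attracts

    catalogue = [
        DataAsset("customer_profiles", "CRM export", "PII", "EU (Frankfurt)", ["GDPR"]),
        DataAsset("clickstream_events", "web tracker", "internal", "US (Oregon)", ["CCPA"]),
    ]

    # A continually updated catalogue makes questions like "where is our PII?" answerable:
    print([a.name for a in catalogue if a.classification == "PII"])   # ['customer_profiles']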

Most Big Data environments comprise massive data sets of structured, unstructured and semi-structured data that can’t be processed through traditional database and software techniques. This distributed processing of data across unsecured processing nodes puts the data at risk, as the interactions between the distributed nodes are not secured. A lack of visibility into the information flows, particularly for unstructured data, leads to inconsistent access policies.

Business Intelligence platforms, on the other hand, are increasingly offering capabilities such as self-service data modeling, data mining and dynamic data content sharing – all of which only exacerbates the problem of understanding the data flows and complying with data privacy and residency regulations.

Most data security tools, including database security and IAM tools, only cater to part of the problem and have their own limitations. With the massive collection of data through multiple data sources, including third-party data streams, it becomes increasingly important for CIOs, CISOs and CDOs to implement effective data security and governance (DSG) for Big Data and BI platforms to gain the required visibility and the appropriate level of control over the data flowing through enterprise systems, applications and databases.

Some security tools and technologies that are commonly in use and can be extended to certain components within a Big Data or BI platform are:

  • Database Security
  • Data Discovery & Classification
  • Database & Data Encryption
  • UBA (User Behaviour Analytics)
  • Data Masking & Tokenization
  • Data Virtualization
  • IGA (Identity Governance & Administration)
  • PAM (Privileged Access Management)
  • Dynamic Authorization Management
  • DLP (Data Leakage Prevention)
  • API (Application Programming Interface) Security

Each of these technologies has specific limitations in addressing the broader security requirements of a Big Data and BI platform. However, using them wisely and selectively for the right Big Data and BI component potentially reduces the risks of data espionage and misuse arising from these components and thereby contributes to the overall security posture of the environment.

Data governance for Big Data and BI is fast becoming an urgent requirement and has largely been absent from existing IGA tools. These tools provide basic access governance, mostly for structured data, but lack built-in capabilities to support the complex access governance requirements of massive unstructured data and do not support the multitude of data dimensions required for driving authorization and access control, including access requests and approvals at a granular level.

It is therefore recommended that security leaders work with application and data owners to understand the data flows and authorization requirements of their Big Data and BI environments. Besides practicing standard data sanitization and encryption, security leaders are advised to evaluate the right set of existing data security technologies to meet the urgent Big Data and BI security requirements and to build out additional security capabilities in the long term.

We at KuppingerCole deliver our standardized Strategy Compass and Portfolio Compass methodology to help security leaders assess their Big Data and BI security requirements and identify priorities. The methodology also helps leaders rate available security technologies against these priorities – eventually providing strong and justifiable recommendations for the right set of technologies. Please get in touch with our sales team for more information on relevant research and how we can help you in your plans to secure your Big Data and BI environment.

Smart Manufacturing: Locking the Doors You've Left Open When Connecting Your Factory Floor

Smart Manufacturing or, as the Germans tend to say, Industry 4.0, has already become a reality for virtually every manufacturing business. However, as recently demonstrated by the attack on Norsk Hydro, this evolution comes at a price: it creates and opens doors for attackers that are not easy to close again.

These new challenges are not a surprise when looking at what the quintessence of Smart Manufacturing is from a security perspective. Smart Manufacturing is about connecting business processes to manufacturing processes or, in other words, the (business) value chain to the physical processes (or process chains) on the factory floor.

The factory floor saw some cyber-attacks even before Smart Manufacturing became popular. However, these were rare attacks, some of them highly targeted at specific industries. Stuxnet, while created in the age of Smart Manufacturing, is an example of such an attack targeted at non-connected environments, in that case nuclear facilities.

In contrast, cyber-attacks on business IT environments are common, with numerous established attack vectors, but also a high degree of “innovation” in the attacks. Smart Manufacturing, by connecting these two environments, opens new doors – at the network level as well as at the application layer. The quintessence of Smart Manufacturing, from the IT perspective, is thus “connecting everything = everything is under attack”. Smart Manufacturing extends the reach of cybercriminals.

But how to lock these doors again? It all starts with communication, and communication starts with a common language. The most important words here are not SCADA or ICS or the like, but “safety” and “security”. Manufacturing is driven by safety; IT is driven by security. Both can align, but both also need to understand the differences and how one affects the other. Machines that are under attack due to security issues might cause safety issues. Beyond that, aspects such as availability differ in relevance and characteristics between the OT (Operational Technology) and IT worlds. If an HR system is down for a day, that is annoying, but most people will not notice. If a production line is down for a day, that might cause massive costs.

Thus, as always, it begins with people – knowing, understanding, and respecting each other – and processes. The latter includes risk management, incident handling, etc. But, as is also common, there is a need for technology (or tools). Basically, this involves a combination of two groups of tools: specific solutions for OT networks, such as unidirectional gateways for SCADA environments, and the well-thought-out use of standard security technologies. This includes Patch Management, which is more complex in OT environments due to the restrictions regarding availability and planned downtimes. It includes the use of Security Intelligence Platforms and Threat Intelligence to monitor and analyze what is happening in such environments and to identify anomalies and potential attacks. It also includes various IAM (Identity & Access Management) capabilities. Enterprise Single Sign-On, while no longer a hyped technology, might help in moving from open terminals to individual access, using fast user switching as is common in healthcare environments. Privileged Access Management might help in restricting privileged user access to critical systems. Identity Provisioning can be used to manage users and their access to such environments.

There are many technologies from IT Security that can help in locking the doors in OT environments again. It is about time for people from OT and IT to start working together, communicating and learning from each other. Smart Manufacturing is here to stay – now it is time to do it right, not only from a business but also from a security perspective.

Figure: Connecting Everything = Everything is Under Attack

There Is a Price to Pay for Using the Shiny, Bright Cloud Service

One of the slides I use most frequently these days is about Identity Brokers or Identity Fabrics, which manage the access of everyone to every service. This slide is based on recent experience from several customer advisories, with these customers needing to connect an ever-increasing number of users to an ever-increasing number (and complexity) of services, applications, and systems.

This reflects the complex reality of most businesses. Aside from the few “cloud born” businesses that don’t have factory floors, large businesses commonly have a history in their IT. Calling this “legacy” ignores that many of these platforms deliver essential capabilities to run the business. They can neither be replaced easily, nor are there always simple “cloud born” alternatives that deliver even the essential capabilities. Businesses must check whether all capabilities of existing tools are essential. Simple answer: they are not. Complex answer: not all; but identifying and deciding on the essentials is not that easy. Thus, businesses today just can’t do all they need with the shiny, bright cloud services that are hyped.

There are two aspects to consider: one is the positive side of maturity (yes, there is a downside: being overloaded with features, monolithic, hard to maintain, …), the other is the need to support an existing environment of services, applications, and systems ranging from public cloud services to on-premises applications that might even rely on a mainframe.

When looking at the hyped cloud services, they always start lean – in the positive sense of being not overly complex, overloaded with features, hard to maintain, etc. Unfortunately, these services also start lean in the sense of focusing on some key features, but frequently falling short in support for the more complex challenges such as connecting to on-premises systems or coming with strong security capabilities.

Does that mean you shouldn’t look for innovative cloud services? No, on the contrary, they can be good options in many areas. But keep in mind that there might be a price to pay for capabilities. If these are not essential, that’s fine. If you consider them essential, you best first check whether they really are. If they remain essential after that check, think about how to deal with that. Can you integrate with existing tools? Will these capabilities come soon, anyway? Or will you finally end up with a shiny, bright point solution or, even worse, a zoo of such shiny, bright tools?

I’m an advocate of the shift to the cloud. And I believe in the need to get rid of many of the perceived essential capabilities that aren’t essential. But we should not be naïve regarding the hybrid reality of the businesses we need to support. That is the complex part when building services: integrating and supporting the hybrid IT. Just know the price and do it right (which equals “well-thought-out” here).

Figure: Identity Fabrics: Connecting every user to every service

Oslo, We Have a Problem!

As you have certainly already heard, Norsk Hydro, one of the world’s largest aluminum manufacturers and the second biggest hydropower producer in Norway, suffered a massive cyberattack earlier today. According to a very short statement issued by the company, the attack has impacted operations in several of its business areas. To maintain the safety and continuity of their industrial processes, many of the operations had to be switched to manual mode.

The details of the incident are still pretty sparse, but according to the statement at their press conference, the company may have been hit by a ransomware attack. Researchers are currently speculating that it was most likely LockerGoga, a strain of malware that affected the French company Altran Technologies back in January. This particular strain is notable for having been signed with a valid digital certificate, although that certificate has since been revoked. Also, only a few antimalware products are currently able to detect and block it.

It appears that the IT people at Norsk Hydro are currently trying to contain the fallout from the attack, including asking their employees not to turn on their computers and even shutting down the corporate website. Multiple shifts are working manually at the production facilities to ensure that there is no danger to people’s safety and to minimize financial impact.

We will hopefully see more details about the incident later, but what can we learn from Norsk Hydro’s initial response? First and foremost, we have another confirmation that this kind of incident can happen to anybody. No company, regardless of its industry, size or security budget, can assume that its business or industrial networks are immune to such attacks, or that it already has controls in place that defend against all possible security risks.

Second, here we have another textbook example of how not to handle public relations during a security incident. We can assume that a company of that scale should have at least some kind of plan for worst-case scenarios like this – but does it go beyond playbooks for security experts? Have the company’s executives ever been trained to prepare for such a level of media attention? And whose idea was it anyway to limit public communications to a Facebook page?

Studies in other countries (like this report from the UK government) indicate that companies are shockingly unprepared for such occasions, with many lacking even a basic incident response plan. However, even having one on paper does not guarantee that everything will go according to it. The key to effective incident management is preparation, and this should include awareness among all the people involved, clearly defined roles and responsibilities, access to external experts if needed, but above anything else – practice!

KuppingerCole’s top three recommendations would be the following:

  1. Be prepared! You must have an incident response plan that covers not just the IT aspects of a cyberattack, but also the organizational, legal, financial and public relations means of dealing with its fallout. It is essential that the company’s senior executives are involved in its design and rehearsals, since they will be front and center of any actual operation.
  2. Invest in the right technologies and products to reduce the impact of cyber incidents, as well as in those that prevent them from happening in the first place. Keep in mind, however, that no security tool vendor can do the job of assessing the severity and likelihood of your own business risks. Also, always have a backup set of tools and even “backup people” ready to ensure that essential business operations can continue even during a full shutdown.
  3. You will need help from specialists in multiple areas ranging from cyber forensics to PR, and most companies do not have all those skills internally. Look for partnerships with external experts, and do it before the incident occurs.

If you need neutral and independent advice, we are here to assist you as well!

Ignorance is Risk

#RSAC2019 is in the history books, and thanks to the expansion of the Moscone Center, there was ample space in the expo halls to house vendor booths more comfortably. In fact, there seemed to be a record number of exhibitors this year. As always, new IAM and cybersecurity products and services make their debut at RSAC.

Despite the extra room, it can be difficult for security practitioners and executives to navigate the show floor. Some plan ahead and make maps of which booths to visit; others walk from aisle 100 to the end. It can take a good deal of time to peruse and discover what’s new. But most difficult of all is digesting what we’ve seen and heard, considering it in a business context, and prioritizing possible improvement projects.

Security practitioners tend to hit the booths of vendors they have worked with, those with competing products, and others in their areas of specialty, including startups. For example, an identity architect will likely keep on walking past the “next gen” anti-malware and firewall booths but will stop at the booth offering a new identity proofing service. If a product does something novel or perhaps better than their current vendor’s product, they’ll know it and be open to it, even if it’s a small vendor and it means managing another product or service.

Executives gravitate toward the stack vendors in the front and middle, ignoring the startups on the sides and back. [It’s also increasingly likely execs will have meetings with specific vendors in the hotels surrounding Moscone, and not even set foot in the halls.] Why? IT execs and particularly CISOs are concerned with reducing complexity as well as securing the enterprise. A few stack vendors with consolidated functionality are easier to manage than dozens of point solutions.

Who is right? Well, it depends. Sometimes both, sometimes neither. It depends on knowing your cyber risk in relation to your business and understanding which technology enhancements will decrease your cyber risk and by approximately how much. Oftentimes practitioners and executives disagree on the cyber risk analysis and priorities set as a result.

Risk is the conjunction of consequence and likelihood. At RSAC and other conferences we hear anecdotes of consequences and see products that reduce the likelihood and severity of those consequences. Executives and practitioners alike have to ask, “are the threats addressed by product X something we realistically face?” If not, implementing it won’t reduce your cyber risk. And if there are two or more similar products, which one offers the greatest possible risk reduction?
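
To make that comparison concrete, here is a deliberately simplified sketch in Python; the threats, numbers and products are invented for illustration, and real risk models are considerably richer. It treats risk as likelihood multiplied by consequence and compares the risk reduction two hypothetical products would deliver:

    # Toy risk model: risk = likelihood (0..1) * consequence (monetary impact).
    threats = {
        "ransomware":   {"likelihood": 0.30, "consequence": 2_000_000},
        "zero-day APT": {"likelihood": 0.02, "consequence": 5_000_000},
    }

    def total_risk(threats):
        return sum(t["likelihood"] * t["consequence"] for t in threats.values())

    def risk_after(threats, threat_name, reduction_factor):
        """Total risk if a product scales the likelihood of one threat by reduction_factor."""
        adjusted = {name: dict(values) for name, values in threats.items()}
        adjusted[threat_name]["likelihood"] *= reduction_factor
        return total_risk(adjusted)

    print("baseline risk: ", total_risk(threats))                       # 700000.0
    print("with product X:", risk_after(threats, "ransomware", 0.5))    # 400000.0
    print("with product Y:", risk_after(threats, "zero-day APT", 0.5))  # 650000.0

In this toy example, product X (addressing the likely threat) removes far more risk than product Y (addressing the unlikely one), which is exactly the kind of prioritization question decision-makers need to answer.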

The biggest risk is that the decision-makers don’t truly understand the threats and risks they face. There are cases where SMBs have built defenses against zero-day APTs that will never come their way yet have neglected to automate patch management or user de-provisioning. In other cases, a few big enterprises have naively dismissed the possibility that they could be the target of corporate or foreign state espionage and failed to protect against such attacks.

The riskiest time for organizations is the period when executive leadership changes and for 12-18 months afterward, or even longer. If an organization brings in a CIO or CISO from a different industry, it takes time for the person to learn the lay of the land and the unique challenges in which that organization operates. Long-held strategies and roadmaps get re-evaluated and changed. Mid-level managers and practitioners may leave during this time. That org’s overall cybersecurity posture is weakened during the transition time. Adversaries know this too.

Risk is a difficult subject for humans to grasp. No one gets it right all the time. Risk involves processing probabilities, and our brains didn’t really evolve to do that well. For an excellent in-depth look at that subject, read Leonard Mlodinow’s book The Drunkard’s Walk.

External risk assessments and benchmarks can be good mechanisms to overcome these circumstances: when tech teams and management disagree on priorities, when one or more parties is unsure of the likelihood of threats and risks, and when executive leadership changes. Having an objective view from advisors experienced in your particular industry can facilitate the re-alignment of tactics and strategies that can reduce cyber and overall risk. For information on the types of assessments and benchmarking KuppingerCole offers, see our advisory offerings.

Building Trust by Design

Trust has somehow become a marketing buzzword recently. There is a lot of talk about “redefining trust”, “trust technologies” or even “trustless models” (the latter is usually applied to Blockchain, of course). To me, this has always sounded… weird.

After all, trust is the foundation of the very society we live in, the key notion underlying the “social contract” that allows individuals to coexist in a mutually beneficial way. For businesses, trust has always been the combined result of two crucial driving forces – reputation and regulation. Gaining a trustworthy reputation takes time, but ruining it can be instantaneous – and it is usually in a business’s best interest not to cheat its customers, or at least not to get caught (and that’s exactly where regulation comes into play!). Through the lengthy process of trial and error, we have more or less figured out how to maintain trust in traditional “tangible” businesses. And then the Digital Transformation happened.

Unfortunately, the dawn of the digital era has not only enabled many exciting new business models but also completely shattered the existing checks and balances. On one hand, the growing complexity of IT infrastructures and the resulting skills shortage have made sensitive digital data much more vulnerable to cyberattacks and breaches. On the other hand, unburdened by regulations and free from public scrutiny, many companies have decided that the lucrative business of hoarding and reselling personal information is worth more than any moral obligation towards their customers. In a way, the digital transformation has brought back the Wild West mentality to modern businesses – complete with gangs of outlaws, bounty hunters, and snake oil peddlers…

All this has led to a substantial erosion of public trust – between another high-profile data breach and a political scandal about harvesting personal data, people no longer know whom to trust. From banks and retailers to social media and tech companies – this “trust meltdown” isn’t just bad publicity, it leads to substantial brand damage and financial losses. The recent introduction of strict data protection regulations like GDPR, with their massive fines for privacy violations, is a sign that legislation is finally catching up, but will compliance alone fix the trust issue? What other methods and technologies can companies utilize to restore their reputations?

Well, the first and foremost measure is always transparency and open communication with customers. And this isn’t just limited to breach disclosure – on the contrary, companies must demonstrate their willingness to improve data protection and educate customers about the hidden challenges of the “digital society”. Another obvious approach is simply minimizing the personal data collected from customers and implementing proper consent management. Sure, this is already one of the primary stipulations of regulations like GDPR, but compliance isn’t even the primary benefit here: for many companies, the cost savings on data protection and the reputation improvements alone will already outweigh the potential (and constantly dwindling) profits from collecting more PII than necessary.

Finally, we come to the notion of security and privacy “by design”. This term has also become a buzzword for security vendors eager to sell you another data protection or cybersecurity solution. Again, it’s important to stress that just purchasing a security product does not automatically make a business more secure and thus more trustworthy. However, incorporating certain security- and privacy-enhancing technologies into the very fabric of your business processes may, in fact, bring noticeable improvements, and not just to your company’s public reputation.

Perhaps the most obvious example of such a technology is encryption. It’s ubiquitous, cheap to implement and gives you a warm feeling of safety, right? Yes, but making encryption truly inclusive and end-to-end, ensuring that it covers all environments from databases to cloud services, and, last but not least, that the keys are managed properly, is not an easy challenge. However, to make data-centric security the foundation of your digital business, you need to go deeper still. Without identity, modern security simply cannot fulfill its potential, so you’ll need to add dynamic centralized access control to the mix. And then security monitoring and intelligence with a pinch of AI. Thus, step by step, you’ll eventually reach the holy grail of modern IT – Zero Trust (wait, weren’t we going to boost trust, not get rid of it? Alas, that’s the misleading nature of many popular buzzwords nowadays).
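
As a minimal sketch of the first step only, using the widely used Python cryptography package and assuming the key comes from a managed key service (the load_key_from_kms helper below is a hypothetical placeholder), the point is that the application handles ciphertext while key management stays outside the code:

    from cryptography.fernet import Fernet

    def load_key_from_kms() -> bytes:
        # Hypothetical placeholder: in practice the key would be fetched from a
        # managed key service (KMS/HSM) with access control and rotation,
        # never generated or hard-coded inside the application.
        return Fernet.generate_key()

    cipher = Fernet(load_key_from_kms())

    record = b'{"customer_id": 42, "email": "jane@example.com"}'
    token = cipher.encrypt(record)            # ciphertext safe to store in a database or cloud service
    assert cipher.decrypt(token) == record    # readable only with access to the managed key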

For software development companies, investing in security by design can look complicated at first, too. From source code testing to various application hardening techniques to API security – writing secure applications is hard, and modern technologies like containers and microservices make it even harder, don’t they? This could not be further from the truth, however: modern development methodologies like DevOps and DevSecOps in fact focus on reducing the strain on programmers with intelligent automation, unified architectures across hybrid environments, and a better experience for users, who are learning to appreciate programs that do not break under high load or cyberattacks.

But it does not even have to be that complicated. Consider Consumer Identity and Access Management platforms, for example. Replacing a homegrown user management system with such a platform not only dramatically improves the experience for your current and potential customers – with built-in privacy and consent management features, it also gives users better control over their online identities, boosting their trust considerably. And in the end, you get to know your customers better while reducing your own investments into IT infrastructure and operations. It can’t really get better than this.

You see, trust, privacy, and security don’t have to be a liability and a financial burden. With an open mind and a solid strategy, even the harshest compliance regulations can be turned into new business enablers, cost-saving opportunities and powerful messages to the public. And we are always here to support you on this journey.

The Wrong Click: It Can Happen to Anyone of Us

AI Myths, Reality and Challenges

The dream of being able to create systems that can simulate human thought and behaviour is not new. Now that this dream appears to be coming closer to reality, there is both excitement and alarm. Famously, in 2014 Prof. Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race”. Should we be alarmed by these developments, and what does this mean in practice today?

The origins of today’s AI (Artificial Intelligence) can be traced back to the seminal work on computers by Dr Alan Turing. He proposed an experiment that became known as the “Turing Test”, to define the standard for a machine to be called "intelligent". A computer could only be said to "think" if a human was not able to distinguish it from a human being through a conversation with it.

The theoretical work that underpins today’s AI and ML (Machine Learning) was developed in the 1940s and 1950s. The early computers of that era were slow and could only store limited amounts of data, which restricted what could practically be implemented. This has now changed – the cloud provides the storage for vast amounts of data and the computing power needed for ML.

The theoretical basis for ML stems from work published in 1943 by Warren McCulloch and Walter Pitts on a computational model for neural networks based on mathematics and algorithms called threshold logic. Artificial neural networks provide a framework for machine learning algorithms to learn based on examples without being formally programmed. This learning needs large amounts of data and the significant computing power which the cloud can provide.
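
To illustrate the threshold logic idea (a toy sketch of a McCulloch-Pitts style unit, not how modern ML frameworks work), a neuron simply fires when the weighted sum of its binary inputs reaches a threshold, and with suitable weights it reproduces elementary logic functions:

    def threshold_neuron(inputs, weights, threshold):
        """McCulloch-Pitts style unit: output 1 if the weighted input sum reaches the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With weights of 1 and a threshold of 2, the unit computes logical AND of two inputs;
    # lowering the threshold to 1 turns the same unit into logical OR.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, threshold_neuron((a, b), (1, 1), threshold=2))

Networks of such units, with weights learned from examples rather than set by hand, are the conceptual ancestors of today’s artificial neural networks.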

Analysing the vast amount of data now available in the cloud creates its own challenges, and ML provides a potential solution to these. Normal statistical approaches may not be capable of spotting patterns that a human would see, and programming individual analyses is laborious and slow. ML provides a way to supercharge the human ability to analyse data. However, it changes the development cycle from programming to training based on curated examples overseen by human trainers. Self-learning systems may provide a way around the programming bottleneck. However, the training-based development cycle creates new challenges around testing, auditing and assurance.

ML has also provided a way to enhance algorithmic approaches to understanding visual and auditory data. It has for example enabled facial recognition systems as well as chatbots for voice-based user interactions. However, ML is only as good as the training and is not able to provide explanations for the conclusions that it reaches. This leads to the risk of adversarial attacks – where a third party spots a weakness in the training and exploits this to subvert the system. However, it has been applied very successfully to visual component inspection in manufacturing where it is faster and more accurate than a human.

One significant challenge is how to avoid bias – there are several reported examples of bias in facial recognition systems. Bias can come from several sources. There may be insufficient data to provide a representative sample. The data may have been consciously or unconsciously chosen in a way that introduces bias. This latter is difficult to avoid since every human is part of a culture which is inherently founded on a set of shared beliefs and behaviours which may not be the same as in other cultures.

Another problem is one of explanation – ML systems are not usually capable of providing an explanation for their conclusions. This makes training ML doubly difficult because when the system being trained gets the wrong answer it is hard to figure out why. The trainer needs to know this to correct the error. In use, an explanation may be required to justify a life-changing decision to the person that it affects, to provide the confidence needed to invest in a project based on a projection, or to justify why a decision was taken in a court of law.

A third problem is that ML systems do not have what most people would call “common sense”. This is because currently each is narrowly focussed on one specialized problem. Common sense comes from a much wider understanding of the world and allows a human to recognize and discard what may appear to be a logical conclusion because, in the wider context, it is clearly stupid. This was apparent when Microsoft released a chatbot that was supposed to train itself but did not recognize mischievous behaviour.

Figure: AI Myths, Reality and Challenges

In conclusion, AI systems are evolving but they have not yet reached the state portrayed in popular science fiction. ML is ready for practical application, and major vendors offer tools to support this. The problems where AI can be applied today can be described in two dimensions – the scope of knowledge required and the need for explanation. Note that the need for explanation is related to the need for legal justification or arises where the potential consequences of mistakes are high.

Organizations are recommended to look for applications that fit the green area in the diagram and to use caution when considering those that would lie in the amber areas. The red area is still experimental and should only be considered for research.

For more information on this subject attend the AI track at EIC in Munich in May 2019.

Ledger for the Masses: The Blockchain Has Come to Stay

Hype topics are important. They are important for vendors, startups, journalists, consultants, analysts, IT architects and many more. The problem with hypes is that they have an expiration date. Who remembers 4GL or CASE tools as an exciting discussion topic in IT departments? Well, exactly, that's the point...

From that expiration date on, they either have to be used for some very good purposes within a reasonable period of time, or they turn out to be hot air. There have been quite a few hype topics lately. Think for example of DevOps, Machine Learning, Artificial Intelligence, IoT, Containers and Microservices, Serverless Computing, and the Blockchain. All of these will be evaluated against their impact in the real world. The Blockchain can even be called a prototype for hype topics. The basic concept of trust in hostile environments through technology and the implementation of cryptocurrencies laid the groundwork for an unparalleled hype. However, there are still no compelling new implementations of solutions using this technology that any IT-savvy hype expert could refer to immediately.

This week I attended the Berlin AWS Summit as an analyst for KuppingerCole. Many important (including many hype) topics, which have now arrived in reality, were covered in the keynotes, combined with exciting success stories and AWS product and service offerings. These included migration to the cloud, big data, AI and ML, NoSQL databases, more AI and ML, containers and microservices, data lakes and analytics, even more AI and ML, and much more that is available for immediate use in the cloud and "as a service" to today's architects, developers and creators of new business models.

But if you weren't attentive just for a short moment, you could have missed the first appearance of the Blockchain topic: at the bottom of the presentation slide about databases in the column "Purpose-Built" you could find "Document-DBs", "Key-Value"-, "In-Memory-", "Time series-" and Graph databases as well as "Ledger: Amazon QLDB".

Even the word "Blockchain" was missing. A clear technological and conceptual categorization.

Behind this first dry mention is the concept of QLDB as a fully managed ledger solution in the AWS cloud, announced on the next presentation slide as "a transparent, immutable, cryptographically verifiable transaction log owned by a central trusted authority", which many purists will not even think of as a Blockchain. Apart from that, AWS also provides a preview of a fully managed Blockchain based on Hyperledger Fabric or Ethereum.
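
The underlying idea of a cryptographically verifiable transaction log can be shown in a few lines: each entry carries a hash of its predecessor, so any later tampering with history breaks the chain. This is a conceptual sketch only, not how QLDB is actually implemented:

    import hashlib, json

    def append(ledger, transaction):
        """Append a transaction, chaining it to the hash of the previous entry."""
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        body = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
        ledger.append({"tx": transaction, "prev": prev_hash,
                       "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(ledger):
        """Recompute the chain; any modified or reordered entry makes verification fail."""
        prev_hash = "0" * 64
        for entry in ledger:
            body = json.dumps({"tx": entry["tx"], "prev": prev_hash}, sort_keys=True)
            if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True

    ledger = []
    append(ledger, {"from": "A", "to": "B", "amount": 10})
    append(ledger, {"from": "B", "to": "C", "amount": 3})
    print(verify(ledger))            # True
    ledger[0]["tx"]["amount"] = 999  # tamper with history
    print(verify(ledger))            # False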

This development, which has of course already manifested before in several other comparable offers from competitors, is not the end, but probably only the beginning of the real Blockchain hype. It proves that there is demand for these conceptional and technological building blocks and that this technology has come to stay.

This corresponds directly and stunningly accurately to the development depicted in the trend compass for Blockchain and Blockchain Identity that Martin Kuppinger presented in this video blog post. Less hype, less investment volume, but much better understood.

Figure: The Trend Compass - Blockchain Hype

Like every good hype topic that is getting on in years, it has lost a bit of its striking attractiveness to laymen but gained in maturity for IT, security and governance professionals. In practice, it can now play a central role in the choice of adequate tools for the right areas of application. And we will certainly need trust in hostile environments through software, technology and processes in the future.

The QLDB product offered by AWS and the underlying concept cited above are certainly not the only possible and meaningful form of Blockchain or, more generally, of a decentralized, distributed and public digital ledger. But for an important class of applications of this still disruptive technology, another efficient and cost-effective implementation for real life (beyond the hype) becomes available. Having the Blockchain available in such an accessible form will potentially drive Blockchain, in a maturing market, towards the upper right sector of the trend compass, as an established technology with substantial market volume, even if it might not be explicitly called "Blockchain" in every context.

Web Access & Federation

An organization’s need to support communication and collaboration with external parties such as business partners and customers is just as essential a technical foundation today as it has been in the past. Web Access Management and Identity Federation are two vital and inseparable technologies that organizations can use to manage access to and from external systems, including cloud services, consistently. While the core Web Access Management and Identity Federation technologies have been well established for years, organizations still need a strategic approach to address the growing list of requirements that can support a Connected and Intelligent Enterprise.

New IT challenges are driving the shift in IT from a traditional, internal-facing approach towards an open IT infrastructure supporting this Connected and Intelligent Enterprise. At the core of these changes is the need to become more agile in an increasingly complex and competitive business environment. Because of this, business models have to adapt more rapidly, and organizations need to react more quickly to new attack vectors that are continually changing. Having a Connected Enterprise means that organizations have to deal with more and larger user populations than ever before. Given these new challenges, the technologies that help to support this complex and changing landscape include Cloud, Mobile, Social and Intelligent Computing.

As the changing workforce looks to work from anywhere on any device, the need to manage mobile devices is being placed on organizations. Among the other relevant technologies are new types of cloud-based directory services as well as various other kinds of cloud services, including Cloud Identity Services that give flexibility and control for both internal and external identities. Support for social logins such as Facebook, Google+, etc. is also needed and is now considered standard for established Cloud Service Providers. In addition to the foundational Access Management and Identity Federation capabilities, improvements to authentication and authorization technologies such as risk- and context-based Access Management, sometimes called “adaptive” authentication and authorization, are needed too.
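
A deliberately simplified sketch of what risk- and context-based (“adaptive”) authentication means in practice: the decision to allow, step up, or deny a login is derived from contextual signals rather than from a static rule. The signals, scores and thresholds below are illustrative assumptions, not those of any particular product:

    def adaptive_auth_decision(context):
        """Score a login attempt from contextual signals and map the score to an action."""
        score = 0
        if not context.get("known_device"):
            score += 30
        if context.get("country") not in context.get("usual_countries", []):
            score += 30
        if context.get("impossible_travel"):
            score += 40

        if score < 30:
            return "allow"
        if score < 70:
            return "step-up"   # e.g. require a second factor
        return "deny"

    print(adaptive_auth_decision({"known_device": True, "country": "DE",
                                  "usual_countries": ["DE"]}))   # allow
    print(adaptive_auth_decision({"known_device": False, "country": "BR",
                                  "usual_countries": ["DE"]}))   # step-up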

Figure: Overall Leadership rating for the Access Management and Federation market segment

In the market segment of Web Access Management and Identity Federation, KuppingerCole is seeing an evolutionary shift in vendor solutions towards the support of the Connected and Intelligent Enterprise in various degrees. In the latest Web Access Management and Identity Federation Leadership Compass, we evaluated 15 vendors in this market segment as depicted here in this overall leadership chart. So, when considering your organizational requirements for Web Access Management and Identity Federation, you should also think about how your IT infrastructure is connecting and intelligently adapting on-premise IT to the outer world in its many different and changing ways.

To get the latest information on the market, that includes detailed technical descriptions of the leading solutions, see our most recent Web Access Management and Identity Federation Leadership Compass.
