KuppingerCole Blog

Could Less Data Be More Data?

Data, a massive amount of data, seems to be the holy grail in building more sophisticated AIs, creating human-like chatbots and selling more products. But is more data actually better? With GDPR significantly limiting the way we generate intelligence by collecting personally identifiable data, what is next? How can we create a specific understanding of our customers, to exceed their expectations and needs, with less data?

Many of us collect anything we can get our hands on: personal information, behavioral data, and “soft” data that one might run through a natural language processing (NLP) program to examine interactions between devices and humans and pull meaning from conversations to drive sales. We believe the more we collect, the more we know. But is that belief true, or merely a belief?

Today, an estimated 2.7 zettabytes of data exist in the digital world, but only 0.5% of it is ever analyzed. With all of this data being collected and only a fraction of it being analyzed, it is no wonder that 85% of data science projects fail, according to a recent analysis by Gartner.

Why?

Data science is utterly complex. If you are familiar with the data science hierarchy of needs, AI and machine learning, you know that for a machine to process data and learn on its own, without our constant supervision, it needs massive amounts of quality data. And within that lies a primary question: even if you have an enormous amount of quality data, is it the right data to drive the insights we need to understand our customers?

A client of ours shared their dilemma, which might sound familiar to you. They collected all possible data points about their customers, data that was bias-free and clean.

“We know where the user came from, what they bought, how often they used our product, how much was paid, when they returned, etc. We created clusters within our database, segmented them, overlaid them with additional identifiable data about the user, and built our own ML algorithm so we could push the right marketing message and price point to existing and new customers to increase sales volume.”
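
The kind of pipeline the client describes can be made concrete with a minimal sketch, assuming scikit-learn and entirely invented feature names and toy data; none of this is the client's actual model:

```python
# Hypothetical sketch of the pipeline described above: cluster customers on
# behavioral features, then train a model to predict conversion.
# All column names and values are invented for illustration.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

customers = pd.DataFrame({
    "acquisition_channel": [0, 1, 2, 0, 1, 2, 0, 1],  # encoded traffic source
    "purchase_count":      [1, 5, 2, 8, 3, 1, 6, 4],
    "avg_order_value":     [20.0, 55.0, 30.0, 80.0, 42.0, 18.0, 60.0, 35.0],
    "days_since_return":   [10, 200, 45, 300, 90, 5, 150, 60],
    "converted":           [0, 1, 0, 1, 1, 0, 1, 0],  # responded to campaign?
})

features = customers.drop(columns="converted")

# Segment the customer base, then feed the segment back in as a feature.
customers["segment"] = KMeans(n_clusters=3, n_init=10,
                              random_state=0).fit_predict(features)

X_train, X_test, y_train, y_test = train_test_split(
    customers.drop(columns="converted"), customers["converted"],
    test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))
```

Technically this is a perfectly clean pipeline, which is exactly the point of the anecdote: if the features do not actually drive purchases, no amount of modeling rigor will make the predictions actionable.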

All this effort and all this data did not lead to increased sales; they still struggled to convert customers. Was their data actionable? Our client assumed that the data available was directly correlated with driving purchases. And here lies the dilemma with our assumptions and datasets: we believe the data we collect has something to do with the purchase, when in reality it often does not.

Solution?

Humans are “wired up” to use multiple factors to form a buying decision. Those decisions are not made in a vacuum, and they don’t happen purely online or offline. Each product, service or brand is surrounded by a set of customer factors and needs that play a vital role in the decision-making process, which in most cases has little or nothing to do with one’s demographics, lifestyle, the price of the product, etc.

This idea is based on the principles of behavioral economics, which explain that multiple factors (cognitive, social, environmental and economic) play a role in one’s decision process and directly shape how we decide what to spend our money on and how we choose between competing products and experiences.

All these factors combined allow customers to individually determine if a brand can fulfill their expectations or why they may prefer one brand over another.

Relevance?

Product preference is directly linked to market share in sales volume. An analysis by MASB found a direct linkage between brand preference and market share across 120 brands in 12 categories. So, if the data that is collected does not directly link to preference, how can any data model, ML algorithm, or AI be useful in stimulating sales?

Stephen Diorio, from Forbes, argues, “if more executives understood the economics of customer behavior and the financial power of brand preference they would be better armed to work with CMOs to generate better financial performance.” To remain competitive and stop the rat race for more data, many urge that “companies must apply the latest advances in decision science and behavioral economics to ensure that investment in market research and measurement will yield metrics that isolate the most critical drivers of brand preference.”

In the future, collecting data will become more complicated and, in some senses, limited by GDPR, CCPA, and other privacy laws worldwide. To get ahead of the curve, companies should quickly shift their perspective from gathering data about who, where, when, what and how, to a broader understanding of why customers prefer what they prefer and what drives those preferences. Understanding your customers from a behavioral economics point of view will give you an edge over the competition, show you how to exceed your shoppers’ expectations, and help you tilt their preference in your favor.

The Perfect Shot!

Shooting from the hip is easy, because it is fast and sounds like you’re making an impact. But do you hit the mark? When you study the ‘art of shooting’ a bit, you find there is a whole lot of practice to it: it takes time, and every shot is highly contextual. No soldier goes into battle without thorough preparation and training. The target, the terrain, the road in and the road out, the weather: it all plays a role in hitting the mark. Becoming really good is hard, takes a long time, and ultimately also depends on context. Yet it always beats shooting from the hip.

Every so often I talk to people in the field of Identity and Access Management, and within a minute I feel like I’m talking to a trigger-happy hip-shooter. I can’t help thinking that they’ve never seen a line of code of an IAM solution, never talked to the end user, and were never first responders to an incident or a breach. Because IAM is hard, complex and highly contextual. Yet it seems so simple to the outsider, because it’s about logging in, and how hard can that be, right? Everyone logs in, sometimes hundreds of times per day, sometimes without even realizing it (through SSO solutions, for example).

Identity and Access Management requires you to combine competencies and skills that you rarely need to combine in any other area of expertise.

  • The conversation with the business and executives needs to be simple yet clear. The complexities of IAM need to be hidden, because they will not be understood and will obscure any real question to, or decision by, the business. In these conversations the IAM expert needs to put themselves in the shoes of either the user (logging in, how hard can it be?) or the stakeholder (the project manager of a large IT project, requiring proper access and timely changes to authorizations). One can mention technology, but always from a use or management perspective.
  • Talking to the CISO and the security team, it’s about risks, threats and vulnerabilities, and how IAM can aid in reducing the attack surface, reducing issued permissions to a need-to-have basis, preventing segregation-of-duty conflicts, and monitoring actual use through user behaviour analytics. Often this conversation also includes audit and auditability of the IAM processes and solutions that are in place, and involves the risk managers and internal auditors. Technical detail can be part of the conversation, but always from a risk and security angle.
  • Engaging with architects and policy makers can be a challenge, since it requires a more conceptual approach to technology and IAM services. One should not immediately look at the short-term applicability of what is discussed, but much more at what is required and desirable in the longer term. Since these discussions also concern the guidelines and architectural boundaries that are defined, they can feel a bit restrictive. Yet when you understand them properly, as an IAM expert you can influence the architectural conditions in a way that benefits the service now and in the future. In addition, architects take a broad approach and (should) see IAM in the context of enterprise or IT architecture as well.
  • The conversations with colleagues in the IAM department itself are more detailed, be it with operational support processing requests and providing customer support, product owners, engineers (DevOps), service owners, customer representatives or managers. These are the internal conversations where the functional and technical conversations merge with the customer and management perspectives on IAM. Here the IAM experts not only need to understand what services they deliver and how technological solutions enable them, but especially how the people work together and what the ‘dot on the horizon’ is for everyone. Since most colleagues in an IAM department have deep expertise and knowledge, it is essential to engage with them from a single starting point that combines all perspectives on IAM. (For this, at Rabobank we have created four perspectives on employee IAM that are leading for everything we do.)
  • Talking to vendors of IAM solutions, it’s about technology, integration and benefits for the organization. Not all vendors are open to discussing a functional perspective on IAM first, but the good ones are. They understand that their technology serves a functional and business purpose, and that without that purpose the technology itself is just expensive and not useful. As an IAM expert you need to know your technology, but you also need to be skilled in vendor management, discussing potential solutions not only on the basis of a successful POC but also on long-term maintenance effort, integration with legacy environments, the effort of upgrades and the (always lurking) risk of takeovers. Some products ceased to exist after the vendor was taken over by another vendor with a different focus.

And I can imagine that I’m forgetting some of the conversations that take place around Identity and Access Management.

Is it possible that this is all one person? Highly unlikely. When dealing with IAM, the range and spread of skills and competencies is so wide that you need a team. So for IAM I come back to the same statement that was also made for digital: digital success depends on people, not on technology. It’s almost as if I hear Richard Branson speaking: ‘take care of your employees, they will take care of your …’. With a solid team that combines the right skills and is able to work together, you can fire the perfect shot. A team takes time to build, and the temptation to shoot quickly from the hip is always there. But I would urge you to start slow in order to go fast later. Focus on the people and the team, and they will move IAM forward.

Artificial Intelligence in Cybersecurity: Are We There Yet?

Artificial Intelligence (along with Machine Learning) seems to be the hottest buzzword in just about every segment of the IT industry nowadays, and not without reason. The very idea of teaching a machine to mimic the way humans think (but much, much quicker) without the need to develop millions of complex rules sounds amazing: instead, machine learning models are simply trained by feeding them with large amounts of carefully selected data.

There is, however, a subtle but crucial distinction between “thinking like a human” (which in academic circles is usually referred to as “Strong AI” and to this day remains largely a philosophical concept) and “performing intellectual tasks like a human”, which is the gist of Artificial General Intelligence (AGI). The latter is an active research field, with dozens of companies and academic institutions working on various practical applications of general AI. Much more prevalent, however, are the applications of Weak or “Narrow” Artificial Intelligence, which can only be trained to solve a single and rather narrow task, like language processing or image recognition.

Although the theoretical foundations of machine learning go back to the 1940s, only recently has a massive surge in available computing power, thanks to cloud services and specialized hardware, made it accessible to everyone. Thousands of startups are developing AI-powered solutions for various problems. Some of those, like intelligent classification of photos or virtual voice assistants, are already an integral part of our daily lives; others, like driverless cars, are expected to become reality in a few years.

AIs are already beating humans at games and even in public debates – surely they will soon replace us in other important fields, like cybersecurity? Well, this is exactly where reality often fails to match customer expectations fueled by the intense hype wave that still surrounds AI and machine learning. Looking at various truly amazing AI applications developed by companies like Google, IBM or Tesla, some customers tend to believe that sooner or later AIs are going to replace humans completely, at least in some less creative jobs.

When it comes to cybersecurity, it’s hard to blame them, really… As companies go through the digital transformation, they are facing new challenges: growing complexity of their IT infrastructures, massive amounts of sensitive data spread across multiple clouds, and the increasing shortage of skilled people to deal with them. Even large businesses with strong security teams cannot keep up with the latest cybersecurity risks.

Having AI as a potential replacement for overworked humans, ensuring that threats and breaches are detected and mitigated in real time without any manual forensic analysis and decision-making: that would be awesome, wouldn’t it? Alas, people waiting for solutions like that need a reality check.

First, artificial intelligence, at least in its practical definition, was never intended to replace humans, but rather to augment their powers by automating the most tedious and boring parts of their jobs and leaving more time for creative and productive tasks. Upgrading to AI-powered tools from traditional “not-so-smart” software products may feel like switching from pen and paper to a computer, but both just provide humans with better, more convenient tools to do their job faster and with less effort.

Second, even leaving all potential ethical consequences aside, there are several technological challenges that need to be addressed specifically for the field of cybersecurity.

  • Availability and quality of training data. The data required for training cybersecurity-related ML models almost always contains massive amounts of sensitive information (intellectual property, PII or otherwise strictly regulated data) which companies aren’t willing to share with security vendors.
  • Formal verification and testing of machine learning models is a massive challenge of its own. Making sure that an AI-based cybersecurity product does not misbehave under real-world conditions (or indeed under adversarial examples specifically crafted to deceive ML models) is something that vendors are still figuring out, and in many cases, this is only possible through a collaboration with customers.
  • While in many applications it’s perfectly fine to train a model once and then use it for years, the field of cybersecurity is constantly evolving, and threat models must be continuously updated, expanded and retrained on newly discovered threats.
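
To illustrate that last point, here is a minimal sketch of incremental retraining, assuming scikit-learn's streaming (partial_fit) API and purely synthetic stand-in data; a production pipeline would of course use curated, labeled threat telemetry:

```python
# Hypothetical sketch: updating a threat classifier as newly labeled
# samples arrive, instead of training once and freezing the model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(loss="log_loss")  # supports partial_fit for streaming updates

# Initial training batch: 10 features per sample; label 1 = malicious.
X0 = rng.normal(size=(500, 10))
y0 = rng.integers(0, 2, size=500)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# Later: newly discovered threats arrive as fresh labeled batches.
for day in range(7):
    X_new = rng.normal(size=(50, 10))
    y_new = rng.integers(0, 2, size=50)
    model.partial_fit(X_new, y_new)  # update without retraining from scratch
```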

Does it mean that AI cannot be used in cybersecurity? Not at all, and in fact, the market is already booming, with numerous AI/ML-powered cybersecurity solutions available right now – the solutions that aim to offer deeper, more holistic real-time visibility into the security posture of an organization across multiple IT environments; to provide intelligent assistance for human forensic analysts by making their job more productive; to help identify previously unknown threats. In other words, to augment but definitely not to replace humans!

Perhaps the most popular approach is applying Big Data Analytics methods to raw security data for detecting patterns or anomalies in network traffic flows, application activities or user behavior. This method has led to the creation of whole new market segments variously referred to as security intelligence platforms or next-generation SIEM. These tools manage to reduce the number of false positives and other noise generated by traditional SIEMs and provide a forensic analyst with a low number of context-enriched alerts ranked by risk scores and often accompanied by actionable mitigation recommendations.
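
The underlying idea can be shown in a minimal sketch, assuming scikit-learn and synthetic stand-in features; real platforms work on far richer telemetry, but the anomaly-detection principle is the same:

```python
# Hypothetical sketch: unsupervised anomaly detection over user activity,
# the kind of analysis behind UBA and next-generation SIEM tools.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per login event: [hour of day, MB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 1000),   # daytime logins
    rng.normal(50, 15, 1000),  # modest data transfers
    rng.poisson(0.2, 1000),    # rare authentication failures
])
suspicious = np.array([[3.0, 900.0, 8.0]])  # 3 a.m., huge transfer, many failures

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))  # -1 flags the event as anomalous
```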

Another class of AI solutions for cybersecurity is based around true cognitive technologies, such as language processing and semantic reasoning. Potential applications include generating structured threat intelligence from unstructured textual and multimedia data (ranging from academic research papers to criminal communications on the Dark Web), proactive protection against phishing attacks or, again, intelligent decision support for human experts. Alas, we have yet to see sufficiently mature products of this kind on the market.

It’s also worth noting that some vendors are already offering products bearing the “autonomous” label. However, customers should take such claims with a pinch of salt. Yes, products like the Oracle Autonomous Database or Darktrace’s autonomous cyber-defense platform are based on AI and are, to a degree, capable of automated mitigation of various security problems, but they are still dependent on their respective teams of experts ready to intervene if something does not go as planned. That’s why such solutions are only offered as a part of a managed service package – even the best “autonomous AIs” still need humans from time to time…

So, is Artificial Intelligence the solution for all current and future cybersecurity challenges? Perhaps, but please do not let over-expectations or fears affect your purchase decisions. Thanks to the ongoing developments in both narrow and general AI, we already have much better security tools than just a few years ago. Yet, when planning your future security strategy, you still must think in terms of risks and the capabilities needed to mitigate them, not in terms of technologies.

Also, don’t forget that cybercriminals can use AI to create better malware, too. In fact, things are just starting to get interesting!

Data Security and Governance (DSG) for Big Data and BI

Today, organizations are capturing trillions of bytes of data every day about their employees, consumers, services and operations through multiple sources and data streams. As organizations explore new ways to collect more data, the increased use of a variety of consumer devices and embedded sensors continues to fuel this exponential data growth. Large pools of data, often referred to as data lakes, are created as a result of this massive data aggregation, collection and storage, which remains the easiest of all processes in a Big Data and BI value chain.

What’s concerning is how often data owners, data privacy officers and security leaders ignore the need for a defined scope for the collection and use of this data. Very frequently, not only is the scope for the use of this data poorly defined, but the legal implications that might arise from non-compliant use remain unknown or are ignored in broad daylight.

An example that recently made the news was Facebook’s storage of millions of user passwords in clear text. There was no data breach involved, nor were the passwords reportedly abused, but ignoring such a fundamental of data protection outright puts Facebook in an undeniably defiant position against cybersecurity basics. The absence of controls restricting users’ access to sensitive customer data further violated data privacy and security norms, as the passwords could be freely accessed, and potentially abused, by some 20,000 Facebook employees.

It is important for data owners, privacy officers and security leaders to know what data they have in order to classify, analyze and protect it. Obviously, you can’t protect what you don’t know you have in your possession. Therefore, it’s necessary for data leaders to maintain a continually updated catalogue of data assets and data sources, along with the data privacy and residency regulations that the data elements in their possession directly attract.

Most Big Data environments comprise massive data sets of structured, unstructured and semi-structured data that can’t be processed through traditional database and software techniques. This distributed processing across processing nodes puts the data at risk, as the interactions between the distributed nodes are not secured. A lack of visibility into the information flows, particularly for unstructured data, leads to inconsistent access policies.

Business Intelligence platforms, on the other hand, increasingly offer capabilities such as self-service data modeling, data mining and dynamic data content sharing, all of which only exacerbates the problem of understanding the data flows and complying with data privacy and residency regulations.

Most data security tools, including database security and IAM tools, cater only to part of the problem and have their own limitations. With the massive collection of data through multiple data sources, including third-party data streams, it becomes increasingly important for CIOs, CISOs and CDOs to implement effective data security and governance (DSG) for Big Data and BI platforms to gain the required visibility and an appropriate level of control over the data flowing through enterprise systems, applications and databases.

Some security tools and technologies that are commonly in use and can be extended to certain components within a Big Data or BI platform are:

  • Database Security
  • Data Discovery & Classification
  • Database & Data Encryption
  • UBA (User Behaviour Analytics)
  • Data Masking & Tokenization (see the sketch after this list)
  • Data Virtualization
  • IGA (Identity Governance & Administration)
  • PAM (Privileged Access Management)
  • Dynamic Authorization Management
  • DLP (Data Leakage Prevention)
  • API (Application Programming Interface) Security
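
As one concrete illustration of two items on this list, here is a minimal sketch of display masking and reversible tokenization for a sensitive field; the in-memory vault is a deliberate simplification, and a real deployment would use a hardened token vault or format-preserving encryption:

```python
# Hypothetical sketch: irreversible display masking and reversible
# tokenization of a sensitive field (here, a payment card number).
import secrets

def mask(value: str, visible: int = 4) -> str:
    """Irreversible masking: keep only the last few characters visible."""
    return "*" * (len(value) - visible) + value[-visible:]

class TokenVault:
    """Reversible tokenization; an in-memory stand-in for a hardened vault."""
    def __init__(self):
        self._forward, self._reverse = {}, {}

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value], self._reverse[token] = token, value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
pan = "4111111111111111"
print(mask(pan))             # ************1111 -- safe for analytics and UIs
token = vault.tokenize(pan)  # stable surrogate for downstream systems
assert vault.detokenize(token) == pan
```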

Each of these technologies has specific limitations in addressing the broader security requirements of a Big Data and BI platform. However, using them wisely and selectively for the right Big Data and BI components reduces the risks of data espionage and misuse arising from those components, and thereby contributes to the overall security posture of the environment.

Data governance for Big Data and BI is fast becoming an urgent requirement, yet it has largely been absent from existing IGA tools. These tools provide basic access governance, mostly for structured data, but lack built-in capabilities to support the complex access governance requirements of massive unstructured data, and do not support the multitude of data dimensions required for driving authorizations and access control, including access requests and approvals at a granular level.

It is therefore recommended that security leaders work with application and data owners to understand the data flows and authorization requirements of the Big Data and BI environments. Besides practicing standard data sanitization and encryption, security leaders are advised to evaluate the right set of existing data security technologies to meet the urgent Big Data and BI security requirements and build on additional security capabilities in the long term.

We, at KuppingerCole, deliver our standardized Strategy Compass and Portfolio Compass methodology to help security leaders assess their Big Data and BI security requirements and identify the priorities. The methodology also helps leaders provide ratings to available security technologies based on these priorities – eventually providing strong and justifiable recommendations for use of the right set of technologies. Please get in touch with our sales team for more information on relevant research and how we can help you in your plans to secure your Big Data and BI environment.

Smart Manufacturing: Locking the Doors You've Left Open When Connecting Your Factory Floor

Smart Manufacturing or, as the Germans tend to say, Industry 4.0, has already become a reality for virtually any business in manufacturing. However, as recently demonstrated by the attack on Norsk Hydro, this evolution comes at a price: it creates and opens doors for attackers that are not easy to close again.

These new challenges are not a surprise when looking at what the quintessence of Smart Manufacturing is from a security perspective. Smart Manufacturing is about connecting business processes to manufacturing processes or, in other words, the (business) value chain to the physical processes (or process chains) on the factory floor.

The factory floor had seen some cyber-attacks even before Smart Manufacturing became popular. However, these were rare, some of them highly targeted at specific industries. Stuxnet, while created in the age of Smart Manufacturing, is an example of such an attack targeted at non-connected environments, in that case nuclear facilities.

In contrast, cyber-attacks on business IT environments are common and plentiful, with numerous established attack vectors but also a high degree of “innovation” in the attacks. Smart Manufacturing, by connecting these two environments, opens new doors for attackers, at the network level as well as at the application layer. The quintessence of Smart Manufacturing, from the IT perspective, is thus “connecting everything = everything is under attack”: Smart Manufacturing extends the reach of cybercriminals.

But how can these doors be locked again? It all starts with communication, and communication starts with a common language. The most important words here are not SCADA or ICS or the like, but “safety” and “security”. Manufacturing is driven by safety; IT is driven by security. Both can align, but both also need to understand the differences and how one affects the other. Machines that are under attack due to security issues might cause safety issues. Beyond that, aspects such as availability differ in relevance and characteristics between the OT (Operational Technology) and IT worlds. If an HR system is down for a day, that is annoying, but most people will not notice. If a production line is down for a day, that might cause massive costs.

Thus, as always, it begins with people (knowing, understanding, and respecting each other) and processes. The latter include risk management, incident handling, etc. But, as usual, there is also a need for technology (or tools). Basically, this involves a combination of two groups of tools: specific solutions for OT networks, such as unidirectional gateways for SCADA environments, and the well-thought-out use of standard security technologies. This includes Patch Management, which is more complex in OT environments due to the restrictions regarding availability and planned downtimes. It includes the use of Security Intelligence Platforms and Threat Intelligence to monitor and analyze what is happening in such environments and to identify anomalies and potential attacks. It also includes various IAM (Identity & Access Management) capabilities. Enterprise Single Sign-On, while no longer a hyped technology, might help in moving from open terminals to individual access, using fast user switching as in healthcare environments. Privileged Access Management might help in restricting privileged user access to critical systems. Identity Provisioning can be used to manage users and their access to such environments.

There are many technologies from IT security that can help lock the doors in OT environments again. It is about time for people from OT and IT to start working together, by communicating and learning from each other. Smart Manufacturing is here to stay; now it is time to do it right, not only from a business perspective but also from a security perspective.

Figure: Connecting Everything = Everything is Under Attack

There Is a Price to Pay for Using the Shiny, Bright Cloud Service

One of the slides I use most frequently these days is about Identity Brokers or Identity Fabrics, which manage the access of everyone to every service. This slide is based on recent experience from several customer advisories, where customers needed to connect an ever-increasing number of users to an ever-increasing number (and complexity) of services, applications, and systems.

This reflects the complex reality of most businesses. Aside from the few “cloud born” businesses that don’t have factory floors, large businesses commonly have a history in their IT. Calling this “legacy” ignores that many of these platforms deliver essential capabilities to run the business. They can neither be replaced easily, nor are there always simple “cloud born” alternatives that deliver even the essential capabilities. Businesses must check whether all capabilities of existing tools are essential. The simple answer: they are not. The complex answer: not all, but identifying and deciding on the essentials is not that easy. Thus, businesses today just can’t do everything they need with the shiny, bright cloud services that are hyped.

There are two aspects to consider: one is the positive side of maturity (yes, there is a downside: being overloaded with features, monolithic, hard to maintain, …), the other is the need to support an existing environment of services, applications, and systems, ranging from public cloud services to on-premises applications that might even rely on a mainframe.

When looking at the hyped cloud services, they always start lean – in the positive sense of being not overly complex, overloaded with features, hard to maintain, etc. Unfortunately, these services also start lean in the sense of focusing on some key features, but frequently falling short in support for the more complex challenges such as connecting to on-premises systems or coming with strong security capabilities.

Does that mean you shouldn’t look at innovative cloud services? No; on the contrary, they can be good options in many areas. But keep in mind that there might be a price to pay in capabilities. If the missing ones are not essential, that’s fine. If you consider them essential, you had best first check whether they really are. If they remain essential after that check, think about how to deal with it. Can you integrate with existing tools? Will these capabilities come soon anyway? Or will you end up with a shiny, bright point solution or, even worse, a zoo of such shiny, bright tools?

I’m an advocate of the shift to the cloud, and I believe in the need to get rid of many of the perceived essential capabilities that aren’t essential. But we should not be naïve regarding the hybrid reality of the businesses we need to support. That is the complex part of building services: integrating with and supporting hybrid IT. Just know the price and do it right (which here equals “well-thought-out”).

Figure: Identity Fabrics: Connecting every user to every service

Oslo, We Have a Problem!

As you have certainly already heard, Norsk Hydro, one of the world’s largest aluminum producers and the second biggest hydropower producer in Norway, suffered a massive cyberattack earlier today. According to a very short statement issued by the company, the attack has impacted operations in several of its business areas. To maintain the safety and continuity of its industrial processes, many operations had to be switched to manual mode.

The details of the incident are still pretty sparse, but according to the statement at the company’s press conference, it may have been hit by a ransomware attack. Researchers are currently speculating that it was most likely LockerGoga, a strain of malware that affected the French company Altran Technologies back in January. This particular strain is notable for having been signed with a valid digital certificate, although that certificate has since been revoked. Also, only a few antimalware products are currently able to detect and block it.

It appears that the IT people at Norsk Hydro are currently trying to contain the fallout from the attack, including asking their employees not to turn on their computers and even shutting down the corporate website. Multiple shifts are working manually at the production facilities to ensure that there is no danger to people’s safety and to minimize financial impact.

We will hopefully see more details about the incident later, but what can we learn from Norsk Hydro’s initial response? First and foremost, we have another confirmation that this kind of incident can happen to anybody. No company, regardless of its industry, size or security budget, can assume that its business or industrial networks are immune to such attacks, or that it already has controls in place that defend against all possible security risks.

Second, here we have another textbook example of how not to handle public relations during a security incident. We can assume that a company of this scale has at least some kind of plan for worst-case scenarios like this, but does it go beyond playbooks for security experts? Have the company’s executives ever been trained to prepare for this level of media attention? And whose idea was it anyway to limit public communications to a Facebook page?

Studies in other countries (like this report from the UK government) indicate that companies are shockingly unprepared for such occasions, with many lacking even a basic incident response plan. However, even having one on paper does not guarantee that everything will go according to it. The key to effective incident management is preparation, and this should include awareness among all the people involved, clearly defined roles and responsibilities, access to external experts if needed, and, above anything else, practice!

KuppingerCole’s top three recommendations would be the following:

  1. Be prepared! You must have an incident response plan that covers not just the IT aspects of a cyberattack, but also the organizational, legal, financial, public relations and other means of dealing with its fallout. It is essential that the company’s senior executives are involved in its design and rehearsals, since they will be front and center in any actual operation.
  2. Invest in the right technologies and products to reduce the impact of cyber incidents as well as those to prevent them from happening in the first place. Keep in mind however that no security tool vendor can do the job of assessing the severity and likelihood of your own business risks. Also, always have a backup set of tools and even “backup people” ready to ensure that essential business operations can continue even during a full shutdown.
  3. You will need help from specialists in multiple areas, ranging from cyber forensics to PR, and most companies do not have all those skills internally. Look for partnerships with external experts, and do it before an incident occurs.

If you need neutral and independent advice, we are here to assist you as well!

Ignorance is Risk

#RSAC2019 is in the history books, and thanks to the expansion of the Moscone Center, there was ample space in the expo halls to house vendor booths more comfortably. In fact, there seemed to be a record number of exhibitors this year. As always, new IAM and cybersecurity products and services make their debut at RSAC.

Despite the extra room, it can be difficult for security practitioners and executives to navigate the show floor. Some plan ahead and make maps of which booths to visit; others walk from aisle 100 to the end. It can take a good deal of time to peruse and discover what’s new. But most difficult of all is digesting what we’ve seen and heard, considering it in a business context, and prioritizing possible improvement projects.

Security practitioners tend to hit the booths of vendors they have worked with, those with competing products, and others in their areas of specialty, including startups. For example, an identity architect will likely keep on walking past the “next gen” anti-malware and firewall booths but will stop at the booth offering a new identity proofing service. If a product does something novel or perhaps better than their current vendor’s product, they’ll know it and be open to it, even if it’s a small vendor and it means managing another product or service.

Executives gravitate toward the stack vendors in the front and middle, ignoring the startups on the sides and in the back. (It’s also increasingly likely that execs will have meetings with specific vendors in the hotels surrounding Moscone and not even set foot in the halls.) Why? IT execs, and particularly CISOs, are concerned with reducing complexity as well as securing the enterprise. A few stack vendors with consolidated functionality are easier to manage than dozens of point solutions.

Who is right? Well, it depends. Sometimes both, sometimes neither. It depends on knowing your cyber risk in relation to your business and understanding which technology enhancements will decrease your cyber risk and by approximately how much. Oftentimes practitioners and executives disagree on the cyber risk analysis and priorities set as a result.

Risk is the conjunction of consequence and likelihood. At RSAC and other conferences we hear anecdotes of consequences and see products that reduce the likelihood and severity of those consequences. Executives and practitioners alike have to ask, “Are the threats addressed by product X something we realistically face?” If not, implementing it won’t reduce your cyber risk. Or, if there are two or more similar products, which one offers the greatest possible risk reduction?
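
Put into purely illustrative numbers, that comparison might look like this (all likelihoods and loss figures below are invented for the sake of the arithmetic):

```python
# Illustrative arithmetic: risk as likelihood x consequence, and risk
# reduction as the basis for comparing security products.
threats = {
    # threat: (annual likelihood, expected loss per occurrence in USD)
    "ransomware":        (0.15, 2_000_000),
    "zero-day APT":      (0.01, 5_000_000),
    "unpatched exploit": (0.30,   500_000),
}

def annual_expected_loss(likelihood: float, impact: float) -> float:
    return likelihood * impact

for name, (p, impact) in threats.items():
    print(f"{name}: expected annual loss ${annual_expected_loss(p, impact):,.0f}")

# If product X halves the ransomware likelihood, it removes $150,000/year of
# expected loss; a product that halves the APT likelihood removes only $25,000.
# For this (hypothetical) risk profile, X offers the greater risk reduction.
```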

The biggest risk is that the decision-makers don’t truly understand the threats and risks they face. There are cases where SMBs have built defenses against zero-day APTs that will never come their way yet have neglected to automate patch management or user de-provisioning. In other cases, a few big enterprises have naively dismissed the possibility that they could be the target of corporate or foreign state espionage and failed to protect against such attacks.

The riskiest time for organizations is the period when executive leadership changes, and for 12-18 months afterward, or even longer. If an organization brings in a CIO or CISO from a different industry, it takes time for that person to learn the lay of the land and the unique challenges within which the organization operates. Long-held strategies and roadmaps get re-evaluated and changed. Mid-level managers and practitioners may leave during this time. The organization’s overall cybersecurity posture is weakened during the transition, and adversaries know this too.

Risk is a difficult subject for humans to grasp. No one gets it right all the time. Risk involves processing probabilities, and our brains didn’t really evolve to do that well. For an excellent in-depth look at that subject, read Leonard Mlodinow’s book The Drunkard’s Walk.

External risk assessments and benchmarks can be good mechanisms to overcome these circumstances, such as when tech teams and management disagree on priorities, when one or more parties is unsure of the likelihood of threats and risks, or when executive leadership changes. Having an objective view from advisors experienced in your particular industry can facilitate the re-alignment of tactics and strategies to reduce cyber and overall risk. For information on the types of assessments and benchmarking KuppingerCole offers, see our advisory offerings.

Building Trust by Design

Trust has somehow become a marketing buzzword recently. There is a lot of talk about “redefining trust”, “trust technologies” or even “trustless models” (the latter usually applied to Blockchain, of course). To me, this has always sounded… weird.

After all, trust is the foundation of the very society we live in, the key notion underlying the “social contract” that allows individuals to coexist in a mutually beneficial way. For businesses, trust has always been the result of two crucial driving forces: reputation and regulation. Gaining a trustworthy reputation takes time, but ruining it can be instantaneous, so it is usually in a business’s best interest not to cheat its customers, or at least not to get caught (and that’s exactly where regulation comes into play!). Through a lengthy process of trial and error, we have more or less figured out how to maintain trust in traditional “tangible” businesses. And then the Digital Transformation happened.

Unfortunately, the dawn of the digital era has not only enabled many exciting new business models but also completely shattered the existing checks and balances. On one hand, the growing complexity of IT infrastructures and the resulting skills shortage have made sensitive digital data much more vulnerable to cyberattacks and breaches. On the other hand, unburdened by regulations and free from public scrutiny, many companies have decided that the lucrative business of hoarding and reselling personal information is worth more than any moral obligation towards their customers. In a way, the digital transformation has brought the Wild West mentality back to modern business, complete with gangs of outlaws, bounty hunters, and snake oil peddlers…

All this has led to a substantial erosion of public trust: between yet another high-profile data breach and a political scandal about harvesting personal data, people no longer know whom to trust. From banks and retailers to social media and tech companies, this “trust meltdown” isn’t just bad publicity; it leads to substantial brand damage and financial losses. The recent introduction of strict data protection regulations like GDPR, with their massive fines for privacy violations, is a sign that legislation is finally catching up, but will compliance alone fix the trust issue? What other methods and technologies can companies utilize to restore their reputations?

Well, the first and foremost measure is always transparency and open communication with customers. And this isn’t just limited to breach disclosure; on the contrary, companies must demonstrate their willingness to improve data protection and educate customers about the hidden challenges of the “digital society”. Another obvious approach is simply minimizing the personal data collected from customers and implementing proper consent management. Sure, this is already one of the primary stipulations of regulations like GDPR, but compliance isn’t even the primary benefit here: for many companies, the cost savings on data protection and the reputation improvements alone will already outweigh the potential (and constantly dwindling) profits from collecting more PII than necessary.

Finally, we come to the notion of security and privacy “by design”. This term has also become a buzzword for security vendors eager to sell you another data protection or cybersecurity solution. Again, it’s important to stress that just purchasing a security product does not automatically make a business more secure and thus more trustworthy. However, incorporating certain security- and privacy-enhancing technologies into the very fabric of your business processes may, in fact, bring noticeable improvements, and not just to your company’s public reputation.

Perhaps the most obvious example of such a technology is encryption. It’s ubiquitous, cheap to implement and gives you a warm feeling of safety, right? Yes, but making encryption truly inclusive and end-to-end, ensuring that it covers all environments from databases to cloud services and, last but not least, that the keys are managed properly, is not an easy challenge. However, to make data-centric security the foundation of your digital business, you need to go deeper still. Without identity, modern security simply cannot fulfill its potential, so you’ll need to add dynamic, centralized access control to the mix. And then security monitoring and intelligence with a pinch of AI. Thus, step by step, you’ll eventually reach the holy grail of modern IT: Zero Trust (wait, weren’t we going to boost trust, not get rid of it? Alas, that’s the misleading nature of many popular buzzwords nowadays).
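
To return to the encryption example: here is a minimal envelope-encryption sketch using the Python cryptography library, with invented helper names; in production the key-encrypting key would live in an HSM or cloud KMS, never in application memory:

```python
# Minimal envelope-encryption sketch (illustrative, not production-grade key
# management): each record gets its own data key, and the data key itself is
# wrapped by a master key-encrypting key (KEK).
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())  # stand-in for a key held in an HSM/KMS

def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()     # fresh key per record
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = kek.encrypt(data_key)  # stored alongside the ciphertext
    return ciphertext, wrapped_key

def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = kek.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_record(b"customer PII goes here")
assert decrypt_record(ct, wk) == b"customer PII goes here"
```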

For software development companies, investing in security by design can look complicated at first, too. From source code testing to various application hardening techniques to API security, writing secure applications is hard, and modern technologies like containers and microservices make it even harder, don’t they? This couldn’t be further from the truth, however: modern development methodologies like DevOps and DevSecOps actually focus on reducing the strain on programmers through intelligent automation, unified architectures across hybrid environments, and a better experience for users, who are learning to appreciate programs that do not break under high load or cyberattacks.

But it does not even have to be that complicated. Consider Consumer Identity and Access Management platforms, for example. Replacing a homegrown user management system with such a platform not only dramatically improves the experience for your current and potential customers – with built-in privacy and consent management features, it also gives users better control over their online identities, boosting their trust considerably. And in the end, you get to know your customers better while reducing your own investments into IT infrastructure and operations. It can’t really get better than this.

You see, trust, privacy, and security don’t have to be a liability and a financial burden. With an open mind and a solid strategy, even the harshest compliance regulations can be turned into new business enablers, cost-saving opportunities and powerful messages to the public. And we are always here to support you on this journey.
