KuppingerCole Blog

For Oracle, the Future Is Autonomous

Recently, I attended Oracle OpenWorld in San Francisco. For five days, the company spared no expense to inform, educate and (last but not least) entertain its customers and partners as well as developers, journalists, industry analysts and other visitors – in total, a crowd of over 50 thousand. As a person somewhat involved in organizing IT conferences (on a much smaller scale, of course), I could not help but stand in awe, thinking about all the challenges the organizers of such an event had to overcome to make it successful and safe.

More important, however, was the almost unexpected thematic twist that dominated the whole conference. As I was preparing for the event, browsing the agenda and the list of exhibitors, I found far too many topics and products quite outside my area of coverage. Although I do have some database administrator (DBA) experience, my current interests lie squarely within the realm of cybersecurity, and I wasn’t expecting to hear a lot about it. Well, I could not have been more wrong! In the end, cybersecurity was definitely one of the most prominent topics, starting right with Larry Ellison’s opening keynote.

The first and biggest announcement was the Autonomous Database – according to Oracle, the world’s first database that comes with fully automated management. Built upon the latest Oracle Database 18c, this solution promises to completely eliminate human labor, and hence human error, thanks to complete automation powered by machine learning. This includes automated upgrades and patches, disaster recovery, performance tuning and more. In fact, an autonomous database does not expose any controls to a human administrator – it just works™. Of course, it does not replace all the functions of a DBA: a database specialist can now focus on the more interesting, business-related aspects of their job and leave the plumbing maintenance to a machine.

The offer comes with a unique SLA that guarantees 99.995% availability without any exceptions. And, thanks to more elastic scalability and optimized performance, “it’s cheaper than AWS”, as we were told at least a dozen times during the keynote. For me, however, the security implications of this offer are extremely important. Since the database is no longer directly accessible to administrators, this not only dramatically improves its stability and resilience against human error, but also substantially reduces the potential cyberattack surface and simplifies compliance with data protection regulations. This does not fully eliminate the need for database security solutions, but it at least simplifies the task quite a bit without any additional costs.
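To put that SLA figure in perspective, a quick back-of-the-envelope calculation shows what 99.995% availability actually permits (this arithmetic is mine, not Oracle’s):

```python
# Downtime budget implied by an availability SLA.
def downtime_minutes_per_year(availability_pct, minutes_per_year=365.25 * 24 * 60):
    """Return the maximum minutes of downtime per year the SLA allows."""
    return (1 - availability_pct / 100) * minutes_per_year

# 99.995% availability leaves roughly 26 minutes of downtime per year
print(round(downtime_minutes_per_year(99.995), 1))
```

For comparison, the more common “four nines” (99.99%) budget is about 53 minutes per year, so the promised figure roughly halves that.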

Needless to say, this announcement has caused quite a stir among database professionals: does it mean that a DBA is now completely replaced by an AI? Should thousands of IT specialists around the world fear for their jobs? Well, the reality is a bit more complicated: the Autonomous Database is not really a product, but a managed service combining the newest improvements in the latest Oracle Database release with the decade-long evolution of various automation technologies, running on the next generation of Oracle Exadata hardware platform supported by the expertise of Oracle’s leading engineers. In short, you can only get all the benefits of this new solution when you become an Oracle Cloud customer.

This is, of course, a logical continuation of Oracle’s ongoing struggle to position itself as a Cloud company. Although the company already has an impressive portfolio of cloud-based enterprise applications and continues to invest a lot in expanding its SaaS footprint, when it comes to PaaS and IaaS, Oracle still cannot really compete with its rivals that started in this business years earlier. So, instead of trying to beat competitors on their traditional playing fields, Oracle is now focusing on offering unique and innovative solutions that other cloud service providers simply do not have (and in the database market probably never will).

Another security-related announcement was the unveiling of Oracle Security Monitoring and Analytics – a cloud-based solution that enables detection, investigation and remediation of various security threats across on-premises and cloud assets. Built upon the Oracle Management Cloud platform, this new service also aims to address the skills gap problem in cybersecurity by reducing the administration burden and improving the efficiency of cybersecurity analysts.

Among other notable announcements are various services based on applied AI technologies like intelligent conversation bots and the newly launched enterprise-focused Blockchain Cloud Service based on the popular Hyperledger Fabric project. These offerings, combined with the latest rapid application development tools unveiled during the event as well, will certainly make the Oracle Cloud Platform more attractive not just for existing Oracle customers, but for newcomers of all sizes – from small startups with innovative ideas to large enterprises struggling to make their transition to the cloud as smooth as possible.

The Power of Utility in the Future of Marketing

Guest post by Christian Goy, Co-founder and Managing Director of Behavioral Science Lab

In the future, marketing will be driven not by demographics, online or offline behavioral identifiers, or psychographics, but by understanding and fulfilling the individual utility expectations of the consumer.

Mitch Joel captures this view of future marketing by concluding, “If the past decade was about developing content and engagement strategies in social channels (in order to provide value, humanize the brand, be present in search engines and more), the next decade will be about the brands that can actually create a level of utility for the consumer.” 

No one disputes that a persuasive marketing message or social media campaign drives web traffic. However, if your brand does not deliver utility, it will not be purchased. Consumers do not love brands because of their brilliant ad campaigns or funny videos on Facebook. Consumers love brands that create utility or true value for themselves; this is what creates affinity between the consumer and the brand, not just the brand attributes. Utility is what consumers believe they cannot live without.

Utility is at the heart of behavioral economics. The utility of each product or service is determined by a very specific set of psychological and economic elements, which shape how the consumer arrives at the expected value (utility) of each brand. Relative differences in the expected utility associated with each choice option determine how much consumers will pay, what they purchase and how loyal they expect to be. Interestingly, we are learning that the economic and psychological factors that determine utility and purchase have little or nothing to do with the buyer’s demographics or psychographics.
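As a toy illustration of the expected-utility idea (the decision elements, scores and weights below are entirely invented, not taken from the Behavioral Science Lab’s actual models):

```python
# Toy expected-utility comparison: each brand is scored on the decision
# elements a buyer uses, weighted by how much that buyer cares about each.
# Element names, scores and weights are made up for illustration only.
weights = {"convenience": 0.5, "price": 0.3, "assortment": 0.2}

brands = {
    "StoreA":   {"convenience": 6, "price": 7, "assortment": 9},
    "Delivery": {"convenience": 9, "price": 5, "assortment": 7},
}

def expected_utility(scores, weights):
    # weighted sum over the buyer's decision elements
    return sum(weights[k] * scores[k] for k in weights)

for name, scores in brands.items():
    print(name, expected_utility(scores, weights))
```

A buyer who weights convenience this heavily prefers the delivery option even though it scores lower on price and assortment, which is the Instacart pattern described below.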

In none of our studies did demographic or psychographic segmentation explain why consumers switch or remain loyal to a brand. However, when consumers were typed by their utility expectation for individual brands, our clients were able to predict with extreme accuracy whether the consumer would stay loyal to or switch away from their brand, and, more importantly, why.

Knowing the expectation of utility explains why Instacart — an app that lets shoppers buy all their groceries online from any grocery store and have them delivered to their doorstep — became an instant hit for a small, but important, percentage of US shoppers.

Old-line marketers assumed that a certain percentage of US shoppers with relatively high household income and education, who were environmentally savvy and attracted to organic produce, would remain loyal Whole Foods or Trader Joe’s shoppers. What they didn’t understand was that the utility for those shoppers was not driven by their demographics or psychographics, but by what they were looking for – convenience, ease of shopping and minimal shopping time – needs that neither Whole Foods nor Trader Joe’s could fulfill.

What is next for marketing?

To remain effective, marketing must move beyond traditional segmentation, psychographics, and message development strategies. Marketers should first understand what drives true utility for their consumers – what consumers value, what they can or cannot live without. As some are already doing, marketers will create personalized messages that maximize individual utility expectations by:

Deeply understanding what drives the expectation of the utility of their products — These are the psychological and economic decision elements used by the buyer to define utility.

Defining buyers by their utility expectation — Group buyers on the basis of a similar utility expectation. This allows marketers to be more cost-efficient and effective in their messaging and product offerings because the specific needs of their customers will be met.

Creating products and services that address consumers’ psychological and economic needs — Do not just focus on the product. Understand how the consumer defines utility, and then deliver on it. In our studies, we have found that by only addressing and fulfilling the primary driver in consumers’ utility “equation,” the likelihood of purchase is very high. Just imagine how much greater the likelihood of purchase could be if the second and third drivers were addressed as well.

Product and service utility are the future of effective marketing. Start today by understanding how your consumers arrive at their utility expectation to stay ahead of the game.

Learn more in my session at the Consumer Identity World from November 27-29, 2017 in Paris.

Cryptography’s Darkest Hour

For anyone working in IT security, this week surely did not start well. Not one, but two major cryptography-related vulnerabilities have been disclosed, and each of them is at least as massive in scale and potential consequences as the notorious Heartbleed incident from 2014.

First, the Belgian researcher Mathy Vanhoef from the University of Leuven published the details of several critical weaknesses discovered in WPA2 – the de-facto standard protocol used for securing modern Wi-Fi networks. By exploiting these weaknesses, an attacker can launch so-called key reinstallation attacks (hence the name KRACK; we’ve discussed the importance of catchy names for vulnerabilities before) and eventually decrypt any sensitive data transmitted over a supposedly secured wireless channel.

As opposed to Heartbleed, however, the vulnerability is not found in a particular library or product – it’s caused by an ambiguity in the definition of the WPA2 protocol itself, so any operating system or library that implements it correctly is still vulnerable. Thus, all desktop and mobile operating systems were affected by this attack, as well as numerous embedded and IoT devices with built-in Wi-Fi capabilities. Somewhat luckily, this protocol weakness can be fixed in a backwards-compatible manner, so we do not have to urgently switch to WPA3 (and by no means should you switch to WEP or any other even less secure connection method in your wireless network). However, there is no way to mitigate the problem other than patching each client device. Changing the Wi-Fi password, for example, won’t help.
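The core of the attack is that reinstalling a key forces the victim to reuse a nonce, so the same keystream encrypts two different messages. The toy sketch below illustrates why that is fatal to any stream cipher; it is not the actual WPA2/CCMP cipher, just the underlying principle:

```python
# Toy demonstration of why nonce (keystream) reuse breaks stream ciphers.
# The keystream construction here is invented for illustration only.
import hashlib

def keystream(key, nonce, length):
    # deterministic toy keystream derived from key + nonce
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"secret-key", b"nonce-1"
p1 = b"first confidential message"
p2 = b"second private payload!!!!"

# a key reinstallation forces the victim to reuse the same nonce...
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))

# ...so the attacker learns p1 XOR p2 without ever knowing the key
assert xor(c1, c2) == xor(p1, p2)
```

From p1 XOR p2, well-known frequency and crib-dragging techniques recover both plaintexts, which is why decryption becomes feasible without breaking the cipher itself.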

Of course, quite a few vendors have already released updates (including Microsoft), but how long will it take for everyone to apply them? And what about the huge number of legacy products which will never be patched? The only way to secure them properly is to disable Wi-Fi and basically repurpose them as expensive paperweights. For desktop and mobile users, using HTTPS-only websites or encrypted VPN tunnels for accessing sensitive resources is recommended, just like on any other untrusted network, wireless or not. In general, one should slowly get used to the notion of treating every network as untrusted, even one’s own home Wi-Fi.

The second vulnerability revealed this week is of a different nature, but is already classified as even more devastating by many experts. The ROCA (Return of Coppersmith’s Attack) vulnerability is an implementation flaw discovered by an international team of British, Czech and Italian researchers in a cryptographic library used in security chips produced by Infineon Technologies. This flaw essentially means that RSA encryption keys generated using these chips are not cryptographically strong and are much easier to crack.

In theory, this problem should not be as widespread as the KRACK vulnerability, but in reality it affects numerous security products from vendors such as Microsoft, Google, HP and Lenovo, and existing RSA keys dating as far back as 2012 can be vulnerable. Also, since public key cryptography is so widely used in IT – from network encryption to application code signing to digital signatures in eGovernment projects – this opens a broad range of potential exploits: spreading malware, performing identity theft or bypassing Trusted Platform Modules to run malicious code in secure environments.

What can we do to minimize the damage of this vulnerability? Again, it’s first and foremost about checking for available security updates and applying them in a timely manner. Secondly, all potentially affected keys must be replaced (nobody should be using 1024-bit RSA keys in 2017 anyway).
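Finding the affected keys is practical because the flawed generator leaves a fingerprint: its moduli always fall into the small multiplicative subgroups generated by 65537 modulo small primes. A minimal sketch of that idea follows; real detection tools use many more primes, and a match is only a strong hint that a key came from the flawed generator, while a mismatch definitively clears it:

```python
# Simplified sketch of the ROCA fingerprint idea. Moduli from the flawed
# Infineon generator always satisfy: N mod p lies in the subgroup of
# residues generated by 65537, for each small prime p in a fixed list.
def residue_set(p):
    # collect {65537^1, 65537^2, ...} mod p until the cycle closes
    seen, x = set(), 1
    while True:
        x = x * 65537 % p
        if x in seen:
            return seen
        seen.add(x)

def could_be_vulnerable(n, small_primes=(11, 13, 17, 19, 37)):
    # False means the modulus definitely did NOT come from the flawed generator
    return all(n % p in residue_set(p) for p in small_primes)
```

For example, 65537 ≡ 10 (mod 11), so the subgroup mod 11 is just {10, 1} – a random modulus has only a 2-in-11 chance of landing there, and multiplying these odds across many primes makes accidental matches vanishingly rare.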

And, of course, we always have to be ready for new announcements. The week has only just begun, after all!

The Need for Speed: Why the 72-hour breach notification rule in GDPR is good for industry

The EU’s General Data Protection Regulation (GDPR) will force many changes in technology and processes when it comes into effect in May 2018.  We have heard extensively about how companies and other organizations will have to provide capabilities to:

  • Collect explicit consent for the use of PII per purpose
  • Allow users to revoke previously given consent
  • Allow users to export their data
  • Comply with users’ requests to delete the data you are storing about them
  • Provide an audit trail of consent actions

Software vendors are preparing, particularly those providing solutions for IAM, CIAM, ERP, CRM, PoS, etc., by building in these features if not currently available. These are necessary precursors for GDPR compliance.  However, end user organizations have other steps to take, and they should begin now.

GDPR mandates that, within 72 hours of discovering a data breach, the responsible custodian – in many cases the organization’s Data Protection Officer (DPO) – must notify the Supervisory Authority (SA). If EU persons’ data is found to have been exfiltrated, those users should also be notified. Organizations must begin preparing now for how to execute notifications: define responsible personnel, draft the notifications, and plan for remediation.
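The deadline arithmetic itself is trivial, which is exactly the point: once a breach is discovered, the clock runs regardless of organizational readiness. A hypothetical helper (names and dates invented for illustration):

```python
# Hypothetical helper: compute the 72-hour notification deadline
# from the moment a breach is discovered.
from datetime import datetime, timedelta

def notification_deadline(discovered_at, window_hours=72):
    """Return the latest time the Supervisory Authority must be notified."""
    return discovered_at + timedelta(hours=window_hours)

discovered = datetime(2018, 5, 28, 9, 30)
print(notification_deadline(discovered))  # 2018-05-31 09:30:00
```

Three calendar days covering drafting, legal review, and executive sign-off is why the responsibilities and templates need to exist before the incident, not after.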

Consider some recent estimated notification intervals for major data breaches in the US:

  • Equifax: 6 weeks to up to 4-5 months
  • Deloitte:  perhaps 6 months
  • SEC: up to 1 year
  • Yahoo: the latest revelations after the Verizon acquisition indicate up to 4 years for complete disclosure

The reasons data custodians need to be quick about breach notifications are very clear and very simple:

  • The sooner victims are notified, the sooner they can begin to remediate risks.  For example, Deloitte’s customers could have begun to assess which of their intellectual property assets were at risk and how to respond earlier.
  • Other affected entities can begin to react.  In the SEC case, the malefactors had plenty of time to misuse the information and manipulate stock prices and markets. 
  • Cleanup costs will be lower for the data custodian.  Selling stocks after breaches are discovered but prior to notification may be illegal in many jurisdictions.
  • It will be better for the data custodian’s reputation in the long run if they quickly disclose and fix the problems.  The erosion of Yahoo’s share price prior to purchase is clear evidence here.

Understandably, executives can be reluctant in these matters.  But delays give the impression of apathy, incompetence, or even malicious intent on the part of executives attempting to hide or cover up such events. Though GDPR is an EU regulation, it directly applies to companies and organizations elsewhere that host data on EU member nations’ citizens.  Even for those organizations not subject to GDPR, fast notification of data breaches is highly recommended.

CIAM Vendor Gigya to be Acquired by SAP Hybris

This past weekend we learned that Gigya will be acquired by SAP Hybris.  California-based Gigya has been a top vendor in our CIAM Platforms Leadership Compass reports. Gigya offers a pure SaaS CIAM solution, and has one of the largest customer bases in the market.  SAP’s Identity solution was previously positioned more as an IDaaS for SAP customers for SAP use cases.

What is most interesting is the pairing of Gigya with SAP Hybris.  Hybris is SAP’s marketing, analytics, and automation suite.  It already has a considerable customer base and a big feature set. Gigya is also very strong in this area, with specialties in leveraging consumer data for personalization and more accurate targeted marketing.

This Gigya – SAP transaction is the latest in an active year of CIAM startups, funding rounds, buyouts, and even an IPO.  CIAM is the fastest growing segment of the identity market, and all the aforementioned activity is evidence that this trend is being recognized and rewarded by investors.

CIAM is an essential component in contemporary business architectures.  Consider the following:

  • EU GDPR – CIAM solutions collect and retain user consent for compliance
  • EU PSD2 – CIAM solutions are significant competitive advantages for banks and financial services providers that implement them; CIAM can also offer strong customer authentication options
  • Retail – consumers will shop elsewhere if the user experience is cumbersome
  • Digital transformation – IoT and SmartHome gadgets are best managed via consumer identities with strong consumer security and privacy protections 

Traditional IAM is also capturing a larger portion of most organizations’ budgets, as the C-Suite begins to understand the importance of cybersecurity and the pivotal function of digital identity within cybersecurity strategies. The lines between IDaaS and CIAM have begun to blur.  Traditional IAM vendors have been modifying and rebranding their solutions to meet the changing needs of consumer identity.  Some CIAM vendors are offering their services as IDaaS.  If SAP chooses to leverage the full CIAM feature set that Gigya has, rather than just integrating the marketing analytics and automation capabilities with Hybris, it will broaden SAP’s reach in the CIAM and IDaaS space.

KuppingerCole will continue to monitor and research the dynamic CIAM market.  If CIAM is a trending topic for you, join us at our Consumer Identity World events, in Paris (November 28-29) and Singapore (December 13-14).

Microsoft Azure Confidential Computing – a Step Forward in Cloud Security

A few days ago, Microsoft announced Azure Confidential Computing. As the name implies, the technology is about adding a new layer of protection to cloud services, specifically Microsoft Azure, but also Windows 10 and Windows Server 2016 running in other public cloud infrastructures on specific hardware.

The foundation for Azure Confidential Computing is so-called TEEs (Trusted Execution Environments). Such environments protect the code running in them, and the data used by that code, from access by other parties. Neither administrators, nor people with direct access to the hardware, nor attackers who gain access to administrator accounts can bypass that protection layer – at least, this is what the TEE concept promises.

Based on TEEs, data can be held encrypted in cloud services and their data stores, and is only decrypted and processed within the TEE. That means that data is not encrypted at every moment, but, if the application is implemented correctly, it remains encrypted in all the accessible areas of the public cloud.
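The application pattern can be sketched in a few lines. Everything below is a conceptual stand-in: the XOR “cipher” is a placeholder for real authenticated encryption, and the enclave is just a Python function, not the actual SGX programming model:

```python
# Conceptual sketch of the confidential-computing pattern: plaintext
# exists only inside process_in_enclave(); everywhere the cloud provider
# can look, the data stays encrypted. Toy crypto for illustration only.
import hashlib

SEALING_KEY = hashlib.sha256(b"demo sealing key").digest()

def toy_encrypt(data: bytes) -> bytes:
    # XOR with a fixed key stream stands in for real encryption
    return bytes(b ^ k for b, k in zip(data, SEALING_KEY))

toy_decrypt = toy_encrypt  # XOR is its own inverse

def process_in_enclave(ciphertext: bytes) -> bytes:
    # inside the trusted boundary: decrypt, compute, re-encrypt
    plaintext = toy_decrypt(ciphertext)
    result = plaintext.upper()  # the sensitive computation
    return toy_encrypt(result)

# outside the enclave, only ciphertext is ever visible
stored = toy_encrypt(b"sensitive record")
processed = process_in_enclave(stored)
assert toy_decrypt(processed) == b"SENSITIVE RECORD"
```

The design point is the narrow boundary: the smaller the code that runs inside `process_in_enclave`, the smaller the trusted computing base an attacker must compromise.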

For now, there are two supported TEEs. One is Virtual Secure Mode, a software-based TEE that is based on Microsoft Hyper-V in Windows 10 and Windows Server 2016. The other is Intel SGX (Software Guard Extensions), which is a hardware-based TEE. Based on Intel SGX, secure TEEs can be used outside of the Microsoft Azure Cloud.

Microsoft has been using such technologies as part of its Coco Framework for enterprise blockchain networks for some weeks already, and is now bringing support to Microsoft SQL Server and Azure SQL Database. This is achieved by delegating computations on sensitive data to an “enclave”, which is based on a TEE. Azure Confidential Computing, however, extends this capability to various types of data.

Microsoft Azure Confidential Computing, which is available through an early adopters program, is a great improvement for security and confidentiality in public cloud environments and will enable customers to port workloads to the cloud which, so far, have been considered too sensitive. The announcement stands in line with the recent IBM announcement of their IBM Z14 systems, where effectively the entire system acts as a TEE. While the use of TEEs in Azure Confidential Computing is restricted to the parts of an application that are specifically moved to the TEE, both announcements are about significantly increasing the level of security in computing. That is good news.

Recapping CIW Seattle 2017

Last week we completed the opening dates of the Consumer Identity World Tour in Seattle.  To kick off the event, the Kantara Initiative held a one-day workshop to showcase the work that they do.  Kantara is an international standards organization which develops technical specifications promoting User Managed Access, Consent Receipt, Identities of Things, and Identity Relationship Management.  Kantara is also a Trust Framework Provider, approved under the US Federal Government’s Identity, Credential, and Access Management (ICAM) program, which accredits Assessors and approves CSPs at Levels 1, 2 and non-PKI Level 3.  Kantara will be joining us on our subsequent CIW tour dates.

The CIW conference is all about consumer and customer identities.  It is the only conference to focus specifically on the use cases, requirements, and technical solutions in the Consumer Identity and Access Management (CIAM) space.  Keynotes were delivered by distinguished guests and our sponsors, including:

  • Christian Goy – Behavioral Science Lab
  • Ryan Fox – Capital One
  • Tim Maiorino – Osborne Clarke
  • Jason Keenaghan – IBM
  • Katryna Dow – Meeco
  • Colin Wallis – Kantara
  • Phil Lam – Lam Advisory
  • Jason Rose – Gigya
  • Steve Tout – VeriClouds
  • Heather Flanagan – DIACC
  • Allan Foster – ForgeRock

With a looming May 2018 implementation date, the EU General Data Protection Regulation (GDPR) was a subject that bubbled up in many sessions.  Though it will be a regulation in the EU, it applies to any company or organization that does business with or processes data of EU citizens, including enterprises here in the US.  Thus, privacy and consent for the use of PII were important conversations:  Denise Tayloe of Privo and Lisa Hayes from the Center for Democracy and Technology discussed family management and building in privacy in their sessions.   John Anderson of Facebook, the world’s largest social identity provider, presented on consumer authentication trends.

Many organizations deploy CIAM solutions to gain insights into consumer behavior to create more effective marketing campaigns and increase revenue.  Lars Helgeson of Greenrope went into detail on the intersection of CRM, CIAM, and Marketing Automation.  The MarTech panel continued the conversation on marketing in CIAM.

On both days, we had many expert panels covering a range of topics within CIAM.  One of the things that distinguishes KuppingerCole conferences from others is our use of panel discussions.  This format allows conference attendees to hear multiple viewpoints on a subject within a short time, and engages the audience more directly. Delegates and panelists interacted on topics such as:

  • Next Generation Authentication for Health Care
  • Moving Beyond Passwords
  • CIAM Strategies
  • The Business Value of Consumer Identity
  • Informed Consent
  • Security and Privacy
  • Mobile Biometrics
  • User Experience
  • CIAM Case Studies
  • Economics of GDPR
  • The Risks of Compromised Credentials
  • Marketing Technology

 

Christian Goy, Co-founder & Managing Director of Behavioral Science Lab said about CIW Seattle, "Consumer Identity World Tour is one of those rare events that truly inspires forward thinking in the consumer identity management, privacy and security space. Stirred by thought leaders in the industry, this must-attend event provides invaluable discussions and learning experiences for everyone present." 

Katryna Dow, CEO & Founder of Meeco added, “The European regulatory changes are driving major shifts in CIAM, privacy, and consent. These changes represent challenges for global enterprises, including those headquartered in the USA. Enterprises are faced with the choice to either focus on compliance or harness these changes to drive digital transformation and trust based personalization. KuppingerCole is uniquely positioned to bring deep knowledge of the European market to provide strategic insight in North America. Great to see the Consumer Identity World tour kicked off here in Seattle, I hope it will be the first of many USA events”.

Finally, we’d like to thank our platinum sponsors ForgeRock, Gigya, IBM, and VeriClouds; association sponsors Global Platform, Kantara, and Open Identity Exchange; and component sponsors PlainID, Ping Identity, Saviynt, Axiomatics, CloudEntity, Deloitte, and Login Radius.

The next stops on the CIW tour are Paris, on November 28-29, and Singapore on December 13-14.  We hope to see you there.

Keep Calm and Carry on Implementing

The trouble with hypes is that they have an expiration date. From that date on, they either need to be made real for some very good purpose within a reasonable timeframe, or they go bad. There have been quite a few hype topics around recently, but few single topics have been covered by the media as frequently, from as many different angles and with as many different focal areas as the blockchain (or distributed ledgers in general). And most probably none of those articles failed to include the adjective "disruptive".

There have been books, conferences, articles, reference implementations, hackathons, webinars and lots more indicators proving that the blockchain is somehow the prototype of a hype topic.

But apart from the bitcoin currency as one regularly cited (and initial) usage of this technology, there have not been too many clearly visible use cases of the blockchain for the everyday user. Actually, it could be doubted that even bitcoin really is something for the everyday user. Other than every now and then needing one of those for paying ransom to get your encrypted files back...

Many great ideas have been developed and implemented in PoC (Proof of Concept) scenarios, but the truth is that the technology is still not very visible in general (which is quite normal for infrastructure concepts) and that there are no commonly known, outstanding blockchain use cases in the wild. The main challenge is identifying an adequate use case that can be implemented with blockchain technology and that immediately offers benefits both for the end user and for the provider of the service.

This might change with the announcements that a major insurance company, namely Axa, has made recently. The geeky name "fizzy" stands for an Ethereum-based implementation of a modern insurance concept that automatically covers travelers whose booked (and insured) flight is delayed by more than two hours. Blockchain technology provides adequate security; automation, smart contracts and parameterization make it adaptable and available.

By doing this, “fizzy” provides smart, real-life benefit while leveraging the advantages of the blockchain technology. This is exactly what we should expect manifestations of this technology to look like. Instead of aiming at disrupting complete business models, organizations across industries should look into implementing smart and adequate solutions that provide real benefit to the end-user. Until this has proven successful at a convincing scale we might want to postpone the task of "reinventing and disrupting complete industries" into the next project phase.

Changes in the Scope of Investors for IAM

As a long-term observer of the IAM market, KuppingerCole finds it interesting to see the change in both the size of investments and the type of investors in this market. Just recently, ForgeRock announced an $88 million series D funding round. This follows other major investments in IAM vendors such as Okta, Ping Identity, and SailPoint, to name a few.

What is interesting about the recent funding for ForgeRock is that KKR, one of the very big names among investors, appears on the list. I found that particularly telling because it means that IAM is now on the radar of a different type of investor, beyond the more specialized IT and information security investors we have primarily seen until now.

Obviously, such a major investment helps ForgeRock continue its growth, further expand its product offerings, and strengthen its market position. We will closely follow the plans and releases of ForgeRock for their Identity Platform and keep you up-to-date.

The Cargo Cult of Cybersecurity

I’ve been working in IT my whole life, and since I joined KuppingerCole ten years ago, cybersecurity has been my job. Needless to say, I like my job: even though we industry analysts are not directly involved in forensic investigations or cyberthreat mitigation, being up-to-date with the latest technological developments and sharing our expertise with both end users and security vendors is our daily life, which is always challenging and exciting at the same time.

However, occasionally I have doubts about my career choice. Does anything I do even matter? The cybersecurity market is booming, predicted to reach nearly 250 billion USD within the next 5 years. However, do we notice any downward trend in the number of security breaches or in financial losses due to cyberattacks? Not really…

The last time I had these thoughts was back in May, after the notorious WannaCry incident: just as hundreds of top experts were discussing the most highbrow cybersecurity problems at our European Identity and Cloud Conference, a primitive piece of malware exploiting a long-fixed problem in the Windows operating system disrupted hundreds of thousands of computers around the world, affecting organizations from public hospitals to international telecom providers. How could this even have happened? All right, those poor underfunded and understaffed British hospitals at least have a (still questionable) excuse for not being able to maintain the most basic cybersecurity hygiene principles within their IT departments. But what excuse do large enterprises have for letting their users open phishing emails and for not having proper backups of their servers?

“But users do not care about their security or privacy,” people say. This couldn’t be further from the truth! People care very much about not being killed, so they arm themselves with guns. People care about their finances, so they do not keep their money under mattresses. And people surely care about their privacy, so they buy curtains and lock their doors. However, many people still do not realize that having an antivirus on their mobile phone is just as important for their financial stability, and sometimes even physical safety, as having a gun on their night table. And even those who are already aware of that are often sold security products like magical amulets that are supposed to solve their problems without any effort. But should users really be blamed for that?

With enterprises, the situation is often even worse. Apparently, a substantial percentage of the security products purchased by companies never even gets deployed. And more often than not, even those that do get deployed are actively sabotaged by users who see them as a nuisance hindering their productivity. Add the “shadow IT” problem into the mix, and you’ll realize that many companies spending millions on cybersecurity are not getting any substantial return on their investment. This is a classic example of a cargo cult. Sometimes, after reading about another large-scale security breach, I cannot completely suppress the mental image of a firewall made out of a cardboard box, or a wooden backup appliance not connected to anything.

However, the reason for today’s rant is somewhat different and, in my opinion, even more troubling. While reading the documentation for a security-related product from one reputable vendor, I realized that it uses an external MySQL database to store its configuration. That got me thinking: a security product is sold with the promise of adding a layer of protection around an existing business application with known vulnerabilities. Yet this security product itself relies on another application with known vulnerabilities (MySQL isn’t exactly renowned for its security) to fulfill its basic functions. Is the resulting architecture even a tiny bit more secure? Not at all: thanks to the added complexity, it is in fact even more open to malicious attacks.

Unfortunately, this approach to secure software design is very common. The notorious Heartbleed vulnerability in the OpenSSL cryptographic library affected millions of systems around the world back in 2014, and three years later at least 200,000 of them still have not been patched. Of course, software vendors have their reasons for not investing in the security of their products: after all, just like any other business, they are struggling to bring their products to market as quickly as possible, and often they have neither the budget nor enough qualified specialists to design properly secured ones.

Nowadays, this problem is especially evident in consumer IoT products, which definitely deserves a separate blog post. However, security vendors failing to make their own products sufficiently secure pose an even greater danger: as I mentioned earlier, for many individuals and organizations, a cybersecurity product is the modern equivalent of a safe. Or an armored car. Or an insulin pump. How can we trust a security product that is in fact about as reliable as a safe with plywood walls?

Well, if you’ve read my past blog posts, you probably know that I’m a strong proponent of government regulation of cybersecurity. I know this idea isn’t exactly popular among software vendors, but is there really a viable alternative? After all, gunsmiths and medical equipment manufacturers have been under strict government control for ages, and even security guards and private investigators must obtain licenses first. Why not security vendors? For modern digital businesses, the reliability of cybersecurity products is at least as important as the pick resistance of their door locks.

Unfortunately, this kind of government regulation probably isn’t going to happen anytime soon, so companies looking for security solutions are still stuck with the “caveat emptor” principle. Without enough in-house experience to judge whether a particular product is really capable of delivering its declared functionality, one should, of course, turn to an independent third party for qualified advice. For example, to an analyst house like us :)

The next most useful thing to look for is probably certification against government or industry standards. For example, when choosing an encryption solution, it’s wise to look for FIPS 140-2 certification at Security Level 2 or higher. There are corresponding security certifications for cloud service providers, financial institutions, industrial networks, and so on.

In any case, do not take any vendor’s claims for granted. Ask for details about the architecture of their products, the security standards they implement, and whether they rely on open source libraries or third-party components. The more pressure you put on vendors about secure design, the higher the chances that in the future they will see security by design as a unique selling proposition rather than a waste of resources. And as always, when you don’t know where to start, just ask an expert!
