Blog posts by Alexei Balaganski

Security Vendor Imperva Reports a Breach

Imperva, a US-based cybersecurity company known for its web application security and data protection products, has disclosed a breach of its customer data. According to the announcement, a subset of customers of its cloud-based Web Application Firewall solution (formerly known as Incapsula) had their data exposed, including email addresses, password hashes, API keys, and SSL certificates.

Adding insult to injury, this breach seems to be of the worst kind: it happened long ago, probably in September 2017, and went unnoticed until a third party notified Imperva a week ago. Even though the investigation is still ongoing and few details have been revealed yet, the company did the right thing by providing prompt, full disclosure along with recommended security measures.

Still, what can we learn or at least guess from this story? First and foremost, even the leading cybersecurity vendors are not immune (or should I say, “impervious”?) to hacking and data breaches, not only exposing their own corporate infrastructures and sensitive data but also creating unexpected attack vectors for their customers. This is especially critical for SaaS-based security solutions, where a single data leak may give a hacker convenient means to attack multiple other companies using the service.

More importantly, however, this highlights the critical importance of having monitoring and governance tools in place in addition to traditional protection-focused security technologies. After all, having an API key for a cloud-based WAF gives a hacker ample opportunity to silently modify its policies, weakening or completely disabling protection of the application behind it. Customers who have no means of detecting these changes and reacting quickly will inevitably end up being the next targets.
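
To make this point a bit more concrete, here is a minimal sketch of the kind of change-detection control I have in mind: a script that periodically pulls the current WAF policy through its management API and raises an alert whenever the configuration drifts from a known-good baseline. Note that the endpoint, the response format and the alerting are all hypothetical placeholders, not Imperva’s actual API:

```typescript
import { createHash } from "node:crypto";

// Hypothetical WAF management API endpoint -- not Imperva's actual API.
const POLICY_URL = "https://waf.example.com/api/v1/sites/42/policy";
const API_KEY = process.env.WAF_API_KEY ?? "";

let knownGoodHash: string | null = null; // baseline captured on first run

async function checkPolicyDrift(): Promise<void> {
  const res = await fetch(POLICY_URL, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`WAF API returned ${res.status}`);

  // Hash the full policy document so that any silent change is detected,
  // regardless of which rule or setting was modified.
  const policy = await res.text();
  const hash = createHash("sha256").update(policy).digest("hex");

  if (knownGoodHash === null) {
    knownGoodHash = hash; // first run: record the baseline
  } else if (hash !== knownGoodHash) {
    // In a real deployment this would page the on-call team, not just log.
    console.error(`ALERT: WAF policy changed (hash ${hash})`);
  }
}

setInterval(() => checkPolicyDrift().catch(console.error), 60_000);
```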

Having access to the customer’s SSL certificates opens even broader opportunities for hackers: application traffic can be exposed to various Man-in-the-Middle attacks or even silently diverted to a malicious third party for all kinds of misuse: from data exfiltration to targeted phishing attacks. Again, without specialized monitoring and detection tools in place, such attacks may go unnoticed for months (depending on how long your certificate rotation cycles are). Quite frankly, having your password hashes leaked feels almost harmless in comparison.

So, does this mean that Imperva’s Cloud WAF should no longer be trusted at all? Of course not, but the company will surely have to work hard to restore its product’s reputation after this breach.

Does it mean that SaaS-based security products, in general, should be avoided? Again, not necessarily, but the additional risks of relying on security solutions outside of your direct control must be taken into account. Alas, finding the right balance between the complexity and costs of an on-premises solution and the scalability and convenience of “security from the cloud” has just become even more complicated than it was last week.

The bottom line is that although 100% security is impossible to achieve, even with a multi-layered security architecture, the only difference between a good and a bad strategy here is in properly identifying the business risks and investing in appropriate mitigation controls. However, without continuous monitoring and governance in place, you will inevitably end up finding out about a data breach long after it has occurred – and you’ll be extremely lucky if you learn about it from your security vendor and not from the morning news.

The ultimate security of an organization, and thus its residual risk, depends on the proper mix of complementary components within an IT security portfolio. Gaps in the safeguarding of sensitive systems must be identified and eliminated. Functional overlaps and ineffective measures must give way to more efficient concepts. The KuppingerCole Analysts Portfolio Compass Advisory Services offer you support in the evaluation and validation of existing security controls in your specific infrastructure, with the aim of designing a future-proof and cost-efficient mix of measures. Learn more here or just talk to us.

VMware to Acquire Carbon Black and Pivotal, Aims at the Modern, Secure Cloud Vision

Last week, VMware announced its intent to acquire Carbon Black, one of the leading providers of cloud-based endpoint security solutions. This announcement follows earlier news about acquiring Pivotal, a software development company known for its Cloud Foundry cloud application platform, as well as Bitnami, a popular application delivery service. The combined value of these acquisitions reaches five billion dollars, so this looks like a major upgrade of VMware’s long-term strategy with regard to the cloud.

Looking back at the company’s 20-year history, one cannot but admit VMware’s enormous influence on the very foundation and development of cloud computing; yet its relationship with the cloud has been quite uneven. As a pioneer in hardware virtualization, VMware basically laid the technology foundation for scalable and manageable computing infrastructures, first in on-premises datacenters and later in the public cloud. Over the years, the company has dabbled in IaaS and PaaS services as well, but those attempts weren’t particularly successful: the Cloud Foundry platform was spun out as a separate company in 2013 (the very same Pivotal that VMware is about to buy back now!) and the vCloud Air service was sold off in 2017.

This time, however, the company seems quite resolute about trying again. Why? What has changed in recent years that may give VMware another chance? Quite a lot, to be fair.

First of all, the cloud is no longer a buzzword: most businesses have already figured out its capabilities and potential limitations, outlined their long-term strategies, and are now working on aligning cloud technologies with their business goals. Becoming cloud-native is no longer the answer to all problems; nowadays it always raises the next question: which cloud is good enough for us?

Second, developing modern applications, services or other workloads specifically for the public cloud to fully unlock all its benefits is not an easy job: old-school development tools and methods, legacy on-premises applications (many of which run on VMware-powered infrastructure, by the way) and strict compliance regulations limit the adoption rate. The “lift and shift” approach is usually frowned upon, but many companies have no alternative: the best thing they can dream of is a way to make their applications work the same in every environment, both on-premises and in any of the existing clouds.

Last but not least, the current state of cloud security leaves a lot to be desired, as numerous data breaches and embarrassing hacks of even the largest enterprises indicate. Even though cloud service providers are working hard to offer numerous security tools for their customers, implementing and managing dozens of standalone agents and appliances without leaving major gaps between them is a challenge few companies can master.

This is where VMware’s new vision comes in: an integrated platform for developing, running and securing business applications that work consistently across every device, on-premises or mobile, and in every major cloud, with consistent proactive security built directly into this unified platform instead of being bolted onto it in many places. VMware’s own infrastructure technologies, which can now run natively on the AWS and Azure clouds, combined with Pivotal’s Kubernetes-powered application platform and Carbon Black’s cloud-native security analytics, which can now monitor every layer of the computing stack, are expected to provide an integrated foundation for such a platform in the very near future.

How quickly and consistently VMware will be able to deliver on this promise remains to be seen, of course. Hopefully, third time’s a charm! 

Passwordless for the Masses

What an interesting coincidence: I’m writing this just after finishing a webinar where we talked about the latest trends in strong authentication and the ways to eliminate passwords within an enterprise. There could not have been a better time for the latest announcement from Microsoft, introducing Azure Active Directory support for passwordless sign-in using FIDO2 authentication devices.

Although most people agree that passwords are no longer an even remotely adequate authentication method for the modern digital and connected world, somehow the adoption of more secure alternatives is still quite underwhelming. For years, security experts have warned about compromised credentials being the most common cause of data breaches, pointing out that just by enabling multi-factor authentication, companies may prevent 99% of all identity-related attacks. Major online service providers like Google and Microsoft have been offering this option for years already. The number of vendors offering strong authentication products, ranging from hardware-based one-time password tokens to various biometric methods to simple smartphone apps, is staggering – surely there is a solution for any use case on the market today…

Why then are so few individuals and companies using MFA? What are the biggest reasons preventing its universal adoption? Arguably, it all boils down to three major perceived problems: high implementation costs, poor user experience, and lack of interoperability between all those existing products. Alas, having too many options does not encourage wider adoption – if anything, it has the opposite effect. If an organization wants to provide consistently strong authentication experiences to users of different hardware platforms, application stacks, and cloud services, they are forced to implement multiple incompatible solutions in parallel, driving costs and administration efforts up, not down.

The FIDO Alliance was founded back in 2013, promising to establish certified interoperability among various strong authentication products. KuppingerCole has been following its developments closely ever since, even awarding the Alliance twice with our EIC Awards for the Best Innovation and Best Standard project. Unfortunately, adoption of FIDO-enabled devices was far from universal and mostly limited to individuals, although large-scale consumer-oriented projects supported by vendors like Samsung, Google or PayPal did succeed. Lack of consistent support for the standard in browsers restricted its popularity even further.

Fast forward to early 2019, however, and the second version of the FIDO specification has been adopted as a W3C standard, ensuring its consistent support in all major web browsers, as well as in Windows 10 and Android platforms. The number of online services that support FIDO2-based strong authentication is now growing much faster than in previous years and yet, many experts would still argue that the standard is too focused on consumers and not a good fit for enterprise deployments.

Well, this week Microsoft announced that FIDO2 security devices are now supported in Azure Active Directory, meaning that any Azure AD-connected application or service can immediately benefit from this secure, standards-based and convenient experience. Users can now authenticate themselves using a YubiKey or any other compatible security device, the Microsoft Authenticator mobile app, or the native Windows Hello framework.
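
For those who have not yet touched FIDO2 in practice, this is roughly what the browser side of such a sign-in looks like: the relying party sends a random challenge, and the browser asks the security key (or platform authenticator) to sign it through the standard WebAuthn API. A minimal sketch with a hypothetical relying-party domain; in a real flow, the challenge and the list of allowed credentials come from the server:

```typescript
// Minimal WebAuthn sign-in sketch (browser-side). In a real flow the
// challenge and allowed credential IDs are fetched from the relying party.
async function signInWithSecurityKey(): Promise<void> {
  const options: PublicKeyCredentialRequestOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rpId: "login.example.com",    // hypothetical relying-party domain
    userVerification: "required", // PIN or biometric on the device itself
    timeout: 60_000,
  };

  // The browser prompts for the FIDO2 device; the private key never leaves it.
  const assertion = (await navigator.credentials.get({
    publicKey: options,
  })) as PublicKeyCredential;

  // The signed assertion is then sent to the server for verification
  // against the public key registered during enrollment.
  console.log("credential id:", assertion.id);
}
```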

With Azure Active Directory being the identity platform behind Microsoft’s own cloud services like Office 365 and Azure Cloud, as well as one of the most popular cloud-based IAM services for numerous 3rd party applications, can this be any more “enterprise-y”?

We realize that the service is still in the preview stage, so there are a few kinks left to iron out, but in the end, this announcement may be the final push for many companies that were considering adopting some form of modern strong authentication but were wary of the challenges mentioned earlier. Going fully passwordless is not something that can be achieved in a single step, but Microsoft has made it even easier now, with more traditional MFA options and even support for legacy apps still available when needed.

And, of course, this could be a major boost for FIDO2 adoption in the enterprise world, which we can only wholeheartedly welcome.

API Security in Microservices Architectures

Microservice-based architectures allow businesses to develop and deploy their applications in a much more flexible, scalable and convenient way – across multiple programming languages, frameworks and IT environments. As with any other new technology that DevOps and security teams have started to explore in recent years, there is still quite a lot of confusion about the capabilities of new platforms, misconceptions about new attack vectors, and renewed discussions about balancing security with the pace of innovation. And perhaps the biggest myth about microservices is that their security somehow takes care of itself.

Let’s get one thing out of the way first: microservices on their own are nothing more than a method of designing applications as an interconnected system of loosely coupled, business-focused components. There is nothing inherent to microservices that would make them more resilient against cyber threats or prevent sensitive data from being stolen. On the contrary, microservice-based architectures rely on new tools and technologies, and those bring new security challenges and demand new skills to mitigate them efficiently.

In fact, even if we disregard the “architectural” risks of microservices, like cascading failures or service discovery abuse, we have to agree that a modern loosely coupled application is subject to the same risks as a traditional monolithic one – ranging from low-level infrastructure exploits to the communication layer and all the way up to attacks targeting the application users. And perhaps no attack vector is more critical than APIs.

As we discussed in a recent KuppingerCole webinar, even in more traditional scenarios API security is still something that many businesses tend to underestimate and neglect, hoping that existing tools like web application firewalls will be sufficient to secure their business APIs. Unfortunately, this could not be further from the truth – APIs are subject to numerous risks that can only be successfully mitigated with a properly designed strategy that covers the whole API lifecycle – even before any code is written, let alone deployed to a backend.

In microservice-based applications, where hundreds of individual microservices are communicating with each other and with the outside world exclusively through APIs, the difficulty of securing all those interactions increases exponentially. Due to the nature of these applications, individual API endpoints become ephemeral, appearing as new containers are spun up, migrating between environments and disappearing again. And yet each of them must be secured by proper access control, threat protection, input validation, bot mitigation, and activity monitoring solutions – all those jobs which are typically performed by an API gateway. How many API gateways would you need for that?

Another challenge of microservice-based architectures is their diversity – when individual microservices are written using different development frameworks and deployed to different platforms, providing consistent authentication and authorization becomes a problem: ensuring that all components agree on a common access rights model, that they understand the same access token format, that token exchange scales properly, and that sensitive attributes flowing between services are not exposed to the outside world. The same considerations apply to network-level communications: isolation, segmentation, traffic encryption – these are just some of the issues developers have to think about. Preferably in advance.
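
To illustrate just one item from that list – the common token format – here is a minimal sketch of a shared validation routine for HMAC-signed JWTs that every service, whatever its framework, would have to apply identically. It uses nothing but Node’s built-in crypto; real deployments would more likely verify asymmetric signatures (e.g. RS256) against keys fetched from the identity provider:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Every service in the mesh must apply the same validation rules,
// or an attacker will simply target the weakest one.
export function verifyJwtHs256(token: string, secret: string): object {
  const [header, payload, signature] = token.split(".");
  if (!header || !payload || !signature) throw new Error("malformed token");

  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");

  // Constant-time comparison to avoid timing side channels.
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) {
    throw new Error("invalid signature");
  }

  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp && claims.exp < Date.now() / 1000) {
    throw new Error("token expired");
  }
  return claims;
}
```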

Does all this mean that making microservices secure is too much of a hassle, one that undoes all the speed and convenience of the architecture? Not at all, but the key point here is that you need to do it the right way from the very beginning of your microservices journey. And luckily, you do not have to walk alone – others have faced the same challenges, and many have already figured them out. Some have even come up with convenient tools and frameworks that will take care of these problems for you.

Consider modern API security solutions that do not just focus on static infrastructure, but cover everything from proactive risk assessment of your API contracts to ensuring that each of your microservices is secured by a tiny centrally managed API microgateway. Or the protocols and standards designed specifically for microservices like Secure Production Identity Framework for Everyone (SPIFFE) – essentially the “next-gen PKI” for dynamic heterogeneous software systems. Or even full-featured service mesh implementations that provide a control and security foundation for your microservices – reinventing the wheel is the last thing you need to think about.
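
To give you a flavor of the SPIFFE part: every workload receives a verifiable identity document (an SVID) carrying a URI of the form spiffe://trust-domain/workload-path, and services authorize each other based on that identity rather than on IP addresses or shared secrets. A tiny sketch of the authorization side, assuming the SPIFFE ID has already been extracted from a validated certificate (the trust domain and paths are, of course, made up):

```typescript
// A SPIFFE ID is a URI like spiffe://prod.example.com/payments/api.
// It is delivered inside an X.509 SVID certificate (as a URI SAN) that has
// already been verified against the trust domain's CA bundle.
function isAuthorizedPeer(spiffeId: string): boolean {
  const url = new URL(spiffeId);
  if (url.protocol !== "spiffe:") return false;

  // Only trust workloads from our own trust domain...
  if (url.host !== "prod.example.com") return false;

  // ...and only the specific services allowed to call us (hypothetical paths).
  const allowedCallers = ["/payments/api", "/orders/worker"];
  return allowedCallers.includes(url.pathname);
}

console.log(isAuthorizedPeer("spiffe://prod.example.com/payments/api")); // true
console.log(isAuthorizedPeer("spiffe://evil.example.org/payments/api")); // false
```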

In fact, the only thing you absolutely must do yourself is to keep an open mind and never stop learning – about the recent technologies and tools, about the newest design patterns and best practices, and, of course, about the latest cyber threats and other risks. Needless to say, we are here to support you on this journey. See you at one of our upcoming events!

Oops, Google Did It Again!

Like many people with a long career in IT, I have numerous small computer-related side duties I’m supposed to perform for my less skilled friends and relatives. Among those, I’m helping manage a G Suite account for a small business a friend of mine has. Needless to say, I was a bit surprised to receive an urgent e-mail alert from Google yesterday, telling me that several users in that G Suite domain were impacted by a password storage problem.

Turns out, Google has just discovered that they’ve accidentally stored some of those passwords unencrypted, in plain text. Apparently, this problem can be traced back to a bug in the G Suite admin console, which has been around since 2005 (which, if I remember correctly, predates not just the “G Suite” brand, but the whole idea of offering Google services for businesses).

Google is certainly not the first large technology vendor caught violating one of the most basic security hygiene principles – just a couple of months earlier we heard the same story about Facebook. I’m pretty sure they won’t be the last, either – with the ever-growing complexity of modern IT infrastructures and the abundance of legacy IAM systems and applications, how can you be sure you don’t have a similar problem somewhere?

In Google’s case, the problem wasn’t even in their primary user management and authentication frameworks – it only affected the management console where admins typically create new accounts and then distribute credentials to their users. Including the passwords in plain text. In theory, this means that a rogue account admin could have access to other users’ accounts without their knowledge, but that’s a problem that goes way beyond just e-mail…

So, what can normal users do to protect themselves from this bug? Not much, actually – according to the mail from the G Suite team, they will be forcing a password reset for every affected user as well as terminating all active user sessions starting today. Combined with fixing the vulnerability in the console, this should prevent further potential exploits. 

However, considering the number of similar incidents at other companies, this should be another compelling reason for everyone to finally activate multi-factor authentication for every service that supports it, including Google. Anyone who is already using a reliable MFA method – ranging from smartphone apps like Google Authenticator to FIDO2-based Google Security Keys – is automatically protected from this kind of credential abuse. Just don’t use SMS-based one-time passwords, ok? They were compromised years ago and should not be considered secure anymore.
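
The difference is worth spelling out: app-based one-time passwords never travel over the carrier network at all. The app and the server share a secret and independently derive the current code from the time, as standardized in RFC 6238 – here is a compact sketch of that derivation (with the shared secret as a raw buffer; real apps receive it base32-encoded via the enrollment QR code):

```typescript
import { createHmac } from "node:crypto";

// TOTP (RFC 6238): HMAC the current 30-second time step with the shared
// secret, then dynamically truncate the result to a 6-digit code.
function totp(secret: Buffer, now = Date.now()): string {
  const step = Math.floor(now / 1000 / 30); // 30-second time window
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(step));

  const hmac = createHmac("sha1", secret).update(counter).digest();

  // Dynamic truncation (RFC 4226): the last nibble picks the offset.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// Nothing is ever sent to the phone -- unlike SMS, there is nothing to intercept.
console.log(totp(Buffer.from("12345678901234567890")));
```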

As for the service providers themselves – how do you even start protecting sensitive information under your control if you do not know all the places where it can be stored? A comprehensive data discovery and classification strategy should be the first step towards knowing what needs to be protected. Without it, both large companies like Google and smaller ones like the one that just leaked 50 million Instagram account details will remain not just subjects of sensationalized publications in the press, but constant targets for lawsuits and massive fines for compliance violations.

Remember, the rumors of the password’s death are greatly exaggerated – and protecting these highly insecure but so utterly convenient bits of sensitive data is still everyone’s responsibility.

Artificial Intelligence in Cybersecurity: Are We There Yet?

Artificial Intelligence (along with Machine Learning) seems to be the hottest buzzword in just about every segment of the IT industry nowadays, and not without reason. The very idea of teaching a machine to mimic the way humans think (but much, much quicker) without the need to develop millions of complex rules sounds amazing: instead, machine learning models are simply trained by feeding them with large amounts of carefully selected data.

There is, however, a subtle but crucial distinction between “thinking like a human” (which in academic circles is usually referred to as “Strong AI” and to this day remains largely a philosophical concept) and “performing intellectual tasks like a human”, which is the gist of Artificial General Intelligence (AGI). The latter is an active research field with dozens of companies and academic institutions working on various practical applications of general AI. Much more prevalent, however, are the applications of Weak Artificial Intelligence, or “Narrow AI”, which can only be trained to solve a single and rather narrow task – like language processing or image recognition.

Although the theoretical foundations of machine learning go back to the 1940s, only recently has a massive surge in available computing power, thanks to cloud services and specialized hardware, made it accessible to everyone. Thousands of startups are developing AI-powered solutions for all kinds of problems. Some of those, like intelligent classification of photos or virtual voice assistants, are already an integral part of our daily lives; others, like driverless cars, are expected to become reality in a few years.

AIs are already beating humans at games and even in public debates – surely they will soon replace us in other important fields, like cybersecurity? Well, this is exactly where reality often fails to match customer expectations fueled by the intense hype wave that still surrounds AI and machine learning. Looking at various truly amazing AI applications developed by companies like Google, IBM or Tesla, some customers tend to believe that sooner or later AIs are going to replace humans completely, at least in some less creative jobs.

When it comes to cybersecurity, it’s hard to blame them, really… As companies go through the digital transformation, they are facing new challenges: growing complexity of their IT infrastructures, massive amounts of sensitive data spread across multiple clouds, and the increasing shortage of skilled people to deal with them. Even large businesses with strong security teams cannot keep up with the latest cybersecurity risks.

Having AI as a potential replacement for overworked humans – ensuring that threats and breaches are detected and mitigated in real time without any manual forensic analysis and decision-making – that would be awesome, wouldn’t it? Alas, people waiting for solutions like that need a reality check.

First, artificial intelligence, at least in its practical definition, was never intended to replace humans, but rather to augment their powers by automating the most tedious and boring parts of their jobs and leaving more time for creative and productive tasks. Upgrading to AI-powered tools from traditional “not-so-smart” software products may feel like switching from pen and paper to a computer, but both just provide humans with better, more convenient tools to do their job faster and with less effort.

Second, even leaving all potential ethical consequences aside, there are several technological challenges that need to be addressed specifically for the field of cybersecurity.

  • Availability and quality of the training data required for cybersecurity-related ML models. This data almost always contains massive amounts of sensitive information – intellectual property, PII or otherwise strictly regulated data – which companies aren’t willing to share with security vendors.
  • Formal verification and testing of machine learning models is a massive challenge of its own. Making sure that an AI-based cybersecurity product does not misbehave under real-world conditions (or indeed under adversarial examples specifically crafted to deceive ML models) is something that vendors are still figuring out, and in many cases, this is only possible through a collaboration with customers.
  • While in many applications it’s perfectly fine to train a model once and then use it for years, the field of cybersecurity is constantly evolving, and models must be continuously updated, expanded and retrained on newly discovered threats.

Does it mean that AI cannot be used in cybersecurity? Not at all – in fact, the market is already booming, with numerous AI/ML-powered cybersecurity solutions available right now: solutions that aim to offer deeper, more holistic real-time visibility into the security posture of an organization across multiple IT environments; to provide intelligent assistance for human forensic analysts by making their job more productive; to help identify previously unknown threats. In other words, to augment but definitely not to replace humans!

Perhaps the most popular approach is applying Big Data Analytics methods to raw security data for detecting patterns or anomalies in network traffic flows, application activities or user behavior. This method has led to the creation of whole new market segments variously referred to as security intelligence platforms or next-generation SIEM. These tools manage to reduce the number of false positives and other noise generated by traditional SIEMs and provide a forensic analyst with a low number of context-enriched alerts ranked by risk scores and often accompanied by actionable mitigation recommendations.
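
Under the hood, many of these products start from something conceptually as simple as baselining “normal” behavior and scoring deviations from it. Here is a deliberately naive sketch of the idea, using a z-score over per-user event counts – production systems obviously layer far more sophisticated models on top, but the principle of “learn the baseline, flag the outliers” is the same:

```typescript
// Naive behavioral baseline: flag users whose activity deviates strongly
// from the historical mean. Real UEBA products use far richer features.
function zScore(history: number[], today: number): number {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return (today - mean) / Math.sqrt(variance || 1);
}

// Daily failed-login counts for one user over the past two weeks.
const failedLogins = [2, 0, 1, 3, 2, 1, 0, 2, 1, 2, 3, 1, 0, 2];

const score = zScore(failedLogins, 45); // today's count
if (score > 3) {
  // Enrich with context (geo, device, time of day) before alerting,
  // to keep the false-positive rate manageable.
  console.log(`Anomaly: failed logins ${score.toFixed(1)} sigmas above normal`);
}
```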

Another class of AI solutions for cybersecurity is based on true cognitive technologies – such as language processing and semantic reasoning. Potential applications include generating structured threat intelligence from unstructured textual and multimedia data (ranging from academic research papers to criminal communications on the Dark Web), proactive protection against phishing attacks or, again, intelligent decision support for human experts. Alas, we have yet to see sufficiently mature products of this kind on the market.

It’s also worth noting that some vendors are already offering products bearing the “autonomous” label. However, customers should take such claims with a pinch of salt. Yes, products like the Oracle Autonomous Database or Darktrace’s autonomous cyber-defense platform are based on AI and are, to a degree, capable of automated mitigation of various security problems, but they are still dependent on their respective teams of experts ready to intervene if something does not go as planned. That’s why such solutions are only offered as a part of a managed service package – even the best “autonomous AIs” still need humans from time to time…

So, is Artificial Intelligence the solution for all current and future cybersecurity challenges? Perhaps, but please do not let over-expectations or fears affect your purchase decisions. Thanks to the ongoing developments both in narrow and general AI, we already have much better security tools than just a few years ago. Yet, when planning your future security strategy, you still must think in terms of risks and the capabilities needed to mitigate them, not in terms of technologies.

Also, don’t forget that cybercriminals can use AI to create better malware, too. In fact, things are just starting to get interesting!

Oslo, We Have a Problem!

As you have certainly already heard, Norsk Hydro, one of the world’s largest aluminum manufacturers and the second biggest hydropower producer in Norway, suffered a massive cyberattack earlier today. According to a very short statement issued by the company, the attack has impacted operations in several of its business areas. To maintain the safety and continuity of their industrial processes, many of the operations had to be switched to manual mode.

The details of the incident are still pretty sparse, but according to the statement at their press conference, the company may have been hit by a ransomware attack. Researchers are currently speculating that it was most likely LockerGoga, a strain of malware that affected the French company Altran Technologies back in January. This particular strain is notable for having been signed with a valid digital certificate, although that certificate has since been revoked. Also, only a few antimalware products are currently able to detect and block it.

It appears that the IT people at Norsk Hydro are currently trying to contain the fallout from the attack, including asking their employees not to turn on their computers and even shutting down the corporate website. Multiple shifts are working manually at the production facilities to ensure that there is no danger to people’s safety and to minimize financial impact.

We will hopefully see more details about the incident later, but what can we learn from Norsk Hydro’s initial response? First and foremost, we have another confirmation that this kind of incident can happen to anybody. No company, regardless of its industry, size or security budget, can assume that its business or industrial networks are immune to such attacks, or that it already has controls in place that defend against all possible security risks.

Second, here we have another textbook example of how not to handle public relations during a security incident. We can assume that a company of that scale has at least some kind of plan for worst-case scenarios like this – but does it go beyond playbooks for security experts? Have the company’s executives ever been trained for this level of media attention? And whose idea was it anyway to limit public communications to a Facebook page?

Studies in other countries (like this report from the UK government) indicate that companies are shockingly unprepared for such occasions, with many lacking even a basic incident response plan. However, even having one on paper does not guarantee that everything will go according to plan. The key to effective incident management is preparation, and this should include awareness among all the people involved, clearly defined roles and responsibilities, access to external experts if needed, but above anything else – practice!

KuppingerCole’s top three recommendations would be the following:

  1. Be prepared! You must have an incident response plan that covers not just the IT aspects of a cyberattack, but also the organizational, legal, financial and public relations means of dealing with its fallout. It is essential that the company’s senior executives are involved in its design and rehearsals, since they will be front and center in any actual operation.
  2. Invest in the right technologies and products to reduce the impact of cyber incidents as well as those to prevent them from happening in the first place. Keep in mind however that no security tool vendor can do the job of assessing the severity and likelihood of your own business risks. Also, always have a backup set of tools and even “backup people” ready to ensure that essential business operations can continue even during a full shutdown.
  3. You will need help from specialists in multiple areas ranging from cyber forensics to PR, and most companies do not have all those skills internally. Look for partnerships with external experts – and do it before an incident occurs.

If you need neutral and independent advice, we are here to assist you as well!

Building Trust by Design

Trust has somehow become a marketing buzzword recently. There is a lot of talk about “redefining trust”, “trust technologies” or even “trustless models” (the latter is usually applied to Blockchain, of course). To me, this has always sounded… weird.

After all, trust is the foundation of the very society we live in, the key notion underlying the “social contract” that allows individuals to coexist in a mutually beneficial way. For businesses, trust has always been the combined result of two crucial driving forces – reputation and regulation. Gaining a trustworthy reputation takes time, but ruining it can be instantaneous – and it is usually in a business’s best interest not to cheat its customers, or at least not to get caught (and that’s exactly where regulation comes into play!). Through a lengthy process of trial and error, we have more or less figured out how to maintain trust in traditional “tangible” businesses. And then the Digital Transformation happened.

Unfortunately, the dawn of the digital era has not only enabled many exciting new business models but also completely shattered the existing checks and balances. On one hand, the growing complexity of IT infrastructures and the resulting skills shortage have made sensitive digital data much more vulnerable to cyberattacks and breaches. On the other hand, unburdened by regulations and free from public scrutiny, many companies have decided that the lucrative business of hoarding and reselling personal information is worth more than any moral obligation towards their customers. In a way, the digital transformation has brought the Wild West mentality back to modern business – complete with gangs of outlaws, bounty hunters, and snake oil peddlers…

All this has led to a substantial erosion of public trust – between another high-profile data breach and a political scandal about harvesting personal data, people no longer know whom to trust. From banks and retailers to social media and tech companies – this “trust meltdown” isn’t just bad publicity; it leads to substantial brand damage and financial losses. The recent introduction of strict data protection regulations like GDPR, with their massive fines for privacy violations, is a sign that legislation is finally catching up, but will compliance alone fix the trust issue? What other methods and technologies can companies utilize to restore their reputations?

Well, the first and foremost measure is always transparency and open communication with customers. And this isn’t limited to breach disclosure – on the contrary, companies must demonstrate their willingness to improve data protection and educate customers about the hidden challenges of the “digital society”. Another obvious approach is simply minimizing the personal data collected from customers and implementing proper consent management. Sure, this is already one of the primary stipulations of regulations like GDPR, but compliance isn’t even the primary benefit here: for many companies, the cost savings on data protection and the reputation improvements alone will already outweigh the potential (and constantly dwindling) profits from collecting more PII than necessary.

Finally, we come to the notion of security and privacy “by design”. This term has also become a buzzword for security vendors eager to sell you another data protection or cybersecurity solution. Again, it’s important to stress that just purchasing a security product does not automatically make a business more secure and thus more trustworthy. However, incorporating certain security- and privacy-enhancing technologies into the very fabric of your business processes may, in fact, bring noticeable improvements, and not just to your company’s public reputation.

Perhaps the most obvious example of such a technology is encryption. It’s ubiquitous, cheap to implement and gives you a warm feeling of safety, right? Yes, but making encryption truly inclusive and end-to-end, ensuring that it covers all environments from databases to cloud services, and, last but not least, that the keys are managed properly is not an easy challenge. However, to make data-centric security the foundation of your digital business, you need to go deeper still. Without identity, modern security simply cannot fulfill its potential, so you’ll need to add dynamic centralized access control to the mix. And then security monitoring and intelligence with a pinch of AI. Thus, step by step, you’ll eventually reach the holy grail of modern IT – Zero Trust (wait, weren’t we going to boost trust, not get rid of it? Alas, such is the misleading nature of many popular buzzwords nowadays).
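
Coming back to the key management point for a moment: what “managed properly” usually means in practice is some form of envelope encryption – each record gets its own one-off data key, and only those data keys are protected by a master key that should live in a KMS or HSM. A condensed sketch of the pattern with Node’s built-in crypto (the master key sits in application memory here purely for illustration; in a real system it never leaves the key management service):

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// Envelope encryption: a fresh data key per record, itself encrypted
// ("wrapped") by a master key that should live in a KMS or HSM.
const masterKey = randomBytes(32); // illustration only -- never keep this in app memory

function encryptRecord(plaintext: Buffer) {
  const dataKey = randomBytes(32); // one-off key for this record only

  // Encrypt the payload with the data key (AES-256-GCM, authenticated).
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", dataKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag();

  // Wrap the data key with the master key; only the wrapped copy is stored.
  const keyIv = randomBytes(12);
  const wrapper = createCipheriv("aes-256-gcm", masterKey, keyIv);
  const wrappedKey = Buffer.concat([wrapper.update(dataKey), wrapper.final()]);

  // Rotating the master key then only requires re-wrapping the data keys,
  // not re-encrypting every record.
  return { ciphertext, iv, tag, wrappedKey, keyIv, keyTag: wrapper.getAuthTag() };
}
```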

For software development companies, investing in security by design can look complicated at first, too. From source code testing to various application hardening techniques to API security – writing secure applications is hard, and modern technologies like containers and microservices make it even harder, don’t they? This could not be further from the truth, however: modern development methodologies like DevOps and DevSecOps actually focus on reducing the strain on programmers through intelligent automation, unified architectures across hybrid environments, and a better experience for users, who are learning to appreciate programs that do not break under high load or cyberattacks.

But it does not even have to be that complicated. Consider Consumer Identity and Access Management platforms, for example. Replacing a homegrown user management system with such a platform not only dramatically improves the experience for your current and potential customers – with built-in privacy and consent management features, it also gives users better control over their online identities, boosting their trust considerably. And in the end, you get to know your customers better while reducing your own investments into IT infrastructure and operations. It can’t really get better than this.

You see, trust, privacy, and security don’t have to be a liability and a financial burden. With an open mind and a solid strategy, even the harshest compliance regulations can be turned into new business enablers, cost-saving opportunities and powerful messages to the public. And we are always here to support you on this journey.

Who's the Best Security Vendor of Them All?

This week I had an opportunity to visit the city of Tel Aviv, Israel, to attend one of the Microsoft Ignite | The Tour events the company is organizing to bring the latest information about its new products and technologies closer to IT professionals around the world. Granted, the Tour includes other cities closer to home as well, but the one in Tel Aviv was supposed to have an especially strong focus on security, and the weather there in January is so warm – so here I was!

I do have to confess, however, that the first day was somewhat boring – although I can imagine that the roughly 2000 visitors were enjoying the show, for me as an analyst most of the information presented in the sessions wasn’t really that new. But on the second day, we visited the Microsoft Israel Development Center in nearby Herzliya and had a chance to talk directly to the people leading the development of some of the most interesting products in Microsoft’s security portfolio.

At this moment some readers would probably ask me: wait a minute, are you suggesting that Microsoft is really a security vendor, let alone the best one? Well, that’s where it starts getting interesting! In one of the sessions, the speaker made a strong point for the notion of “good enough security”, explaining that most end-user companies do not really need the best of breed security products, because they’ll eventually end up with a massive number of disjointed tools that need to be managed separately.

Not only does this further increase the complexity of a corporate IT infrastructure that is already complex enough without security; these disconnected tools also fail to deliver a unified view of everything happening within it and are thus unable to detect the most advanced cyber threats. Instead, he argued, a well-integrated solution covering multiple areas of cybersecurity would be more beneficial for most, even if it’s not the best of breed in individual areas. And who has the best opportunity to offer such an integrated solution? Well, Microsoft of course, given its leading positions in several key markets: on endpoints with Windows, in the cloud with Azure and, of course, in the workplace with Office 365.

Now, I’m not sure I like the term “good enough security”, and I definitely do not believe that market domination in one area automatically translates into better opportunities in others, but there is actually a grain of truth behind this bold claim. First of all, being present on so many endpoints, cloud servers, mail servers, and other connected systems, Microsoft is able to collect vast amounts of telemetry data that end up in its Intelligent Security Graph – a single database of security events that can provide security insights and threat intelligence.

Second, even though many people still do not realize it, Microsoft has been a proper security vendor for quite some time already. Even though the company was a late starter in many areas, they are quickly closing the gaps in areas like Endpoint Protection and Cloud Security, and in others, like Information Protection, they are already ahead of their competitors. In recent years, the company has acquired a number of security startups, primarily here in Israel, and making these new products work together seamlessly has been one of their top priorities. This will certainly not happen overnight, but talking to the actual developers gave me a strong impression of their motivation and commitment.

Now, Microsoft has an interesting history of working hard for years to win a completely new market, with impressive successes (like Azure or Xbox) and spectacular failures (remember Windows Mobile?). It also seems that technological excellence plays less of a role here than quality marketing. Unfortunately, this is where the company is still falling short – for example, how many potential customers are even considering Windows Defender Advanced Threat Protection for a shortlist of EDR solutions? Do they even know that Windows Defender is a full-featured EPP/EDR solution and not just the basic antivirus it used to be?

It seems to me that the company is still exploring its marketing strategy, judging by the number of new product names and licensing changes I’ve seen during the last year. We’re down to four product lines now, but I really wish they’d choose one name and stick to it. In the end, do I think that Microsoft is the best security vendor of them all? Of course not – they still have a very long way to go, and there is no such thing as a single “best” security vendor anyway. But they are definitely already beyond the “good enough” stage.

AWS re:Invent Impressions

This year’s flagship conference for AWS – the re:Invent 2018 in Las Vegas – has just officially wrapped. Continuing the tradition, it has been bigger than ever – with more than 50 thousand attendees, over 2000 sessions, workshops, hackathons, certification courses, a huge expo area, and, of course, tons of entertainment programs. Kudos to the organizers for pulling off an event of this scale – I can only imagine the amount of effort that went into it.

I have to confess, however: maybe it’s just me getting older and grumpier, but at times I couldn’t stop thinking that this event is a bit too big for its own good. With the premises spanning no fewer than 7 resorts along the Las Vegas Boulevard, the simple task of getting to your next session becomes a time-consuming challenge. I have no doubt, however, that most of the attendees enjoyed the event program immensely, because application development is supposed to be fun – at least according to the developers themselves!

Apparently, this approach is deeply rooted in the AWS corporate culture as well – their core target audience is still “the builders”: people who already have the goals, skills and desire to create new cloud-native apps and services, and the only thing they need is the necessary tools and building blocks. And that’s exactly what the company is striving to offer – the broadest choice of tools and technologies at the most competitive prices.

Looking at the business stats, it’s obvious that the company remains the distant leader when it comes to Infrastructure-as-a-Service (IaaS) – having such a huge scale advantage over its competitors, the company can still outpace them for years even if its relative growth slows down. Although there have been discussions in the past about whether AWS has a substantial Platform-as-a-Service (PaaS) offering, they can be easily dismissed now – in a sense, “traditional PaaS” is no longer that relevant, giving way to modern technology stacks like serverless and containers. Both are strategic for AWS, and, with the latest announcements about expanding the footprint of the Lambda platform, one can say that the competition in the “next-gen PaaS” field will be even tougher.

Perhaps the only part of the cloud playing field where AWS continues to be notoriously absent is Software-as-a-Service (SaaS) and more specifically enterprise application suites. The company’s own rare forays into this field are unimpressive at best, and the general strategy seems to be “leave it to the partners and let them run their services on AWS infrastructure”. In a way, this reflects the approach Microsoft has been following for decades with Windows. Whether this approach is sustainable in the long term or whether cloud service providers should rather look at Apple as their inspiration – that’s a topic that can be debated for hours… In my opinion, this situation leaves a substantial opening in the cloud market for competitors to catch up and overtake the current leader eventually.

The window of opportunity is already shrinking, however, as AWS is aiming to expand into new markets and to do just about anything technology-related better (or at least bigger and cheaper) than its competitors, as the astonishing number of new product and service announcements during the event shows. They span from low-level infrastructure improvements (faster hardware, better elasticity, further cost reductions) to catching up with competitors on things like managed Blockchain, all the way to all-new, almost science-fiction-looking stuff like robot design and satellite management.

However, to me as an analyst, the most important change in the company’s strategy has been their somewhat belated realization that not all their users are “passionate builders”. And even those who are do not necessarily consider the wide choice of available tools a blessing. Instead, many are looking at the cloud as a means to solve their business problems, and the first thing they need is guidance. And then security and compliance. Services like AWS Well-Architected Tool, AWS Control Tower and AWS Security Hub are the first step in the right direction.

Still, the star topic of the whole event was undoubtedly AI/ML. With a massive number of new announcements, AWS clearly indicates that its goal is to make machine learning accessible not just for hardcore experts and data scientists, but to everyone, no ML expertise required. With their own machine learning inference chips along with the most powerful hardware to run model training and a number of significant optimizations in frameworks running on them, AWS promises to become the platform for the most cutting-edge ML applications. However, on the other end, the ability to package machine learning models and offer them on the AWS Marketplace almost as commodity products makes these applications accessible to a much broader audience – another step towards “AI-as-a-Service”.

Another major announcement is the company’s answer to its competitors’ hybrid cloud developments – AWS Outposts. Here, the company’s approach is radically different from offerings like Microsoft’s Azure Stack or Oracle Cloud at Customer: AWS has decided not to try and package its whole public cloud “in a box” for on-premises deployment. Instead, only the key services like storage and compute instances (the ones that really have to remain on-premises because of compliance or latency considerations, for example) are brought to your data center, while the whole control plane remains in the cloud, and these local services appear as a seamless extension of the customer’s existing virtual private cloud in their region of choice. The idea is that customers will be able to launch additional services on top of this basic foundation locally – for example, for databases, machine learning or container management. To manage Outposts, AWS offers a choice of two control planes: either the company’s native management console or VMware Cloud management tools and APIs.

Of course, this approach won’t be able to address certain use cases like occasionally-connected remote locations (on ships, for example), but for a large number of customers, AWS Outposts promises significantly reduced complexity and better manageability of their hybrid solutions. Unfortunately, not many technical details have been revealed yet, so I’m looking forward to further updates.

There were a number of announcements regarding AWS’s database portfolio as well, meaning that customers now have an even bigger choice of available database engines. Here, however, I’m not necessarily buying into the notion that more choice translates into more possibilities. Surely, managed MySQL, Memcached or any other open source database will be “good enough” for a vast number of use cases, but meeting the demands of large enterprises is a different story. Perhaps a topic for an entirely separate blog post.

Oh, and although I absolutely recognize the value of a “cryptographically verifiable ledger with centralized trust” for many use cases which people currently are trying (and failing) to implement with Blockchains, I cannot but note that “Quantum Ledger Database” is a really odd choice of a name for one. What does it have to do with quantum computing anyway?

After databases, the expansion of the company’s serverless compute portfolio was the second biggest part of AWS CTO Werner Vogels’ keynote. Launched four years ago, AWS Lambda has proven immensely successful with developers as a concept, but integrating this radically different way of developing and running code in the cloud into traditional development workflows has not been particularly easy. This year the company announced multiple enhancements both to the Lambda engine itself – you can now use programming languages like C++, PHP or COBOL to write Lambda functions, or even bring your own custom runtime – and to the developer toolkit around it, including integrations with several popular integrated development environments.
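
The custom runtime mechanism itself is refreshingly simple: your process long-polls the runtime API for the next event, invokes your handler, and posts the result back. Here is a rough sketch of that loop based on the documented HTTP endpoints (Node 18+ for the built-in fetch; all error handling omitted) – which is exactly why any language that can speak HTTP can become a Lambda runtime:

```typescript
// Skeleton of a Lambda custom runtime: poll for an event, handle it,
// post the response. AWS_LAMBDA_RUNTIME_API is injected by the platform.
const api = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

async function handler(event: unknown): Promise<unknown> {
  return { echoed: event }; // your business logic goes here
}

async function main(): Promise<void> {
  while (true) {
    // Long-poll for the next invocation event.
    const next = await fetch(`${api}/invocation/next`);
    const requestId = next.headers.get("Lambda-Runtime-Aws-Request-Id");
    const event = await next.json();

    const result = await handler(event);

    // Report the result for this specific invocation.
    await fetch(`${api}/invocation/${requestId}/response`, {
      method: "POST",
      body: JSON.stringify(result),
    });
  }
}

main();
```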

Notably, the whole serverless computing platform has been re-engineered to run on top of AWS’s own lightweight virtualization technology called Firecracker, which ensures more efficient resource utilization and better tenant isolation that translates into better security for customers and even further potential for cost savings.

These were the announcements that have especially caught my attention during the event. I’m pretty sure that you’ll find other interesting things among all the re:Invent 2018 product announcements. Is more always better? You decide. But it sure is more fun!
