Blog posts by Alexei Balaganski

Increase Accuracy in Demand Forecasting with Artificial Intelligence

Demand forecasting is one of the most crucial factors that determine the success of every business, online or offline, retail or wholesale. Being able to predict future customer behavior is essential for optimal purchase planning, supply chain management, reducing potential risks and improving profit margins. In some form, demand prediction has existed since the dawn of civilization, just as long as commerce itself.

Yet even nowadays, when businesses have much more historical data available for analysis and a broad range of statistical methods to crunch it, demand forecasting is still not a hard science and often relies on expert decisions based on intuition alone. With all the hype surrounding artificial intelligence’s potential applications in just about any line of business, it’s no wonder that many experts expect it to make one of its biggest impacts on demand planning as well.

Benefits of AI applications in demand forecasting

But what exactly are the potential benefits of this new approach as opposed to traditional methods? Well, the most obvious one is efficiency due to the elimination of the human factor. Instead of relying on intuition, machine learning-based methods operate on quantifiable data, both from the business’s own operational history and from various sources of market intelligence that may influence demand fluctuations (such as competitor activities, price changes, or even the weather).

On the other hand, most traditional statistical demand prediction methods were designed to approximate specific use cases: quick vs. slow fluctuations, large vs. small businesses, and so on. Selecting the right combination of those methods requires answering a lot of questions you currently might not even anticipate, let alone know the right answers to. Machine learning-based business analytics solutions are known for helping companies discover previously unknown patterns in their historical data and thus for removing a substantial part of the guesswork from predictions.
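
To make this a bit more concrete, below is a minimal sketch of what such a machine learning-based forecast could look like in practice. It assumes a historical sales file with engineered demand drivers such as price, promotions and weather; the file name, column names and the choice of a gradient boosting model are purely illustrative assumptions, not a reference to any particular product.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Hypothetical input: one row per product and week, with demand drivers as columns.
df = pd.read_csv("sales_history.csv")  # assumed file and schema
features = ["price", "competitor_price", "promo_flag", "week_of_year", "avg_temperature"]
X, y = df[features], df["units_sold"]

# Keep the chronological order when splitting so we validate on the most recent weeks.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

# Forecast error on the held-out weeks; lower is better.
print("MAPE:", mean_absolute_percentage_error(y_test, model.predict(X_test)))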

Last but not least, the market already has quite a few ready-made solutions to offer, either as standalone platforms or as part of bigger business intelligence solutions. You don’t need to reinvent the wheel anymore: just connect one of those solutions to your historical data, and the rest, including multiple sources of external market intelligence, will be at your fingertips.

What about challenges and limitations?

Of course, one has to consider the potential challenges of this approach as well. The biggest one has nothing to do with AI itself: it’s all about the availability and quality of your own data. Machine learning models require lots of input to deliver quality results, and far from every company has all this information in a form ready for sharing yet. For many, the journey towards an AI-powered future has to start with breaking down the silos and making historical data unified and consistent.

This does not apply just to sales operations, by the way. Efficient demand prediction can only work when data can be correlated across all business units, including logistics, marketing, and others. If your (or your suppliers’, for that matter) primary analytics tool is still Excel, thinking about artificial intelligence is probably a bit premature.

A major inherent problem of many AI applications is explainability. For many people, not being able to understand how exactly a particular prediction has been reached can be a major cause of distrust. Of course, this is primarily an organizational and cultural challenge, but a major one nevertheless.

However, these challenges should not be seen as an excuse to ignore the new AI-based solutions completely. Artificial intelligence for demand forecasting is no longer just a theory. Businesses across various verticals are already using it with varying but undeniably positive results. Researchers claim that machine learning methods can achieve up to 50% better accuracy than purely statistical approaches, to say nothing of human intuition.

If your company is not yet ready to embrace AI, make sure you start addressing your shortcomings before your competitors do. In the age of digital transformation, having business processes and business data agile and available for new technologies is a matter of survival, after all. More efficient demand forecasting is just one of the benefits you’ll be able to reap afterward.

Feel free to browse our Focus Area: AI for the Future of your Business for more related content.

There Is No “One Stop Shop” for API Management and Security Yet

From what used to be a purely technical concept created to make developers’ lives easier, Application Programming Interfaces (APIs) have evolved into one of the foundations of modern digital business. Today, APIs can be found everywhere – at homes and in mobile devices, in corporate networks and in the cloud, even in industrial environments, to say nothing about the Internet of Things.

When dealing with APIs, security should not be an afterthought

In a world where digital information is one of the “crown jewels” of many modern businesses (and even the primary source of revenue for some), APIs are now powering the logistics of delivering digital products to partners and customers. Almost every software product or cloud service now comes with a set of APIs for management, integration, monitoring or a multitude of other purposes.

As often happens in such scenarios, security quickly becomes an afterthought at best or, even worse, is seen as a nuisance and an obstacle on the road to success. The success of an API is measured by its adoption, and security mechanisms are seen as friction that limits this adoption. There are also several common misconceptions around the very notion of API security, notably the idea that existing security products like web application firewalls are perfectly capable of addressing API-related risks.

An integrated API security strategy is indispensable

Creating a well-planned strategy and a reliable infrastructure to expose business functionality securely for consumption by partners, customers, and developers is a significant challenge that has to be addressed not just at the gateway level, but along the whole information chain, from backend systems to endpoint applications. It is therefore obvious that point solutions addressing specific links in this chain are not viable in the long term.

Only by combining proactive application security measures for developers, continuous activity monitoring and deep API-specific threat analysis for operations teams, and smart, risk-based, actionable automation for security analysts can one ensure consistent management, governance and security of corporate APIs, and thus the continuity of the business processes that depend on them.

Security challenges often remain underestimated

We have long recognized the API Economy as one of the most important current IT trends. Rapidly growing demand for exposing and consuming APIs, which enables organizations to create new business models and connect with partners and customers, has tipped the industry towards adopting the lightweight RESTful APIs that are commonly used today.

Unfortunately, many organizations tend to underestimate the potential security challenges of opening up their APIs without a security strategy and infrastructure in place. Popular emerging technologies such as the Internet of Things or Software Defined Computing Infrastructure (SDCI), which rely heavily on API ecosystems, are also bringing new security challenges with them. New distributed application architectures, such as those based on microservices, are introducing their own share of technical and business problems as well.

KuppingerCole’s analysis is primarily looking at integrated API management platforms, but with a strong focus on security features either embedded directly into these solutions or provided by specialized third party tools closely integrated with them.

The API market has changed dramatically within just a few years

When we started following the API security market over 5 years ago, the industry was still in a rather early, emerging stage: most large vendors focused primarily on operational capabilities, threat protection functions built into API management platforms were very rudimentary, and dedicated API security solutions were almost non-existent. In just a few years, the market has changed dramatically.

On one hand, the core API management capabilities are quickly becoming almost a commodity, with, for example, every cloud service provider offering at least some basic API gateway functionality built into their cloud platforms, utilizing their native identity management, monitoring, and analytics capabilities. Enterprise-focused API management vendors are therefore looking into expanding the coverage of their solutions to address new business, security or compliance challenges. Some more future-minded vendors even no longer consider API management a separate discipline within IT and offer their existing tools as part of larger enterprise integration platforms.

On the other hand, growing public awareness of API security challenges has dramatically increased the demand for specialized tools for securing existing APIs. This has led to the emergence of numerous security-focused startups offering innovative solutions, usually within a single area of the API security discipline.

Despite consolidation, there is no “one stop shop” for API security yet

Unfortunately, the field of API security is very broad and complicated, and very few (if any) vendors are currently capable of delivering a comprehensive security solution that could cover all required functional areas. Although the market is already showing signs of undergoing consolidation, with larger vendors acquiring these startups and incorporating their technologies into existing products, expecting to find a “one stop shop” for API security is still a bit premature.

Although the current state of the API management and security market is radically different from the situation just a few years ago, and the overall developments are extremely positive, indicating growing demand for more universal and convenient tools and increasing quality of available solutions, the market has yet to reach anything resembling maturity. It is thus even more important for companies developing their API strategies to be aware of the current developments and to look for solutions that implement the required capabilities and integrate well with other existing tools and processes.

Hybrid deployment model is the only flexible and future-proof security option

Since most API management solutions are expected to provide management and protection for APIs regardless of where they are deployed – on-premises, in any cloud or within containerized or serverless environments – the very notion of the delivery model becomes complicated.

Most API management platforms are designed to be loosely coupled, flexible, scalable and environment-agnostic, with a goal to provide consistent functional coverage for all types of APIs and other services. While the gateway-based deployment model remains the most widespread, with API gateways deployed either closer to existing backends or to API consumers, modern application architectures may require alternative deployment scenarios like service meshes for microservices.

Dedicated API security solutions that rely on real-time monitoring and analytics may either be deployed in-line, intercepting API traffic, or rely on out-of-band communication with API management platforms. However, management consoles, developer portals, analytics platforms and many other components are usually deployed in the cloud to enable a single-pane-of-glass view across heterogeneous deployments. A growing number of additional capabilities are now being offered as Software-as-a-Service with consumption-based licensing.

In short, for a comprehensive API management and security architecture a hybrid deployment model is the only flexible and future-proof option. Still, for highly sensitive or regulated environments customers may opt for a fully on-premises deployment.

Required Capabilities

In our upcoming Leadership Compass on API Management and Security, we evaluate products according to multiple key functional areas of API management and security solutions. These include API Lifecycle Management core capabilities, flexibility of Deployment and Integration, developer engagement with Developer Portal and Tools, strength and flexibility of Identity and Access Control, API Vulnerability Management for proactive hardening of APIs, Real-time Security Intelligence for detecting ongoing attacks, Integrity and Threat Protection means for securing the data processed by APIs, and, last but not least, each solution’s Scalability and Performance.

The preliminary results of our comparison will be presented at our Cybersecurity Leadership Summit, which will take place next week in Berlin.

Can Your Antivirus Be Too Intelligent Sometimes?

Current and future applications of artificial intelligence (or should we rather stick to a more appropriate term “Machine Learning”?) in cybersecurity have been one of the hottest discussion topics in recent years. Some experts, especially those employed by anti-malware vendors, see ML-powered malware detection as the ultimate solution to replace all previous-generation security tools. Others are more cautious, seeing great potential in such products, but warning about the inherent challenges of current ML algorithms.

One particularly egregious example of “AI security gone wrong” was covered in an earlier post by my colleague John Tolbert. In short, to reduce the number of false positives produced by an AI-based malware detection engine, developers have added another engine that whitelisted popular software and games. Unfortunately, the second engine worked a bit too well, allowing hackers to mask any malware as innocent code just by appending some strings copied from a whitelisted application.

However, such cases, where bold marketing claims contradict not just common sense but reality itself and thus force engineers to fix their ML model’s shortcomings with clumsy workarounds, are hopefully not particularly common. Still, every ML-based security product faces the same challenge: whenever a particular file triggers a false positive, there is no way to tell the model to just stop it. After all, machine learning is not based on rules; you have to feed the model lots of training data to gradually guide it to a correct decision, and re-labeling just one sample is not enough.
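
As a toy illustration of this point (on synthetic data, and not representative of how any vendor’s engine actually works), re-labeling a single sample and retraining a model that has already seen thousands of examples barely moves its verdict on that sample:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "training set" of 5,000 labeled samples.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X, y)
sample = X[0:1]  # pretend this benign file keeps being flagged
print("score before re-labeling:", clf.predict_proba(sample)[0, 1])

# Flip just this one label (simulating a re-labeled false positive) and retrain from scratch.
y_fixed = y.copy()
y_fixed[0] = 1 - y_fixed[0]
clf_retrained = LogisticRegression(max_iter=1000).fit(X, y_fixed)
print("score after re-labeling: ", clf_retrained.predict_proba(sample)[0, 1])
# The score barely moves: one corrected sample cannot outweigh the thousands
# of examples that shaped the decision boundary.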

This is exactly the problem the developers of Dolphin Emulator have recently faced: for quite some time, every build of their application has been flagged by Windows Defender as malware based on Microsoft’s AI-powered behavior analysis. Every time the developers submitted a report to Microsoft, the build would be dutifully added to the application whitelist, and the case would be closed – until the next build with a different file hash was released.
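
A quick sketch illustrates why whitelisting by file hash cannot keep up with a steady stream of new builds; the file names below are hypothetical:

import hashlib

def file_hash(path: str) -> str:
    # SHA-256 of the whole binary; any changed byte yields a completely different digest.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical build that was reported and whitelisted after a false positive.
whitelist = {file_hash("dolphin-build-12345.exe")}

# The very next build is a different file, so its hash is not on the list and the
# behavioral engine evaluates it from scratch - another false positive is likely.
print(file_hash("dolphin-build-12346.exe") in whitelist)  # False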

Apparently, the way this cloud-based ML-powered detection engine is designed, there is simply no way to fix a false positive once and for all future builds. However, the company obviously does not want to make the same mistake as Cylance and inadvertently whitelist too much, creating potential false negatives. Thus, the developers and users of the Dolphin Emulator are left with only one option: submit more and more false-positive reports and hope that sooner or later the ML engine will “change its mind” on the issue.

Machine learning-enhanced security tools are supposed to eliminate tedious manual labor for security analysts; however, this issue shows that sometimes just the opposite happens. Antimalware vendors, application developers, and even users must do more work to overcome this ML interpretation problem. Yet, does it really mean that incorporating machine learning into an antivirus was a mistake? Of course not, but giving too much authority to an ML engine which is, in a sense, incapable of explaining its decisions and does not react well to criticism, probably was.

Potential solutions for these shortcomings do exist, the most obvious being the ongoing work on making machine learning models more explainable, giving insights into the way they make decisions on particular data samples instead of just presenting themselves to users as a kind of black box. However, we have yet to see commercial solutions based on this research. In the future, a broader approach towards the “artificial intelligence lifecycle” will surely be needed, covering not just developing and debugging models, but stretching from initial training data management all the way up to the ethical and legal implications of AI.

By the way, we’re going to discuss the latest developments and challenges of AI in cybersecurity at our upcoming Cybersecurity Leadership Summit in Berlin. Looking forward to meeting you there! If you want to read up on Artificial Intelligence and Machine Learning, be sure to browse our KC+ research platform.

Do You Need a Chief Artificial Intelligence Officer?

Well, if you ask me, the short answer is – why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration – what exactly would a CAIO’s job in your organization be?

There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.

Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?

Sure, there are companies around the world that have made AI a major part of their core business. For cloud service providers, business intelligence vendors or large manufacturing and logistics companies, AI is core business expertise or even a revenue-generating product. For the rest of us, however, AI is just another toolkit, powerful and convenient, to address specific business challenges.

Whether your goal is to improve the efficiency of your marketing campaigns, to optimize equipment maintenance cycles or to make your IT infrastructure more resilient against cyberattacks – a sensible strategy to achieve such a goal never starts with picking a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person inevitably treats every problem as if it were a nail.

This, of course, by no means implies that companies should not hire AI specialists. However, just like the AI itself was never intended to replace humans, “embracing the AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple years ago adding a Blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations and it finally seems that the fad is transitioning to the productive phase, at least in those usage scenarios where lack of reliable methods of establishing distributed trust was indeed a business challenge.

Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of those lie very much outside of the technology scope and come with social, ethical or legal implications.

In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts that can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (probably different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other, avoiding wasted time and resources. In other words: Think Big, Start Small, Learn Fast.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

Meet the Next-Generation Oracle

Oracle OpenWorld 2019 wrapped up yesterday, and if there is a single word that can describe my impressions of it, it would be “different”. Immediately noticeable was the absence of the traditional Oracle Red spilling into the streets around the Moscone Center in San Francisco, and the reason behind it is the new corporate design system called Redwood. You can already see its colors and patterns applied to the company’s website, but more importantly, it defines new UI controls for Oracle applications and cloud services.

Design, however, is by far not Oracle’s biggest change. It appears that the company has finally reached the stage where a radical cultural shift is inevitable. To adapt to the latest market challenges and to extend its reach towards new customer demographics, Oracle needs to seriously reconsider many of its business practices, just like Microsoft did years ago. And looking at the announcements of this year’s OOW, the company is already making major strides in the right direction.

It’s an open secret that for years, Oracle has been struggling to position itself as one of the leading cloud service providers. Unfortunately, for a latecomer to this market, playing catch-up with more successful competitors is always a losing strategy. It took the company some time to realize that, and now Oracle is trying a different game: learning from others’ mistakes, understanding the challenges and requirements of modern enterprises, and in the end offering a lean, yet complete stack of cloud services that provide the highest level of performance, comprehensive security and compliance controls and, last but not least, intelligent automation for any business process.

The key concept in this vision for Oracle is “autonomy”. To eliminate human labor from cloud management is to eliminate human error, thus preventing the most common cause of data breaches. Last year we saw the announcement of the self-patching and self-tuning Autonomous Database. This time, Autonomous Linux was presented – an operating system that can update itself (including kernel patches) without downtime. It seems that the company’s strategic vision is to make every service in its cloud autonomous in the same sense. Combined with the Generation 2 cloud infrastructure designed specifically to eliminate many network-based attack vectors, this provides additional weight to Oracle’s claim of having a cloud ready to run the most business-critical workloads.

Oracle Data Safe was announced as well: a cloud-based service that improves Oracle database security by identifying risky configurations, users and sensitive data, and that allows customers to closely monitor user activities and ensure data protection and compliance for their cloud databases. Oracle cloud databases now include a straightforward, easy-to-use and free service that helps customers protect their sensitive data from security threats and compliance violations.

It is also worth noting that the company is finally starting to think “outside of the box” with regards to its business strategy as well; or rather, outside of the “Oracle ecosystem” bubble. Strategic partnerships with Microsoft (to establish low-latency interconnections between Azure and Oracle Cloud datacenters) and VMware (to allow businesses to lift and shift their entire VMware stacks to the cloud while maintaining full control over them, something impossible in other public clouds) demonstrate this major paradigm shift in the company’s cloud roadmap.

Even more groundbreaking is arguably the introduction of the new always free tier for cloud services – which is exactly what it says on the tin: an opportunity for every developer, student or even corporate IT worker to use Autonomous Databases, virtual machines, and other core cloud infrastructure services for an unlimited time. Of course, the offer is restricted by allocated resources, but all functional benefits are still there, and not just for testing. Hopefully, Oracle will soon start promoting these tools outside of Oracle events as well. Seen any APEX evangelists around recently?

The Best Security Tool Is Your Own Common Sense

Earlier this week, Germany’s Federal Office for Information Security (better known as the BSI) released its Digital Barometer 2019 (in German), a public survey of private German households that measured their opinions and experience with matters of cybersecurity. Looking at the results, one cannot but admit that they do not look particularly inspiring and that they probably represent the average situation in any other developed country…

According to the study, every fourth respondent has been a victim of cybercrime at least once; the most common incidents include online shopping fraud, phishing attacks and viruses. A further 30% of participants have expressed strong concerns, believing that the risk of becoming such a victim is very high for them. Somewhat unsurprisingly, these concerns do not translate into consistent protection measures. Only 61% of surveyed users have an antivirus program installed, less than 40% update their computers regularly, and only 5% opt for such “advanced” technologies as a VPN.

I’m not entirely sure, by the way, how to interpret these results. Did the BSI count users running Windows, and thus having a very decent antivirus installed by default, as protected? And what about iPhone owners, who are not given any opportunity to secure their devices even if they wished to do so? Also, it’s quite amusing that the creators of the survey consider email encryption a useful cybersecurity measure. Even weirder is the inclusion of regular password changes (a practice that has long been proven useless and is no longer recommended by NIST, for example) coupled with a notable lack of any mention of multi-factor authentication.

More worrying statistics, however, show that although the absolute majority of users have strong concerns about their online safety, very few consider themselves sufficiently informed about the latest developments in this area, and even fewer actually implement the recommendations they do receive.

The results also clearly indicate that victims of cybercrime do not have much faith in the authorities and mostly deal with the consequences themselves or turn to friends and family. Less than a third of such crimes end up reported to the police, which means that we should take the official cybercrime statistics (which incidentally show that the rate of such crimes in Germany grew by 8% last year) with a grain of salt – the real number might be much higher.

The rest of the report talks about various measures the government, the BSI and the police should develop to tackle the problem, but I don’t think many users will see any notable changes in that regard: their online safety is still largely their own concern… So, what recommendations could KuppingerCole give them?

  • Do not blindly spend money on security tools without understanding your risks and how those tools can (or cannot) mitigate them. Most home users do not really need another antivirus or firewall – the ones built into Windows are quite good already. Corporate users, however, require an efficient, multi-level security approach; defining a tailored security portfolio is therefore an important challenge.
  • In fact, investing in a reliable off-site backup solution would make much more sense: even if your device is compromised and your files are destroyed by ransomware, you could always restore them quickly. A good backup will also protect from many other risks and prevent you from losing an important document to simple negligence or a major natural disaster. And by the way: Dropbox and Google Drive are not backup solutions.
  • Activating multi-factor authentication for your online services will automatically protect you from 99% of hackers and fraudsters. It is crucial to do it consistently: not just for your online banking, but for email and social media platforms. By making your accounts impossible to hijack you’re protecting not just yourself, but your online friends as well.
  • Quite frankly, the best security tool is your own common sense. Checking a suspicious-looking email for obvious indicators of fraud, or asking your colleague whether they actually used an obscure website to send you an urgent document before opening it: in most cases, this simple vigilance will help you more than any antivirus or firewall.

For more complicated security-related questions, you can always talk to us!

Facebook Breach Leaves Half a Billion Users Hanging on the Line

It seems that there is simply no end to the long series of Facebook’s privacy blunders. This time, a security researcher has stumbled upon an unprotected server hosting several huge databases containing the phone numbers of 419 million Facebook users from different countries. Judging by the screenshot included in an article by TechCrunch, this looks like another case of a misconfigured MongoDB server exposed to the Internet without any access controls. Each record in those databases contains a Facebook user’s unique ID, which can be easily linked to an existing profile, along with that user’s phone number. Some records also contained additional data like name, gender or location.
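
For administrators wondering whether one of their own databases is exposed in the same way, a minimal check could look like the sketch below; the host name is a placeholder, and such probes should of course only be run against infrastructure you are responsible for. A properly secured server with access control enabled will reject the anonymous query.

from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

# Placeholder host: only probe servers you are responsible for.
client = MongoClient("mongodb://db.example.com:27017/", serverSelectionTimeoutMS=3000)

try:
    # A properly secured server rejects this without credentials.
    print("Databases visible without credentials:", client.list_database_names())
except OperationFailure:
    print("Access control is enabled: credentials are required.")
except ServerSelectionTimeoutError:
    print("Server unreachable (ideally because it is not exposed to the Internet).")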

Facebook has denied that it has anything to do with those databases, and there is no reason to doubt that; the sheer negligence of the case rather points to a third party lacking even basic security competence, perhaps a former Facebook marketing partner. This is by far not the first case of user data being harvested off Facebook by unscrupulous third parties, perhaps the biggest one being the notorious Cambridge Analytica scandal of early 2018. After that, Facebook disabled access to users’ phone numbers for all its partners, so the data leaked this time is perhaps not the most current.

Still, the huge number of affected users and the company’s apparent inability to find any traces of the perpetrators clearly indicate that Facebook hasn’t done nearly enough to protect its users’ privacy in recent times. Until any further details emerge, we can only speculate about the leak itself. What we can do today, however, is try to figure out what users can possibly do to protect themselves from this leak and to minimize the impact of similar data breaches in the future.

First of all, the most common advice “don’t give your phone number to Facebook and the likes” is obviously not particularly helpful. Many online messaging services (like WhatsApp or Telegram) use phone numbers as the primary user identities and simply won’t work without them. Others (like Google, Twitter or even your own bank) rely on phone numbers to perform two-factor authentication. Second, for hundreds of millions of people around the world, this advice comes too late – their numbers are already at the disposal of spammers, hackers, and other malicious actors. And those guys have a few lucrative opportunities to exploit them…

Besides the obvious use of these phone numbers for unsolicited advertising, they can be used to expose people who use pseudonyms on social media and link those accounts to real people – for suppressing political dissent or simply to further improve online user tracking. Alas, the only sensible method of preventing this breach of privacy is to use a separate dedicated phone number for your online services, which can be cumbersome and expensive (not to mention that it had to be done before the leaks!)

Unfortunately, in some countries (including the USA), leaked phone numbers can also be used for SIM swap attacks, where a fraudster tricks a mobile operator to issue them a new SIM card with the same number, effectively taking full control over your “mobile identity”. With that card, they can pose as you in a phone call, intercept text messages with one-time passwords and thus easily take over any online service that relies on your mobile number as the means of authentication.

Can users do anything to prevent SIM swap attacks? Apparently not, at least, until mobile operators are forced by governments to collaborate with police or banks on fighting this type of fraud. Again, the only sensible way to minimize its impact is to move away from phone-based (not-so-)strong authentication methods and adopt a more modern MFA solution: for example, invest in a FIDO2-based hardware key like Yubikey or at least switch to an authenticator app like Authy. And if your bank still offers no alternative to SMS OTP, maybe today is the right time to switch to another bank.
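
For readers curious what an authenticator app actually does, here is a minimal sketch of time-based one-time passwords using the pyotp library (the library choice is an illustrative assumption; the principle is the same in Authy, Google Authenticator and others). The code is derived locally from a shared secret and the current time, so no SMS and no phone number are involved:

import pyotp

# The secret is shared once between the service and the app (usually via a QR code)
# and never travels over the mobile network afterwards.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()            # the six-digit code the app displays
print("Current code:", code)
print("Server-side check:", totp.verify(code))  # True while the code is still valid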

Remember, in the modern digital world, your phone number is the key to your online identity. Keep it secret, keep it safe!

Security Vendor Imperva Reports a Breach

Imperva, a US-based cybersecurity company known for its web application security and data protection products, has disclosed a breach of their customer data. According to the announcement, a subset of the customers for its cloud-based Web Application Firewall solution (formerly known as Incapsula) had their data exposed, including their email addresses, password hashes, API keys, and SSL certificates.

Adding insult to injury, this breach seems to be of the worst kind: it happened long ago, probably in September 2017, and went unnoticed until a third party notified Imperva a week ago. Even though the investigation is still ongoing and not many details have been revealed yet, the company did the right thing by providing a prompt full disclosure along with recommended security measures.

Still, what can we learn or at least guess from this story? First and foremost, even the leading cybersecurity vendors are not immune (or should I say, “impervious”?) to hacking and data breaches, exposing not only their own corporate infrastructures and sensitive data, but creating unexpected attack vectors for their customers. This is especially critical for SaaS-based security solutions, where a single data leak may give a hacker convenient means to attack multiple other companies using the service.

More importantly, however, this highlights the critical importance of having monitoring and governance tools in place in addition to traditional protection-focused security technologies. After all, having an API key for a cloud-based WAF gives a hacker ample opportunity to silently modify its policies, weakening or completely disabling protection of the application behind it. If the customer has no means of detecting these changes and reacting quickly, they will inevitably end up being the next target.
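
As a purely hypothetical sketch of what such detection could look like, the snippet below periodically fetches the WAF policy through a provider API and compares its fingerprint against a known-good baseline; the endpoint, the fetch_waf_policy() helper and the polling interval are placeholders for illustration, not Imperva’s actual API:

import hashlib
import json
import time

import requests

API_ENDPOINT = "https://waf.example.com/api/v1/policies"  # placeholder, not a real vendor API
API_KEY = "read-from-a-secrets-manager"                   # never hard-code real keys

def fetch_waf_policy() -> dict:
    # Hypothetical read-only call that returns the current policy configuration.
    resp = requests.get(API_ENDPOINT, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def fingerprint(policy: dict) -> str:
    # Stable hash of the policy document so any change is easy to spot.
    return hashlib.sha256(json.dumps(policy, sort_keys=True).encode()).hexdigest()

baseline = fingerprint(fetch_waf_policy())
while True:
    time.sleep(300)  # poll every five minutes
    current = fingerprint(fetch_waf_policy())
    if current != baseline:
        print("ALERT: WAF policy changed outside of the normal change process!")
        baseline = current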

Having access to the customer’s SSL certificates opens even broader opportunities for hackers: application traffic can be exposed to various Man-in-the-Middle attacks or even silently diverted to a malicious third party for all kinds of misuse: from data exfiltration to targeted phishing attacks. Again, without specialized monitoring and detection tools in place, such attacks may go unnoticed for months (depending on how long your certificate rotation cycles are). Quite frankly, having your password hashes leaked feels almost harmless in comparison.

So, does this mean that Imperva’s Cloud WAF should no longer be trusted at all? Of course not, but the company will surely have to work hard to restore its product’s reputation after this breach.

Does it mean that SaaS-based security products, in general, should be avoided? Again, not necessarily, but additional risks of relying on security solutions outside of your direct control must be taken into account. Alas, finding the right balance between complexity and costs of an on-premises solution vs. scalability and convenience of “security from the cloud” has just become even more complicated than it was last week.

The bottom line is that although 100% security is impossible to achieve, even with a multi-layered security architecture, the only difference between a good and a bad strategy here is in properly identifying the business risks and investing in appropriate mitigation controls. However, without continuous monitoring and governance in place, you will inevitably end up finding out about a data breach long after it has occurred – and you’ll be extremely lucky if you learn about it from your security vendor and not from the morning news.

The ultimate security of an organization, and thus its residual risk, depends on the proper mix of complementary components within an IT security portfolio. Gaps in safeguarding of sensitive systems must be identified and eliminated. Functional overlaps and ineffective measures must give way to more efficient concepts. The KuppingerCole Analysts Portfolio Compass Advisory Services offers you support in the evaluation and validation of existing security controls in your specific infrastructure, with the aim of designing a future-proof and cost-efficient mix of measures. Learn more here or just talk to us.

VMware to Acquire Carbon Black and Pivotal, Aims at the Modern, Secure Cloud Vision

Last week, VMware announced its intent to acquire Carbon Black, one of the leading providers of cloud-based endpoint security solutions. This announcement follows earlier news about acquiring Pivotal, a software development company known for its Cloud Foundry cloud application platform, as well as Bitnami, a popular application delivery service. The combined value of these acquisitions would reach five billion dollars, so it looks like a major upgrade of VMware’s long-term strategy with regard to the cloud.

Looking back at the company’s 20-year history, one cannot but admit VMware’s enormous influence on the very foundation and development of cloud computing, yet its relationship with the cloud has been quite uneven. As a pioneer in hardware virtualization, VMware basically laid the technology foundation for scalable and manageable computing infrastructures, first in on-premises datacenters and later in the public cloud. Over the years, the company has dabbled in IaaS and PaaS services as well, but those attempts weren’t particularly successful: the Cloud Foundry platform was spun out as a separate company in 2013 (the very same Pivotal that VMware is about to buy back now!) and the vCloud Air service was sold off in 2017.

This time, however, the company seems quite resolute about trying again. Why? What has changed in recent years that may give VMware another chance? Quite a lot, to be fair.

First of all, Cloud is no longer a buzzword: most businesses have already figured out its capabilities and potential limitations, outlined their long-term strategies, and are now working on integrating cloud technologies into their business goals. Becoming cloud-native is no longer an answer to all problems; nowadays it always raises the next question: which cloud is good enough for us?

Second, developing modern applications, services or other workloads specifically for the public cloud to fully unlock all its benefits is not an easy job: old-school development tools and methods, legacy on-premises applications (many of which run on VMware-powered infrastructure, by the way) and strict compliance regulations limit the adoption rate. The “lift and shift” approach is usually frowned upon, but many companies have no other alternative: the best thing they can dream of is a way of making their applications work the same way in every environment, both on-premises and in any of the existing clouds.

Last but not least, the current state of cloud security leaves a lot to be desired, as numerous data breaches and embarrassing hacks of even the largest enterprises indicate. Even though cloud service providers are working hard to offer numerous security tools for their customers, implementing and managing dozens of standalone agents and appliances without leaving major gaps between them is a challenge few companies can master.

This is what VMware’s new vision is aiming at: offering an integrated platform for developing, running and securing business applications that work consistently on every on-premises or mobile device and in every major cloud, with consistent proactive security built directly into this unified platform instead of being bolted onto it in many places. VMware’s own infrastructure technologies, which can now run natively on the AWS and Azure clouds, combined with Pivotal’s Kubernetes-powered application platform and Carbon Black’s cloud-native security analytics, which can now monitor every layer of the computing stack, are expected to provide an integrated foundation for such a platform in the very near future.

How quickly and consistently VMware will be able to deliver on this promise remains to be seen, of course. Hopefully, third time’s a charm! 

Passwordless for the Masses

What an interesting coincidence: I’m writing this just after finishing a webinar where we talked about the latest trends in strong authentication and the ways to eliminate passwords within an enterprise. Well, this could not have been a better time for the latest announcement from Microsoft, introducing Azure Active Directory support for passwordless sign-in using FIDO2 authentication devices.

Although most people agree that passwords are no longer an even remotely adequate authentication method for the modern digital and connected world, somehow the adoption of more secure alternatives is still quite underwhelming. For years, security experts have warned about compromised credentials being the most common cause of data breaches, pointing out that just by enabling multi-factor authentication, companies could prevent 99% of all identity-related attacks. Major online service providers like Google or Microsoft have been offering this option for years already. The number of vendors offering various strong authentication products, ranging from hardware-based one-time password tokens to various biometric methods to simple smartphone apps, is staggering – surely there is a solution for any use case on the market today…

Why then are so few individuals and companies using MFA? What are the biggest reasons preventing its universal adoption? Arguably, it all boils down to three major perceived problems: high implementation costs, poor user experience, and lack of interoperability between all those existing products. Alas, having too many options does not encourage wider adoption – if anything, it has the opposite effect. If an organization wants to provide consistently strong authentication experiences to users of different hardware platforms, application stacks, and cloud services, they are forced to implement multiple incompatible solutions in parallel, driving costs and administration efforts up, not down.

The FIDO Alliance was founded back in 2013, promising to establish certified interoperability among various strong authentication products. KuppingerCole has been following its developments closely ever since, even awarding the Alliance twice with our EIC Awards for the Best Innovation and Best Standard project. Unfortunately, adoption of FIDO-enabled devices was far from universal and mostly limited to individuals, although large-scale consumer-oriented projects supported by vendors like Samsung, Google or PayPal succeeded. Lack of consistent support for the standard in browsers restricted its popularity even further.

Fast forward to early 2019, however, and the second version of the FIDO specification has been adopted as a W3C standard, ensuring its consistent support in all major web browsers, as well as in Windows 10 and Android platforms. The number of online services that support FIDO2-based strong authentication is now growing much faster than in previous years and yet, many experts would still argue that the standard is too focused on consumers and not a good fit for enterprise deployments.

Well, this week, Microsoft has announced that FIDO2 security devices are now supported in Azure Active Directory, meaning that any Azure AD-connected application or service can immediately benefit from this secure, standards-based and convenient experience. Users can now authenticate themselves using a Yubikey or any other compatible security device, the Microsoft Authenticator mobile app, or the native Windows Hello framework.

With Azure Active Directory being the identity platform behind Microsoft’s own cloud services like Office 365 and Azure Cloud, as well as one of the most popular cloud-based IAM services for numerous 3rd party applications, can this be any more “enterprise-y”?

We realize that the service is still in the preview stage, so there are still a few kinks to iron out, but in the end, this announcement may be the final push for many companies that were considering adopting some form of modern strong authentication but were wary of the challenges mentioned earlier. Going fully passwordless is not something that can be achieved in a single step, but Microsoft has made it even easier now, with more traditional MFA options and even support for legacy apps still available when needed.

And, of course, this could be a major boost for FIDO2 adoption in the enterprise world, which we can only wholeheartedly welcome.
