Blog posts by Alexei Balaganski

Do You Need a Chief Artificial Intelligence Officer?

Well, if you ask me, the short answer is – why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration: what would a CAIO’s job in your organization actually be?

There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.

Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?

Sure, there are companies around the world that have made AI a major part of their core business. For cloud service providers, business intelligence vendors, or large manufacturing and logistics companies, AI is a core area of expertise or even a revenue-generating product. For the rest of us, however, AI is just another toolkit, powerful and convenient, to address specific business challenges.

Whether your goal is to improve the efficiency of your marketing campaigns, to optimize equipment maintenance cycles, or to make your IT infrastructure more resilient against cyberattacks – a sensible strategy to achieve such a goal never starts with picking a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person is inevitably going to treat any problem as if it were a nail.

This, of course, by no means implies that companies should not hire AI specialists. However, just like AI itself was never intended to replace humans, “embracing AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple of years ago, adding a blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations, and it finally seems that the fad is transitioning to the productive phase, at least in those usage scenarios where the lack of reliable methods of establishing distributed trust was indeed a business challenge.

Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of those lie very much outside of the technology scope and come with social, ethical or legal implications.

In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts that can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (probably different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other and that you avoid wasting time and resources. In other words: Think Big, Start Small, Learn Fast.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

Meet the Next-Generation Oracle

Oracle OpenWorld 2019 wrapped up yesterday, and if there is a single word that can describe my impressions of it, it would be “different”. Immediately noticeable was the absence of the traditional Oracle Red spilling into the streets around the Moscone Center in San Francisco, and the reason behind it is the new corporate design system called Redwood. You can already see its colors and patterns applied to the company’s website, but more importantly, it defines new UI controls for Oracle applications and cloud services.

Design, however, is far from Oracle’s biggest change. It appears that the company has finally reached the stage where a radical cultural shift is inevitable. To adapt to the latest market challenges and to extend its reach to new customer demographics, Oracle needs to seriously reconsider many of its business practices, just like Microsoft did years ago. And looking at the announcements of this year’s OOW, the company is already making major strides in the right direction.

It’s an open secret that for years, Oracle has been struggling to position itself as one of the leading cloud service providers. Unfortunately, for a latecomer to this market, playing catch-up with more successful competitors is always a losing strategy. It took the company some time to realize that, and now Oracle is trying a different game: learning from others’ mistakes, understanding the challenges and requirements of modern enterprises, and in the end offering a lean, yet complete stack of cloud services that provide the highest level of performance, comprehensive security and compliance controls and, last but not least, intelligent automation for any business process.

The key concept in this vision for Oracle is “autonomy”. To eliminate human labor from cloud management is to eliminate human error, the most common cause of data breaches. Last year, we saw the announcement of the self-patching and self-tuning Autonomous Database. This time, Autonomous Linux was presented – an operating system that can update itself (including kernel patches) without downtime. It seems that the company’s strategic vision is to make every service in its cloud autonomous in the same sense. Combined with the Generation 2 cloud infrastructure designed specifically to eliminate many network-based attack vectors, this lends additional weight to Oracle’s claim of having a cloud ready to run the most business-critical workloads.

Also announced was Oracle Data Safe, a cloud-based service that improves Oracle database security by identifying risky configurations, users, and sensitive data, allowing customers to closely monitor user activities and ensure data protection and compliance for their cloud databases. Oracle cloud databases now include a straightforward, easy-to-use, and free service that helps customers protect their sensitive data from security threats and compliance violations.

It is also worth noting that the company is finally starting to think “outside of the box” with regard to its business strategy as well – or rather, outside of the “Oracle ecosystem” bubble. Strategic partnerships with Microsoft (to establish low-latency interconnections between Azure and Oracle Cloud datacenters) and VMware (to allow businesses to lift and shift their entire VMware stacks to the cloud while maintaining full control over them, which is impossible in other public clouds) demonstrate this major paradigm shift in the company’s cloud roadmap.

Arguably even more groundbreaking is the introduction of the new Always Free tier for cloud services – which is exactly what it says on the tin: an opportunity for every developer, student, or even corporate IT worker to use Autonomous Databases, virtual machines, and other core cloud infrastructure services for an unlimited time. Of course, the offer is restricted by allocated resources, but all functional benefits are still there, and not just for testing. Hopefully, Oracle will soon start promoting these tools outside of Oracle events as well. Seen any APEX evangelists around recently?

The Best Security Tool Is Your Own Common Sense

Earlier this week, Germany’s Federal Office for Information Security (better known as the BSI) released its Digital Barometer 2019 (in German), a public survey of private German households measuring their opinions on and experience with matters of cybersecurity. Looking at the results, one cannot but admit that they do not look particularly inspiring and that they probably represent the average situation in any other developed country…

According to the study, every fourth respondent has been a victim of cybercrime at least once. The most common incidents include online shopping fraud, phishing attacks, and viruses. A further 30% of participants have expressed strong concerns, believing that the risk of becoming such a victim is very high for them. Somewhat unsurprisingly, these concerns do not translate into consistent protection measures: only 61% of surveyed users have an antivirus program installed, less than 40% update their computers regularly, and only 5% opt for such “advanced” technologies as a VPN.

I’m not entirely sure, by the way, how to interpret these results. Did BSI count users running Windows and thus having a very decent antivirus installed by default as protected? And what about iPhone owners who are not given any opportunity to secure their devices even if they wished to do so? Also, it’s quite amusing that the creators of the survey consider email encryption a useful cybersecurity measure. Even weirder is the inclusion of regular password change (a practice that has long been proven useless and is no longer recommended by NIST, for example) but a notable lack of any mentions of multi-factor authentication.

More worrying statistics, however, show that although the absolute majority of users have strong concerns about their online safety, very few actually consider themselves sufficiently informed about the latest developments in this area and even fewer actually implement those recommendations.

The results also clearly indicate that victims of cybercrime have little faith in the authorities and mostly deal with the consequences themselves or turn to friends and family. Less than a third of such crimes end up reported to the police, which means that we should take the official cybercrime statistics (which incidentally show that the rate of such crimes in Germany grew 8% last year) with a grain of salt – the real number might be much higher.

The rest of the report talks about various measures the government, BSI and police should develop to tackle the problem, but I don’t think that many users will see any notable changes in that regard: their online safety is still largely their own concern… So, what recommendations could KuppingerCole give them?

  • Do not blindly spend money on security tools without understanding your risks and how those tools can (or cannot) mitigate them. Most home users do not really need another antivirus or firewall – the ones built into Windows are quite good already. Corporate users, however, require an efficient, multi-level security approach, so defining a tailored security portfolio is an important challenge for them.
  • In fact, investing in a reliable off-site backup solution would make much more sense: even if your device is compromised and your files are destroyed by ransomware, you could always restore them quickly. A good backup will also protect from many other risks and prevent you from losing an important document to simple negligence or a major natural disaster. And by the way: Dropbox and Google Drive are not backup solutions.
  • Activating multi-factor authentication for your online services will automatically protect you from 99% of hackers and fraudsters. It is crucial to do it consistently: not just for your online banking, but for email and social media platforms as well. By making your accounts much harder to hijack, you’re protecting not just yourself, but your online friends too.
  • Quite frankly, the best security tool is your own common sense. Checking a suspicious-looking email for obvious indicators of fraud, or asking your colleague whether they actually used an obscure website to send you an urgent document before opening it: in most cases, this simple vigilance will help you more than any antivirus or firewall.
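Incidentally, the “authenticator app” flavor of MFA mentioned above requires remarkably little machinery. As an illustration, here is a minimal sketch of the TOTP algorithm (RFC 6238) that apps like Authy implement, using only the Python standard library (not production code, of course – real apps add secret provisioning, rate limiting, and clock-drift handling):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32-encoded), T = 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))    # → 287082
```

The server and the app simply run the same computation over a shared secret and the current time, which is why the codes work offline.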

For more complicated security-related questions, you can always talk to us!

Facebook Breach Leaves Half a Billion Users Hanging on the Line

It seems that there is simply no end to a long series of Facebook’s privacy blunders. This time, a security researcher has stumbled upon an unprotected server hosting several huge databases containing phone numbers of 419 million Facebook users from different countries. Judging by the screenshot included in an article by TechCrunch, this looks like another case of a misconfigured MongoDB server exposed to the Internet without any access controls. Each record in those databases contains a Facebook user’s unique ID that can be easily linked to an existing profile, along with that user’s phone number. Some also contained additional data like name, gender or location.

Facebook has denied that it has anything to do with those databases, and there is no reason to doubt that; the sheer negligence of the case rather points to a third party lacking even basic security competence, perhaps a former Facebook marketing partner. This is far from the first case of user data being harvested off Facebook by unscrupulous third parties, perhaps the biggest being the notorious Cambridge Analytica scandal of early 2018. After that, Facebook disabled access to users’ phone numbers for all its partners, so the data leaked this time is perhaps not the most current.

Still, the huge number of affected users and the company’s apparent inability to find any traces of the perpetrators clearly indicate that Facebook hasn’t done nearly enough to protect its users’ privacy in recent times. Until any further details emerge, we can only speculate about the leak itself. What we can do today, however, is try to figure out what users can possibly do to protect themselves from this leak and to minimize the impact of similar data breaches in the future.

First of all, the most common advice “don’t give your phone number to Facebook and the likes” is obviously not particularly helpful. Many online messaging services (like WhatsApp or Telegram) use phone numbers as the primary user identities and simply won’t work without them. Others (like Google, Twitter or even your own bank) rely on phone numbers to perform two-factor authentication. Second, for hundreds of millions of people around the world, this advice comes too late – their numbers are already at the disposal of spammers, hackers, and other malicious actors. And those guys have a few lucrative opportunities to exploit them…

Besides the obvious use of these phone numbers for unsolicited advertising, they can be used to expose people who use pseudonyms on social media and link those accounts to real people – for suppressing political dissent or simply to further improve online user tracking. Alas, the only sensible method of preventing this breach of privacy is to use a separate, dedicated phone number for your online services, which can be cumbersome and expensive (not to mention that it would have had to be done before the leaks!).

Unfortunately, in some countries (including the USA), leaked phone numbers can also be used for SIM swap attacks, where a fraudster tricks a mobile operator into issuing them a new SIM card with the same number, effectively taking full control over your “mobile identity”. With that card, they can pose as you in a phone call, intercept text messages with one-time passwords and thus easily take over any online service that relies on your mobile number as the means of authentication.

Can users do anything to prevent SIM swap attacks? Apparently not, at least not until mobile operators are forced by governments to collaborate with police and banks on fighting this type of fraud. Again, the only sensible way to minimize its impact is to move away from phone-based (not-so-)strong authentication methods and adopt a more modern MFA solution: for example, invest in a FIDO2-based hardware key like a YubiKey, or at least switch to an authenticator app like Authy. And if your bank still offers no alternative to SMS OTP, maybe today is the right time to switch to another bank.

Remember, in the modern digital world, your phone number is the key to your online identity. Keep it secret, keep it safe!

Security Vendor Imperva Reports a Breach

Imperva, a US-based cybersecurity company known for its web application security and data protection products, has disclosed a breach of their customer data. According to the announcement, a subset of the customers for its cloud-based Web Application Firewall solution (formerly known as Incapsula) had their data exposed, including their email addresses, password hashes, API keys, and SSL certificates.

Adding insult to injury, this breach seems to be of the worst kind: it happened long ago, probably in September 2017, and went unnoticed until a third party notified Imperva a week ago. Even though the investigation is still ongoing and not many details have been revealed yet, the company did the right thing by providing a prompt full disclosure along with recommended security measures.

Still, what can we learn or at least guess from this story? First and foremost, even the leading cybersecurity vendors are not immune (or should I say, “impervious”?) to hacking and data breaches, not only exposing their own corporate infrastructures and sensitive data, but also creating unexpected attack vectors for their customers. This is especially critical for SaaS-based security solutions, where a single data leak may give a hacker convenient means to attack multiple other companies using the service.

More importantly, however, this highlights the critical importance of having monitoring and governance tools in place in addition to traditional protection-focused security technologies. After all, having an API key for a cloud-based WAF gives a hacker ample opportunity to silently modify its policies, weakening or completely disabling protection of the application behind it. If the customer has no means of detecting these changes and reacting quickly, they will inevitably end up being the next target.

Having access to the customer’s SSL certificates opens even broader opportunities for hackers: application traffic can be exposed to various Man-in-the-Middle attacks or even silently diverted to a malicious third party for all kinds of misuse, from data exfiltration to targeted phishing attacks. Again, without specialized monitoring and detection tools in place, such attacks may go unnoticed for months (depending on how long your certificate rotation cycles are). Quite frankly, having your password hashes leaked feels almost harmless in comparison.
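To make the monitoring idea concrete: even a simple script that periodically compares the SHA-256 fingerprint of the certificate a server actually presents against a known-good (“pinned”) baseline would detect a silent certificate swap. A minimal sketch using only Python’s standard library follows; the hostname and pinned value are placeholders, and a real deployment would run this on a schedule and page someone on mismatch:

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def fetch_server_cert(host, port=443):
    """Retrieve the DER-encoded leaf certificate the server presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def check_pin(host, pinned_fingerprint):
    """Return False (alert-worthy) if the live certificate no longer matches the pin."""
    return cert_fingerprint(fetch_server_cert(host)) == pinned_fingerprint

# Hypothetical usage -- record your own baseline fingerprint first:
# if not check_pin("www.example.com", "3f6d..."):
#     alert_ops_team()  # placeholder for your actual alerting hook
```

This obviously does not replace a proper certificate transparency or change-detection service, but it illustrates how little effort a basic detective control requires compared to the cost of months of unnoticed diversion.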

So, does this mean that Imperva’s Cloud WAF should no longer be trusted at all? Of course not, but the company will surely have to work hard to restore its product’s reputation after this breach.

Does it mean that SaaS-based security products, in general, should be avoided? Again, not necessarily, but additional risks of relying on security solutions outside of your direct control must be taken into account. Alas, finding the right balance between complexity and costs of an on-premises solution vs. scalability and convenience of “security from the cloud” has just become even more complicated than it was last week.

The bottom line is that although 100% security is impossible to achieve even with a multi-layered security architecture, the only difference between a good and a bad strategy here is in properly identifying the business risks and investing in appropriate mitigation controls. However, without continuous monitoring and governance in place, you will inevitably end up finding out about a data breach long after it has occurred – and you’ll be extremely lucky if you learn about it from your security vendor and not from the morning news.

The ultimate security of an organization, and thus its residual risk, depends on the proper mix of complementary components within an IT security portfolio. Gaps in safeguarding of sensitive systems must be identified and eliminated. Functional overlaps and ineffective measures must give way to more efficient concepts. The KuppingerCole Analysts Portfolio Compass Advisory Services offers you support in the evaluation and validation of existing security controls in your specific infrastructure, with the aim of designing a future-proof and cost-efficient mix of measures. Learn more here or just talk to us.

VMware to Acquire Carbon Black and Pivotal, Aims at the Modern, Secure Cloud Vision

Last week, VMware announced its intent to acquire Carbon Black, one of the leading providers of cloud-based endpoint security solutions. This announcement follows earlier news about acquiring Pivotal, a software development company known for its Cloud Foundry cloud application platform, as well as Bitnami, a popular application delivery service. The combined value of these acquisitions would reach five billion dollars, so this looks like a major upgrade of VMware’s long-term strategy with regard to the cloud.

Looking back at the company’s 20-year history, one cannot but admit VMware’s enormous influence on the very foundation and development of cloud computing; yet its relationship with the cloud has been quite uneven. As a pioneer in hardware virtualization, VMware basically laid the technology foundation for scalable and manageable computing infrastructures, first in on-premises datacenters and later in the public cloud. Over the years, the company has dabbled in IaaS and PaaS services as well, but those attempts weren’t particularly successful: the Cloud Foundry platform was spun out as a separate company in 2013 (the very same Pivotal that VMware is about to buy back now!) and the vCloud Air service was sold off in 2017.

This time, however, the company seems quite resolute to try again. Why? What has changed in recent years that might give VMware another chance? Quite a lot, to be fair.

First of all, Cloud is no longer a buzzword: most businesses have already figured out its capabilities and potential limitations, outlined their long-term strategies and are now working on integrating cloud technologies into their business goals. Becoming cloud-native is no longer an answer to all problems; nowadays it always raises the next question: which cloud is good enough for us?

Second, developing modern applications, services, or other workloads specifically for the public cloud to fully unlock all its benefits is not an easy job: old-school development tools and methods, legacy on-premises applications (many of which run on VMware-powered infrastructure, by the way) and strict compliance regulations limit the adoption rate. The “lift and shift” approach is usually frowned upon, but many companies have no alternative: the best thing they can dream of is a method of making their applications work the same way in every environment, both on-premises and in any of the existing clouds.

Last but not least, the current state of cloud security leaves a lot to be desired, as numerous data breaches and embarrassing hacks of even the largest enterprises indicate. Even though cloud service providers are working hard to offer numerous security tools for their customers, implementing and managing dozens of standalone agents and appliances without leaving major gaps between them is a challenge few companies can master.

This is what VMware’s new vision is aiming at: offering an integrated platform for developing, running, and securing business applications that work consistently across every on-premises or mobile device and in every major cloud, with consistent proactive security built directly into this unified platform instead of being bolted onto it in many places. VMware’s own infrastructure technologies, which can now run natively on the AWS and Azure clouds, combined with Pivotal’s Kubernetes-powered application platform and Carbon Black’s cloud-native security analytics, which can now monitor every layer of the computing stack, are expected to provide an integrated foundation for such a platform in the very near future.

How quickly and consistently VMware will be able to deliver on this promise remains to be seen, of course. Hopefully, third time’s a charm! 

Passwordless for the Masses

What an interesting coincidence: I’m writing this just after finishing a webinar where we talked about the latest trends in strong authentication and the ways to eliminate passwords within an enterprise. Well, this could not have been a better time for the latest announcement from Microsoft, introducing Azure Active Directory support for passwordless sign-in using FIDO2 authentication devices.

Although most people agree that passwords are no longer an even remotely adequate authentication method for the modern digital and connected world, somehow the adoption of more secure alternatives is still quite underwhelming. For years, security experts have warned about compromised credentials being the most common cause of data breaches, pointing out that just by enabling multi-factor authentication, companies can prevent 99% of all identity-related attacks. Major online service providers like Google or Microsoft have been offering this option for years already. The number of vendors offering various strong authentication products, ranging from hardware-based one-time password tokens to various biometric methods to simple smartphone apps, is staggering – surely there is a solution for any use case on the market today…

Why then are so few individuals and companies using MFA? What are the biggest reasons preventing its universal adoption? Arguably, it all boils down to three major perceived problems: high implementation costs, poor user experience, and lack of interoperability between all those existing products. Alas, having too many options does not encourage wider adoption – if anything, it has the opposite effect. If an organization wants to provide consistently strong authentication experiences to users of different hardware platforms, application stacks, and cloud services, they are forced to implement multiple incompatible solutions in parallel, driving costs and administration efforts up, not down.

The FIDO Alliance was founded back in 2013, promising to establish certified interoperability among various strong authentication products. KuppingerCole has been following its developments closely ever since, even awarding the Alliance twice with our EIC Awards for the Best Innovation and Best Standard project. Unfortunately, adoption of FIDO-enabled devices was not particularly universal and was mostly limited to individuals, although large-scale consumer-oriented projects supported by vendors like Samsung, Google or PayPal succeeded. Lack of consistent support for the standard in browsers restricted its popularity even further.

Fast forward to early 2019, however, and the second version of the FIDO specification has been adopted as a W3C standard, ensuring its consistent support in all major web browsers, as well as in Windows 10 and Android platforms. The number of online services that support FIDO2-based strong authentication is now growing much faster than in previous years and yet, many experts would still argue that the standard is too focused on consumers and not a good fit for enterprise deployments.

Well, this week, Microsoft has announced that FIDO2 security devices are now supported in Azure Active Directory, meaning that any Azure AD-connected application or service can immediately benefit from this secure, standards-based and convenient experience. Users can now authenticate themselves using a Yubikey or any other compatible security device, the Microsoft Authenticator mobile app, or the native Windows Hello framework.

With Azure Active Directory being the identity platform behind Microsoft’s own cloud services like Office 365 and Azure Cloud, as well as one of the most popular cloud-based IAM services for numerous 3rd party applications, can this be any more “enterprise-y”?

We realize that the service is still in the preview stage, so there are still a few kinks to iron out, but in the end, this announcement may be the final push for many companies that were considering adopting some form of modern strong authentication but were wary of the challenges mentioned earlier. Going fully passwordless is not something that can be achieved in a single step, but Microsoft has made it even easier now, with more traditional MFA options and even support for legacy apps still available when needed.

And, of course, this could be a major boost for FIDO2 adoption in the enterprise world, which we can only wholeheartedly welcome.

API Security in Microservices Architectures

Microservice-based architectures allow businesses to develop and deploy their applications in a much more flexible, scalable and convenient way – across multiple programming languages, frameworks and IT environments. Like with any other new technology that DevOps and security teams have started to explore in recent years, there is still quite a lot of confusion about the capabilities of the new platforms, misconceptions about new attack vectors and renewed discussions about balancing security with the pace of innovation. And perhaps the biggest myth about microservices is that their security somehow takes care of itself.

Let’s get this thing out of the way first: microservices on their own are nothing more than a method of designing applications as an interconnected system of loosely coupled business-focused components. There is nothing inherent to microservices that would make them more resilient against cyber threats or prevent sensitive data from being stolen. On the contrary, microservice-based architectures rely on new tools and technologies, and those bring in new security challenges and new skills needed to mitigate them efficiently.

In fact, even if we disregard the “architectural” risks of microservices, like cascading failures or service discovery abuse, we have to agree that a modern loosely coupled application is subject to the same risks as a traditional monolithic one – ranging from low-level infrastructure exploits to the communication layer and all the way up to attacks targeting the application users. And perhaps no other attack vector is more critical than APIs.

As we have discussed in a recent KuppingerCole webinar, even in more traditional scenarios, API security is still something that many businesses tend to underestimate and neglect, hoping that existing tools like web application firewalls will be sufficient to secure their business APIs. Unfortunately, this could not be further from the truth – APIs are subject to numerous risks that can only be successfully mitigated with a properly designed strategy that covers the whole API lifecycle – even before any code is written, let alone deployed to a backend.

In microservice-based applications, where hundreds of individual microservices are communicating with each other and with the outside world exclusively through APIs, the difficulty of securing all those interactions increases exponentially. Due to the nature of these applications, individual API endpoints become ephemeral, appearing as new containers are spun up, migrating between environments and disappearing again. And yet each of them must be secured by proper access control, threat protection, input validation, bot mitigation, and activity monitoring solutions – all those jobs which are typically performed by an API gateway. How many API gateways would you need for that?

Another challenge of microservice-based architectures is their diversity – when individual microservices are written using different development frameworks and deployed to different platforms, providing consistent authentication and authorization becomes a problem – ensuring that all components agree on a common access rights model, that they understand the same access token format, that this token exchange scales properly, and that sensitive attributes flowing between services are not exposed to the outside world. The same considerations apply to network-level communications: isolation, segmentation, traffic encryption - these are just some issues developers have to think about. Preferably, in advance.
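The “common token format” problem can be illustrated with a deliberately simplified sketch: every service validates the same signed token with the same shared key, so all components agree on one access rights model. This hand-rolled format and the hard-coded key are illustrative only – a real deployment would use standard JWTs issued by a central identity provider:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret, distributed to services out of band.
SHARED_KEY = b"demo-key-distributed-out-of-band"

def issue_token(claims: dict, key: bytes = SHARED_KEY) -> str:
    """Serialize claims and append an HMAC-SHA256 signature
    (a stand-in for a JWT issued by an identity provider)."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str, key: bytes = SHARED_KEY):
    """Every microservice runs this same check, so all of them
    understand the same token format. Returns claims or None."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: reject the request
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims.get("exp", 0) < time.time():
        return None  # token expired
    return claims

token = issue_token({"sub": "orders-service", "exp": time.time() + 60})
claims = verify_token(token)
```

Note what the sketch glosses over: key distribution and rotation, token exchange between trust domains, and keeping sensitive claims from leaking outside – precisely the parts that get hard at scale.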

Does all this mean that making microservices secure is so much of a hassle that it undoes all the speed and convenience of the architecture? Not at all, but the key point here is that you need to do it the right way from the very beginning of your microservice journey. And luckily, you do not have to walk alone – others have faced the same challenges, and many have already figured them out. Some have even come up with convenient tools and frameworks that will take care of these problems for you.

Consider modern API security solutions that do not just focus on static infrastructure, but cover everything from proactive risk assessment of your API contracts to ensuring that each of your microservices is secured by a tiny centrally managed API microgateway. Or the protocols and standards designed specifically for microservices like Secure Production Identity Framework for Everyone (SPIFFE) – essentially the “next-gen PKI” for dynamic heterogeneous software systems. Or even full-featured service mesh implementations that provide a control and security foundation for your microservices – reinventing the wheel is the last thing you need to think about.
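To see what those frameworks automate, consider the mutual-TLS policy that a SPIFFE-style identity layer or a service mesh sidecar enforces on every connection: each service presents its own certificate and demands a valid one from its peer. A sketch with Python’s standard `ssl` module (certificate issuance and file paths are deployment-specific and omitted here):

```python
import ssl

def harden_for_mtls(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Require TLS 1.2+ and a valid peer certificate on every
    connection -- the policy a mesh sidecar enforces automatically."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a cert
    return ctx

# In a real deployment each service would also load its own identity
# (ctx.load_cert_chain) and the mesh CA bundle (ctx.load_verify_locations);
# SPIFFE automates exactly this issuance and rotation of short-lived certs.
ctx = harden_for_mtls(ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
```

Doing this by hand for hundreds of ephemeral services – including rotating every certificate – is the tedious part these frameworks exist to take off your plate.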

In fact, the only thing you absolutely must do yourself is to keep an open mind and never stop learning – about the recent technologies and tools, about the newest design patterns and best practices, and, of course, about the latest cyber threats and other risks. Needless to say, we are here to support you on this journey. See you at one of our upcoming events!

Oops, Google Did It Again!

Like many people with a long career in IT, I have numerous small computer-related side duties I’m supposed to perform for my less tech-savvy friends and relatives. Among those, I’m helping a friend manage a G Suite account for his small business. Needless to say, I was a bit surprised to receive an urgent e-mail alert from Google yesterday, telling me that several users in that G Suite domain were impacted by a password storage problem.

Turns out, Google has just discovered that they’ve accidentally stored some of those passwords unencrypted, in plain text. Apparently, this problem can be traced back to a bug in the G Suite admin console, which has been around since 2005 (which, if I remember correctly, predates not just the “G Suite” brand, but the whole idea of offering Google services for businesses).

Google is certainly not the first large technology vendor caught violating one of the most basic security hygiene principles – just a couple of months earlier, we heard the same story about Facebook. I’m pretty sure they won’t be the last, either – with the ever-growing complexity of modern IT infrastructures and the abundance of legacy IAM systems and applications, how can you be sure you don’t have a similar problem somewhere?

In Google’s case, the problem wasn’t even in their primary user management and authentication frameworks – it only affected the management console where admins typically create new accounts and then distribute credentials to their users. Including the passwords in plain text. In theory, this means that a rogue account admin could have had access to other users’ accounts without their knowledge, but that’s a problem that goes way beyond just e-mail…
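What should that storage have looked like instead? The basic hygiene principle is to store only a salted, slow hash of each password, never the password itself. A minimal sketch with Python’s standard library (the iteration count is illustrative; real systems should follow current key-derivation guidance and use a vetted library):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash -- this pair is what
    should be stored, instead of the plain-text password."""
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

With this scheme, even an admin with full database access sees only salts and digests – nothing that can be handed out, or leaked, in plain text.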

So, what can normal users do to protect themselves from this bug? Not much, actually – according to the mail from the G Suite team, they will be forcing a password reset for every affected user as well as terminating all active user sessions starting today. Combined with fixing the vulnerability in the console, this should prevent further potential exploits. 

However, considering the number of similar incidents with other companies, this should be another compelling reason for everyone to finally activate Multi-Factor Authentication for each service that supports it, including Google. Anyone who is already using a reliable MFA method – ranging from smartphone apps like Google Authenticator to FIDO2-based Google Security Keys – is automatically protected from this kind of credential abuse. Just don’t use SMS-based one-time passwords, ok? They were compromised years ago and should not be considered secure anymore.
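For the curious: the one-time codes produced by authenticator apps come from an openly standardized and remarkably simple algorithm, TOTP (RFC 6238) – an HMAC over the current 30-second time counter, truncated to a few digits. A sketch using only Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238), as generated by
    authenticator apps. `secret` is the key shared during enrollment."""
    counter = int(for_time if for_time is not None else time.time()) // step
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Running this against the RFC 6238 test vector (ASCII secret `12345678901234567890`, time 59, eight digits) yields `94287082`. Note why this beats SMS: the shared secret never travels over the phone network, so it cannot be intercepted in transit.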

As for the service providers themselves – how do you even start protecting sensitive information under your control if you do not know all the places it can be stored? A comprehensive data discovery and classification strategy should be the first step towards knowing what needs to be protected. Without it, both large companies like Google and smaller ones – like the one that just leaked 50 million Instagram account details – will remain not just subjects of sensationalized press coverage, but constant targets for lawsuits and massive fines for compliance violations.

Remember, the rumors of the password’s death are greatly exaggerated – and protecting these highly insecure but so utterly convenient bits of sensitive data is still everyone’s responsibility.

Artificial Intelligence in Cybersecurity: Are We There Yet?

Artificial Intelligence (along with Machine Learning) seems to be the hottest buzzword in just about every segment of the IT industry nowadays, and not without reason. The very idea of teaching a machine to mimic the way humans think (but much, much quicker) without the need to develop millions of complex rules sounds amazing: instead, machine learning models are simply trained by feeding them large amounts of carefully selected data.

There is, however, a subtle but crucial distinction between “thinking like a human” (which in academic circles is usually referred to as “Strong AI” and to this day remains largely a philosophical concept) and “performing intellectual tasks like a human”, which is the gist of Artificial General Intelligence (AGI). The latter is an active research field with dozens of companies and academic institutions working on various practical applications of general AI. Much more prevalent, however, are the applications of Weak Artificial Intelligence or “Narrow AI”, which can only be trained to solve a single and rather narrow task – like language processing or image recognition.

Although the theoretical foundations of machine learning go back to the 1940s, it is only recently that a massive surge in available computing power, thanks to cloud services and specialized hardware, has made it accessible to everyone. Thousands of startups are developing AI-powered solutions for various problems. Some of those, like intelligent classification of photos or virtual voice assistants, are already an integral part of our daily lives; others, like driverless cars, are expected to become reality in a few years.

AIs are already beating humans at games and even in public debates – surely they will soon replace us in other important fields, like cybersecurity? Well, this is exactly where reality often fails to match customer expectations fueled by the intense hype wave that still surrounds AI and machine learning. Looking at various truly amazing AI applications developed by companies like Google, IBM or Tesla, some customers tend to believe that sooner or later AIs are going to replace humans completely, at least in some less creative jobs.

When it comes to cybersecurity, it’s hard to blame them, really… As companies go through the digital transformation, they are facing new challenges: growing complexity of their IT infrastructures, massive amounts of sensitive data spread across multiple clouds, and the increasing shortage of skilled people to deal with them. Even large businesses with strong security teams cannot keep up with the latest cybersecurity risks.

Having AI as a potential replacement for overworked humans to ensure that threats and breaches are detected and mitigated in real time without any manual forensic analysis and decision-making – that would be awesome, wouldn’t it? Alas, people waiting for solutions like that need a reality check.

First, artificial intelligence, at least in its practical definition, was never intended to replace humans, but rather to augment their powers by automating the most tedious and boring parts of their jobs and leaving more time for creative and productive tasks. Upgrading to AI-powered tools from traditional “not-so-smart” software products may feel like switching from pen and paper to a computer, but both just provide humans with better, more convenient tools to do their job faster and with less effort.

Second, even leaving all potential ethical consequences aside, there are several technological challenges that need to be addressed specifically for the field of cybersecurity.

  • Availability and quality of the data required for training cybersecurity-related ML models. This data almost always contains massive amounts of sensitive information – intellectual property, PII or otherwise strictly regulated data – which companies aren’t willing to share with security vendors.
  • Formal verification and testing of machine learning models is a massive challenge of its own. Making sure that an AI-based cybersecurity product does not misbehave under real-world conditions (or indeed under adversarial examples specifically crafted to deceive ML models) is something that vendors are still figuring out, and in many cases, this is only possible through a collaboration with customers.
  • While in many applications it’s perfectly fine to train a model once and then use it for years, the field of cybersecurity is constantly evolving, and threat models must be continuously updated, expanded and retrained on newly discovered threats.

Does it mean that AI cannot be used in cybersecurity? Not at all, and in fact, the market is already booming, with numerous AI/ML-powered cybersecurity solutions available right now – the solutions that aim to offer deeper, more holistic real-time visibility into the security posture of an organization across multiple IT environments; to provide intelligent assistance for human forensic analysts by making their job more productive; to help identify previously unknown threats. In other words, to augment but definitely not to replace humans!

Perhaps the most popular approach is applying Big Data Analytics methods to raw security data for detecting patterns or anomalies in network traffic flows, application activities or user behavior. This method has led to the creation of whole new market segments variously referred to as security intelligence platforms or next-generation SIEM. These tools manage to reduce the number of false positives and other noise generated by traditional SIEMs and provide a forensic analyst with a low number of context-enriched alerts ranked by risk scores and often accompanied by actionable mitigation recommendations.
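The core idea behind such baselining can be shown in miniature: learn what “normal” activity looks like, then flag deviations. A toy statistical sketch (real platforms use far richer models and features than a simple z-score over login counts, which is an assumption made here purely for illustration):

```python
import statistics

def flag_anomalies(hourly_logins: list, threshold: float = 3.0) -> list:
    """Flag the indices of hours whose login volume deviates from the
    mean by more than `threshold` standard deviations -- a toy version
    of the baselining that security analytics platforms apply to
    network traffic, application activity and user behavior."""
    mean = statistics.mean(hourly_logins)
    stdev = statistics.pstdev(hourly_logins) or 1.0  # avoid div by zero
    return [i for i, v in enumerate(hourly_logins)
            if abs(v - mean) / stdev > threshold]

# A stable baseline of ~12 logins per hour, with one suspicious spike:
anomalies = flag_anomalies([12, 14, 11, 13] * 5 + [95])
```

Here only the spike at index 20 is flagged; the normal fluctuations stay below the threshold. Reducing thousands of raw events to a handful of ranked, context-enriched alerts is exactly the noise reduction these platforms promise.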

Another class of AI solutions for cybersecurity is based around true cognitive technologies – such as language processing and semantic reasoning. Potential applications include generating structured threat intelligence from unstructured textual and multimedia data (ranging from academic research papers to criminal communications on the Dark Web), proactive protection against phishing attacks or, again, intelligent decision support for human experts. Alas, we have yet to see sufficiently mature products of this kind on the market.

It’s also worth noting that some vendors are already offering products bearing the “autonomous” label. However, customers should take such claims with a pinch of salt. Yes, products like the Oracle Autonomous Database or Darktrace’s autonomous cyber-defense platform are based on AI and are, to a degree, capable of automated mitigation of various security problems, but they are still dependent on their respective teams of experts ready to intervene if something does not go as planned. That’s why such solutions are only offered as a part of a managed service package – even the best “autonomous AIs” still need humans from time to time…

So, is Artificial Intelligence the solution for all current and future cybersecurity challenges? Perhaps, but please do not let over-expectations or fears affect your purchase decisions. Thanks to the ongoing developments both in narrow and general AI, we already have much better security tools than we had just a few years ago. Yet, when planning your future security strategy, you still must think in terms of risks and the capabilities needed to mitigate them, not in terms of technologies.

Also, don’t forget that cybercriminals can use AI to create better malware, too. In fact, things are just starting to get interesting!
