Blog posts by Alexei Balaganski

Oops, Google Did It Again!

Like many people with a long career in IT, I have numerous small computer-related side duties I’m supposed to perform for my less skilled friends and relatives. Among them is managing a G Suite account for a friend’s small business. Needless to say, I was a bit surprised to receive an urgent e-mail alert from Google yesterday, telling me that several users in that G Suite domain were affected by a password storage problem.

Turns out, Google has just discovered that they’ve accidentally stored some of those passwords unencrypted, in plain text. Apparently, this problem can be traced back to a bug in the G Suite admin console, which has been around since 2005 (which, if I remember correctly, predates not just the “G Suite” brand, but the whole idea of offering Google services for businesses).

Google is certainly not the first large technology vendor caught violating one of the most basic principles of security hygiene – just a couple of months earlier we heard the same story about Facebook. And I’m pretty sure they won’t be the last, either – with the ever-growing complexity of modern IT infrastructures and the abundance of legacy IAM systems and applications, how can you be sure you don’t have a similar problem somewhere?

In Google’s case, the problem wasn’t even in their primary user management and authentication frameworks – it only affected the management console where admins typically create new accounts and then distribute credentials to their users. Including the passwords in plain text. In theory, this means that a rogue account admin could have access to other users’ accounts without their knowledge, but that’s a problem that goes way beyond just e-mail…
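For contrast, here is what proper password storage looks like in principle – a minimal sketch in Python using only the standard library (obviously not Google’s actual implementation): the service keeps only a random per-user salt and a slow one-way hash, so it can verify a password but never display it.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a high count to slow down offline brute-force attempts

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these two values are ever stored."""
    salt = os.urandom(16)  # unique per user, so equal passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

An admin console built on top of a scheme like this physically cannot distribute credentials in plain text – which is exactly the point.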

So, what can normal users do to protect themselves from this bug? Not much, actually – according to the mail from the G Suite team, they will be forcing a password reset for every affected user as well as terminating all active user sessions starting today. Combined with fixing the vulnerability in the console, this should prevent further potential exploits. 

However, considering the number of similar incidents at other companies, this should be another compelling reason for everyone to finally activate Multi-Factor Authentication for each service that supports it, including Google. Anyone who is already using a reliable MFA method – ranging from smartphone apps like Google Authenticator to FIDO2-based Google Security Keys – is automatically protected from any kind of credential abuse. Just don’t use SMS-based one-time passwords, ok? They were compromised years ago and should no longer be considered secure.

As for the service providers themselves – how do you even begin to protect sensitive information under your control if you do not know all the places it can be stored? A comprehensive data discovery and classification strategy should be the first step towards knowing what needs to be protected. Without it, both large companies like Google and smaller ones like the company that just leaked 50 million Instagram account details will remain not just subjects of sensationalized press coverage, but also constant targets for lawsuits and massive fines for compliance violations.

Remember, the rumors of the password’s death are greatly exaggerated – and protecting these highly insecure but utterly convenient bits of sensitive data is still everyone’s responsibility.

Artificial Intelligence in Cybersecurity: Are We There Yet?

Artificial Intelligence (along with Machine Learning) seems to be the hottest buzzword in just about every segment of the IT industry nowadays, and not without reason. The very idea of teaching a machine to mimic the way humans think (but much, much quicker) without the need to develop millions of complex rules sounds amazing: instead, machine learning models are simply trained by feeding them with large amounts of carefully selected data.

There is, however, a subtle but crucial distinction between “thinking like a human” (which in academic circles is usually referred to as “Strong AI” and to this day remains largely a philosophical concept) and “performing intellectual tasks like a human”, which is the gist of Artificial General Intelligence (AGI). The latter is an active research field with dozens of companies and academic institutions working on various practical applications of general AI. Much more prevalent, however, are applications of Weak Artificial Intelligence or “Narrow AI”, which can only be trained to solve a single and rather narrow task – like language processing or image recognition.

Although the theoretical foundations of machine learning go back to the 1940s, only recently has a massive surge in available computing power – thanks to cloud services and specialized hardware – made it accessible to everyone. Thousands of startups are developing AI-powered solutions for various problems. Some of those, like intelligent classification of photos or virtual voice assistants, are already an integral part of our daily lives; others, like driverless cars, are expected to become reality in a few years.

AIs are already beating humans at games and even in public debates – surely they will soon replace us in other important fields, like cybersecurity? Well, this is exactly where reality often fails to match customer expectations fueled by the intense hype wave that still surrounds AI and machine learning. Looking at various truly amazing AI applications developed by companies like Google, IBM or Tesla, some customers tend to believe that sooner or later AIs are going to replace humans completely, at least in some less creative jobs.

When it comes to cybersecurity, it’s hard to blame them, really… As companies go through the digital transformation, they are facing new challenges: growing complexity of their IT infrastructures, massive amounts of sensitive data spread across multiple clouds, and the increasing shortage of skilled people to deal with them. Even large businesses with strong security teams cannot keep up with the latest cybersecurity risks.

Having AI as a potential replacement for overworked humans – ensuring that threats and breaches are detected and mitigated in real time without any manual forensic analysis and decision-making – would be awesome, wouldn’t it? Alas, people waiting for solutions like that need a reality check.

First, artificial intelligence, at least in its practical definition, was never intended to replace humans, but rather to augment their powers by automating the most tedious and boring parts of their jobs and leaving more time for creative and productive tasks. Upgrading to AI-powered tools from traditional “not-so-smart” software products may feel like switching from pen and paper to a computer, but both just provide humans with better, more convenient tools to do their job faster and with less effort.

Second, even leaving all potential ethical consequences aside, there are several technological challenges that need to be addressed specifically for the field of cybersecurity.

  • Availability and quality of training data required for cybersecurity-related ML models. Such data almost always contains massive amounts of sensitive information – intellectual property, PII or otherwise strictly regulated data – which companies aren’t willing to share with security vendors.
  • Formal verification and testing of machine learning models is a massive challenge of its own. Making sure that an AI-based cybersecurity product does not misbehave under real-world conditions (or indeed under adversarial examples specifically crafted to deceive ML models) is something that vendors are still figuring out, and in many cases, this is only possible through a collaboration with customers.
  • While in many applications it’s perfectly fine to train a model once and then use it for years, the field of cybersecurity is constantly evolving, and threat models must be continuously updated, expanded and retrained on newly discovered threats.
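To see why the adversarial-examples point above is more than a theoretical concern, consider a toy linear “malware detector” (all weights and feature values below are invented for illustration): knowing the model, an attacker can nudge each feature slightly in the right direction and flip the verdict.

```python
def score(weights, features):
    """Linear detector: a positive score means 'malicious'."""
    return sum(w * x for w, x in zip(weights, features))

def fgsm_perturb(weights, features, eps):
    """FGSM-style attack on a linear model: move each feature by eps
    in the direction that lowers the detection score."""
    return [x - eps * (1 if w > 0 else -1) for w, x in zip(weights, features)]

w = [0.9, -0.4, 0.7]   # invented detector weights
x = [0.6, 0.2, 0.5]    # a sample the detector flags as malicious

x_adv = fgsm_perturb(w, x, eps=0.5)
print(score(w, x) > 0)      # True: original sample is detected
print(score(w, x_adv) > 0)  # False: slightly perturbed sample evades detection
```

Real ML models are nonlinear, but the underlying effect – small, targeted input changes producing large output changes – carries over, which is precisely what makes formal verification so hard.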

Does it mean that AI cannot be used in cybersecurity? Not at all, and in fact, the market is already booming, with numerous AI/ML-powered cybersecurity solutions available right now – the solutions that aim to offer deeper, more holistic real-time visibility into the security posture of an organization across multiple IT environments; to provide intelligent assistance for human forensic analysts by making their job more productive; to help identify previously unknown threats. In other words, to augment but definitely not to replace humans!

Perhaps the most popular approach is applying Big Data Analytics methods to raw security data for detecting patterns or anomalies in network traffic flows, application activities or user behavior. This method has led to the creation of whole new market segments variously referred to as security intelligence platforms or next-generation SIEM. These tools manage to reduce the number of false positives and other noise generated by traditional SIEMs and provide a forensic analyst with a low number of context-enriched alerts ranked by risk scores and often accompanied by actionable mitigation recommendations.
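At its core, this kind of analytics starts with very basic statistics on event streams. A deliberately naive sketch (real products use far richer models and context): flag any hour whose login count deviates from the series mean by more than a few standard deviations.

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of values more than `threshold` standard deviations
    away from the mean of the series."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly uniform series: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

hourly_logins = [12, 15, 11, 14, 13, 12, 260, 15]  # one suspicious spike
print(flag_anomalies(hourly_logins))  # [6] – the hour with 260 logins
```

The value the next-generation tools add lies not in the arithmetic but in enriching each flagged event with context – user, asset, threat intelligence – so that an analyst sees one ranked alert instead of a thousand raw ones.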

Another class of AI solutions for cybersecurity is based around true cognitive technologies – such as language processing and semantic reasoning. Potential applications include generating structured threat intelligence from unstructured textual and multimedia data (ranging from academic research papers to criminal communications on the Dark Web), proactive protection against phishing attacks or, again, intelligent decision support for human experts. Alas, we are yet to see sufficiently mature products of this kind on the market.

It’s also worth noting that some vendors are already offering products bearing the “autonomous” label. However, customers should take such claims with a pinch of salt. Yes, products like the Oracle Autonomous Database or Darktrace’s autonomous cyber-defense platform are based on AI and are, to a degree, capable of automated mitigation of various security problems, but they are still dependent on their respective teams of experts ready to intervene if something does not go as planned. That’s why such solutions are only offered as a part of a managed service package – even the best “autonomous AIs” still need humans from time to time…

So, is Artificial Intelligence the solution for all current and future cybersecurity challenges? Perhaps, but please do not let over-expectations or fears affect your purchase decisions. Thanks to the ongoing developments in both narrow and general AI, we already have much better security tools than just a few years ago. Yet, when planning your future security strategy, you still must think in terms of risks and the capabilities needed to mitigate them, not in terms of technologies.

Also, don’t forget that cybercriminals can use AI to create better malware, too. In fact, things are just starting to get interesting!

Oslo, We Have a Problem!

As you have certainly already heard, Norsk Hydro, one of the world’s largest aluminum manufacturers and the second biggest hydropower producer in Norway, suffered a massive cyberattack earlier today. According to a very short statement issued by the company, the attack has impacted operations in several of its business areas. To maintain the safety and continuity of their industrial processes, many of the operations had to be switched to manual mode.

The details of the incident are still pretty sparse, but according to the statement at the company’s press conference, it may have been hit by a ransomware attack. Researchers are currently speculating that it was most likely LockerGoga, a strain of malware that hit the French engineering company Altran Technologies back in January. This particular strain is notable for having been signed with a valid digital certificate, although that certificate has since been revoked. Also, only a few antimalware products are currently able to detect and block it.

It appears that the IT people at Norsk Hydro are currently trying to contain the fallout from the attack, including asking their employees not to turn on their computers and even shutting down the corporate website. Multiple shifts are working manually at the production facilities to ensure that there is no danger to people’s safety and to minimize financial impact.

We will hopefully see more details about the incident later, but what can we learn from Norsk Hydro’s initial response? First and foremost, we have another confirmation that this kind of incident can happen to anybody. No company, regardless of its industry, size or security budget, can assume that its business or industrial networks are immune to such attacks, or that it already has controls in place that defend against all possible security risks.

Second, here we have another textbook example of how not to handle public relations during a security incident. We can assume that a company of that scale should have at least some kind of plan for worst-case scenarios like this – but does it go beyond playbooks for security experts? Have the company’s executives ever been trained to prepare for such level of media attention? And whose idea was it anyway to limit public communications to a Facebook page?

Studies in other countries (like this report from the UK government) indicate that companies are shockingly unprepared for such occasions, with many lacking even a basic incident response plan. However, even having one on paper does not guarantee that everything will go according to it. The key to effective incident management is preparation, and this should include awareness among all the people involved, clearly defined roles and responsibilities, access to external experts if needed, but above anything else – practice!

KuppingerCole’s top three recommendations would be the following:

  1. Be prepared! You must have an incident response plan that covers not just the IT aspects of a cyberattack, but also the organizational, legal, financial and public relations means of dealing with its fallout. It is essential that the company’s senior executives are involved in its design and rehearsals, since they will be front and center of any actual response.
  2. Invest in the right technologies and products to reduce the impact of cyber incidents as well as those to prevent them from happening in the first place. Keep in mind however that no security tool vendor can do the job of assessing the severity and likelihood of your own business risks. Also, always have a backup set of tools and even “backup people” ready to ensure that essential business operations can continue even during a full shutdown.
  3. You will need help from specialists in multiple areas ranging from cyber forensics to PR, and most companies do not have all those skills internally. Look for partnerships with external experts – and do it before an incident occurs.

If you need neutral and independent advice, we are here to assist you as well!

Building Trust by Design

Trust has somehow become a marketing buzzword recently. There is a lot of talk about “redefining trust”, “trust technologies” or even “trustless models” (the latter is usually applied to Blockchain, of course). To me, this has always sounded… weird.

After all, trust is the foundation of the very society we live in, the key notion underlying the “social contract” that allows individuals to coexist in a mutually beneficial way. For businesses, trust has always been the resulting combination of two crucial driving forces – reputation and regulation. Gaining a trustworthy reputation takes time but ruining it can be instantaneous – and it is usually in a business’s best interest not to cheat its customers, or at least not to get caught (and that’s exactly where regulation comes into play!). Through a lengthy process of trial and error, we have more or less figured out how to maintain trust in traditional “tangible” businesses. And then the Digital Transformation happened.

Unfortunately, the dawn of the digital era has not only enabled many exciting new business models but also completely shattered the existing checks and balances. On one hand, the growing complexity of IT infrastructures and the resulting skills shortage made sensitive digital data much more vulnerable to cyberattacks and breaches. On the other hand, unburdened by regulations and free from public scrutiny, many companies have decided that the lucrative business of hoarding and reselling personal information is worth more than any moral obligation towards their customers. In a way, the digital transformation has brought back the Wild West mentality to modern businesses – complete with gangs of outlaws, bounty hunters, and snake oil peddlers…

All this has led to a substantial erosion of public trust – between another high-profile data breach and a political scandal about harvesting personal data, people no longer know whom to trust. From banks and retailers to social media and tech companies – this “trust meltdown” isn’t just bad publicity; it leads to substantial brand damage and financial losses. The recent introduction of strict data protection regulations like GDPR, with their massive fines for privacy violations, is a sign that legislation is finally catching up, but will compliance alone fix the trust issue? What other methods and technologies can companies utilize to restore their reputations?

Well, the first and foremost measure is always transparency and open communication with customers. And this isn’t just limited to breach disclosure – on the contrary, companies must demonstrate their willingness to improve data protection and educate customers about the hidden challenges of the “digital society”. Another obvious approach is simply minimizing personal data collection and implementing proper consent management. Sure, this is already one of the primary stipulations of regulations like GDPR, but compliance isn’t even the primary benefit here: for many companies, the cost savings on data protection and the reputation improvements alone will already outweigh the potential (and constantly dwindling) profits from collecting more PII than necessary.

Finally, we come to the notion of security and privacy “by design”. This term has also become a buzzword for security vendors eager to sell you another data protection or cybersecurity solution. Again, it’s important to stress that just purchasing a security product does not automatically make a business more secure and thus more trustworthy. However, incorporating certain security- and privacy-enhancing technologies into the very fabric of your business processes may, in fact, bring noticeable improvements, and not just to your company’s public reputation.

Perhaps the most obvious example of such a technology is encryption. It’s ubiquitous, cheap to implement and gives you a warm feeling of safety, right? Yes, but making encryption truly pervasive and end-to-end, ensuring that it covers all environments from databases to cloud services, and, last but not least, that the keys are managed properly is not an easy challenge. However, to make data-centric security the foundation of your digital business, you need to go deeper still. Without identity, modern security simply cannot fulfill its potential, so you’ll need to add dynamic centralized access control to the mix. And then security monitoring and intelligence with a pinch of AI. Thus, step by step, you’ll eventually reach the holy grail of modern IT – Zero Trust (wait, weren’t we going to boost trust, not get rid of it? Alas, that’s the misleading nature of many popular buzzwords nowadays).

For software development companies, investing into security by design can look complicated at first, too. From source code testing to various application hardening techniques to API security – writing secure applications is hard, and modern technologies like containers and microservices make it even harder, don’t they? This cannot be farther from the truth, however: modern development methodologies like DevOps and DevSecOps are in fact focusing on reducing the strain on programmers with intelligent automation, unified architectures across hybrid environments, and better experience for users, who are learning to appreciate programs that do not break under high load or cyberattacks.

But it does not even have to be that complicated. Consider Consumer Identity and Access Management platforms, for example. Replacing a homegrown user management system with such a platform not only dramatically improves the experience for your current and potential customers – with built-in privacy and consent management features, it also gives users better control over their online identities, boosting their trust considerably. And in the end, you get to know your customers better while reducing your own investments into IT infrastructure and operations. It can’t really get better than this.

You see, trust, privacy, and security don’t have to be a liability and a financial burden. With an open mind and a solid strategy, even the harshest compliance regulations can be turned into new business enablers, cost-saving opportunities and powerful messages to the public. And we are always here to support you on this journey.

Who's the Best Security Vendor of Them All?

This week I had an opportunity to visit the city of Tel Aviv, Israel to attend one of the Microsoft Ignite | The Tour events the company is organizing to bring the latest information about their new products and technologies closer to IT professionals around the world. Granted, the Tour includes other cities closer to home as well, but the one in Tel Aviv was supposed to have an especially strong focus on security – and the weather in January is so warm – so here I was!

I do have to confess, however, that the first day was somewhat boring – although I can imagine that the roughly 2,000 visitors were enjoying the show, for me as an analyst most of the information presented in the sessions wasn’t really that new. But on the second day, we visited the Microsoft Israel Development Center in nearby Herzliya and had a chance to talk directly to the people leading the development of some of the most interesting products in Microsoft’s security portfolio.

At this moment some readers would probably ask me: wait a minute, are you suggesting that Microsoft is really a security vendor, let alone the best one? Well, that’s where it starts getting interesting! In one of the sessions, the speaker made a strong point for the notion of “good enough security”, explaining that most end-user companies do not really need the best of breed security products, because they’ll eventually end up with a massive number of disjointed tools that need to be managed separately.

Not only does it further increase the complexity of a corporate IT infrastructure that is already complex enough without security; these disconnected tools also fail to deliver a unified view into everything happening within it and are thus unable to detect the most advanced cyber threats. Instead, he argued, a perfectly integrated solution covering multiple areas of cybersecurity would be more beneficial for most, even if it’s not the best of breed in individual areas. And who has the best opportunity to offer such an integrated solution? Well, Microsoft of course, given their leading positions in several key markets: on endpoints with Windows, in the cloud with Azure and, of course, in the workplace with Office 365.

Now, I’m not sure I like the term “good enough security” and I definitely do not believe that market domination in one area automatically translates into better opportunities in others, but there is actually a grain of truth behind this bold claim. First of all, being present on so many endpoints, cloud servers, mail servers, and other connected systems, Microsoft is able to collect vast amounts of telemetry data that end up in their Intelligent Security Graph – a single database of security events that can provide security insights and threat intelligence.

Second, even though many people still do not realize it, Microsoft has been a proper security vendor for quite some time already. Even though the company was a late starter in many areas, they are quickly closing the gaps in areas like Endpoint Protection or Cloud Security and in others, like Information Protection, they are already ahead of competitors. In recent years, the company has acquired a number of security startups, primarily here in Israel, and making these new products work together seamlessly has been one of their top priorities. This will certainly not happen overnight but talking to the actual developers gave me a strong impression of their motivation and commitment.

Now, Microsoft has an interesting history of working hard for years to win a completely new market, with impressive successes (like Azure or Xbox) and spectacular failures (remember Windows Mobile?). It also seems that technological excellence plays less of a role here than quality marketing. Unfortunately, this is where the company is still falling short – for example, how many potential customers are even considering Windows Defender Advanced Threat Protection for a shortlist of EDR solutions? Do they even know that Windows Defender is a full-featured EPP/EDR solution and not just the basic antivirus it used to be?

It seems to me that the company is still exploring their marketing strategy, judging by the number of new product names and licensing changes I’ve seen during the last year. We’re down to 4 product lines now, but I really wish they’d choose one name and stick to it. In the end, do I think that Microsoft is the best security vendor of them all? Of course not, they still have a very long way to go towards that, and there is no such thing as the single “best” security vendor anyway. But they are definitely already beyond the “good enough” stage.

AWS re:Invent Impressions

This year’s flagship conference for AWS – the re:Invent 2018 in Las Vegas – has just officially wrapped. Continuing the tradition, it has been bigger than ever – with more than 50 thousand attendees, over 2000 sessions, workshops, hackathons, certification courses, a huge expo area, and, of course, tons of entertainment programs. Kudos to the organizers for pulling off an event of this scale – I can only imagine the amount of effort that went into it.

I have to confess, however: maybe it’s just me getting older and grumpier, but at times I couldn’t stop thinking that this event is a bit too big for its own good. With the premises spanning no fewer than 7 resorts along the Las Vegas Boulevard, the simple task of getting to your next session becomes a time-consuming challenge. I have no doubt, however, that most of the attendees enjoyed the event program immensely, because application development is supposed to be fun – at least according to the developers themselves!

Apparently, this approach is deeply rooted in the AWS corporate culture as well – their core target audience is still “the builders”: people who already have the goals, skills and desire to create new cloud-native apps and services, and all they need are the necessary tools and building blocks. And that’s exactly what the company is striving to offer – the broadest choice of tools and technologies at the most competitive prices.

Looking at the business stats, it’s obvious that the company remains the distant leader in Infrastructure-as-a-Service (IaaS) – with such a huge scale advantage over its competitors, it can outpace them for years even if its relative growth slows down. Although there have been discussions in the past about whether AWS has a substantial Platform-as-a-Service (PaaS) offering, these can easily be dismissed now – in a sense, “traditional PaaS” is no longer that relevant, giving way to modern technology stacks like serverless and containers. Both are strategic for AWS, and, with the latest announcements about expanding the footprint of the Lambda platform, one can say that the competition in the “next-gen PaaS” field will be even tougher.

Perhaps the only part of the cloud playing field where AWS continues to be notoriously absent is Software-as-a-Service (SaaS) and more specifically enterprise application suites. The company’s own rare forays into this field are unimpressive at best, and the general strategy seems to be “leave it to the partners and let them run their services on AWS infrastructure”. In a way, this reflects the approach Microsoft has been following for decades with Windows. Whether this approach is sustainable in the long term or whether cloud service providers should rather look at Apple as their inspiration – that’s a topic that can be debated for hours… In my opinion, this situation leaves a substantial opening in the cloud market for competitors to catch up and overtake the current leader eventually.

The window of opportunity is already shrinking, however, as AWS is aiming at expanding into new markets and doing just about anything technology-related better (or at least bigger and cheaper) than their competitors, as the astonishing number of new product and service announcements during the event shows. They span from the low-level infrastructure improvements (faster hardware, better elasticity, further cost reductions) to catching up with competitors on things like managed Blockchain to all-new almost science fiction-looking stuff like design of robots and satellite management.

However, to me as an analyst, the most important change in the company’s strategy has been their somewhat belated realization that not all their users are “passionate builders”. And even those who are, are not necessarily considering the wide choice of available tools a blessing. Instead, many are looking at the cloud as a means to solve their business problems and the first thing they need is guidance. And then security and compliance. Services like AWS Well-Architected Tool, AWS Control Tower and AWS Security Hub are the first step in the right direction.

Still, the star topic of the whole event was undoubtedly AI/ML. With a massive number of new announcements, AWS clearly indicates that its goal is to make machine learning accessible not just for hardcore experts and data scientists, but to everyone, no ML expertise required. With their own machine learning inference chips along with the most powerful hardware to run model training and a number of significant optimizations in frameworks running on them, AWS promises to become the platform for the most cutting-edge ML applications. However, on the other end, the ability to package machine learning models and offer them on the AWS Marketplace almost as commodity products makes these applications accessible to a much broader audience – another step towards “AI-as-a-Service”.

Another major announcement is the company’s answer to their competitors’ hybrid cloud developments – AWS Outposts. Here, the company’s approach is radically different from offerings like Microsoft’s Azure Stack or Oracle Cloud at Customer: AWS has decided not to try to package their whole public cloud “in a box” for on-premises deployment. Instead, only the key services like storage and compute instances (the ones that really have to remain on-premises because of compliance or latency considerations, for example) are brought to your data center, while the whole control plane remains in the cloud, and these local services will appear as a seamless extension of the customer’s existing virtual private cloud in their region of choice. The idea is that customers will be able to launch additional services on top of this basic foundation locally – for example, for databases, machine learning or container management. To manage Outposts, AWS offers two choices of control plane: either the company’s native management console or VMware Cloud management tools and APIs.

Of course, this approach won’t be able to address certain use cases like occasionally-connected remote locations (on ships, for example), but for a large number of customers, AWS Outposts promises significantly reduced complexity and better manageability of their hybrid solutions. Unfortunately, not many technical details have been revealed yet, so I’m looking forward to further updates.

There were a number of announcements regarding AWS's database portfolio, meaning that customers now have an even greater number of database engines to choose from. Here, however, I'm not necessarily buying into the notion that more choice translates into more possibilities. Surely, managed MySQL, Memcached or any other open source database will be "good enough" for a vast number of use cases, but meeting the demands of large enterprises is a different story. Perhaps a topic for an entirely separate blog post.

Oh, and although I absolutely recognize the value of a “cryptographically verifiable ledger with centralized trust” for many use cases which people currently are trying (and failing) to implement with Blockchains, I cannot but note that “Quantum Ledger Database” is a really odd choice of a name for one. What does it have to do with quantum computing anyway?

After databases, the expansion of the company’s serverless compute portfolio was the second biggest part of AWS CTO Werner Vogels’ keynote. Launched four years ago, AWS Lambda has proven to be immensely successful with developers as a concept, but the methods of integrating this radically different way of developing and running code in the cloud into traditional development workflows were not particularly easy. This year the company has announced multiple enhancements both to the Lambda engine itself – you can now use programming languages like C++, PHP or Cobol to write Lambda functions or even bring your own custom runtime – and to the developer toolkit around it including integrations with several popular integrated development environments.

Notably, the whole serverless computing platform has been re-engineered to run on top of AWS's own lightweight virtualization technology called Firecracker, which ensures more efficient resource utilization and better tenant isolation. This translates into better security for customers and even further potential for cost savings.

These were the announcements that have especially caught my attention during the event. I’m pretty sure that you’ll find other interesting things among all the re:Invent 2018 product announcements. Is more always better? You decide. But it sure is more fun!

Impressions from the Oracle OpenWorld

Recently I was in San Francisco again, attending the Oracle OpenWorld for the second time. Just like last year, I cannot but commend the organizers for making the event even bigger, more informative and more convenient to attend – by all means not a small feat when you consider the crowd of over 60,000 attendees from 175 countries. By setting up a separate press and analyst workspace in an isolated corner of the convention center, the company gave us the opportunity to work more productively and to avoid the noisy exposition floor environment, thus effectively eliminating the last bit of critique I had for the event back in 2017.

Thematically, things have definitely improved as well, at least for me as an analyst focusing on cybersecurity. Surely, the Autonomous Database continued to dominate, but as opposed to the last year’s mostly theoretical talks about a future roadmap, this time visitors had ample opportunity to see the products (remember, there are two different editions already available: one optimized for data warehouses and another for transactional processing and mixed loads) in action, to talk directly to the technical staff and to learn about the next phase of Oracle’s roadmap for 2019, which includes dedicated Exadata Cloud infrastructure for the most demanding customers as well as Autonomous Database in the Cloud at Customer, which is the closest thing to running an autonomous database on premises.

Last but not least, we had a chance to talk to real customers sharing their success stories. I have to confess however that I had somewhat mixed feelings about those stories. On one hand, I can absolutely understand Oracle’s desire to showcase their most successful customer projects, where things “just worked” after migrating a database to the Oracle Cloud and there were no challenges whatsoever. But for us analysts (and even more so for our customers that are not necessarily already heavily invested into Oracle databases) stories like this sound a bit trivial. What about migrating from a different platform? What about struggles to overcome unexpected obstacles? We need more drama, Oracle!

Another major topic – this year's big reveal – was however not about databases at all. During his keynote, Larry Ellison announced Oracle's Generation 2 Cloud, which is completely redesigned with the "security first" principle in mind. His traditionally dramatic presentation mentioned lasers and threat-fighting robots, but regardless of all that, the idea of a "secure by design" cloud is a pretty big deal. The two big components of Oracle's next-generation cloud infrastructure are "dedicated Cloud Control Computers" and "intelligent analytics".

The former provide a completely separate control plane and security perimeter of the cloud infrastructure, ensuring that every customer’s resources are not only protected by an isolation barrier from external threats but also that other tenants’ rogue admins cannot somehow access and exploit them. At the same time, this isolation barrier prevents Oracle’s own engineers from ever having unauthorized access to customer data. In addition to that, machine learning is supposed to provide additional capabilities not just for finding and mitigating security threats but also to reduce administration costs by means of intelligent automation.

Combined with the brand-new bare-metal computing infrastructure and fast, low-latency RDMA network, this forms the foundation of Oracle’s new cloud, which the company promises to be not just faster than any competitor, but also substantially cheaper (yes, no Oracle keynote can dispense with poking fun at AWS).

However, the biggest and most omnipresent topic of this year's OpenWorld was undoubtedly Artificial Intelligence. And although I really hate the term and the ways it has been used and abused in marketing, I cannot but appreciate Oracle's strategy around it. In his second keynote, Larry Ellison talked about how machine learning and AI should not be seen as standalone tools, but as new business enablers, which will eventually find their way into every business application, enabling not just productivity improvements but completely new ways of doing business that were simply impossible before. In a business application, machine learning should not just assist you with getting better answers to your questions but offer you better questions as well (finding hidden correlations in business data, optimizing costs, fighting fraud and so on). Needless to say, if you want to use such intelligent business apps today, you're expected to look no further than the Oracle Cloud :)

Making Sense of the Top Cybersecurity Trends

With each passing year, the CISO’s job is not becoming any easier. As companies continue embracing the Digital Transformation, the growing complexity and openness of their IT infrastructures mean that the attack surface for hackers and malicious insiders is increasing as well. Combined with the recent political developments such as the rise of state-sponsored attacks, new surveillance laws, and harsh privacy regulations, security professionals now have way too many things on their hands that sometimes keep them awake at night. What’s more important – protecting your systems from ransomware or securing your cloud infrastructure? Should you invest in CEO fraud protection or work harder to prepare for a media fallout after a data breach? Decisions, decisions…

The skills gap problem is often discussed by the press, but journalists usually focus more on the lack of IT experts who are needed to operate complex and sprawling cybersecurity infrastructures. Alas, the related problem of making wrong strategic decisions about the technologies and tools to purchase and deploy is not mentioned that often, but it is precisely the reason for the "cargo cult of cybersecurity". Educating the public about modern IT security trends and technologies will be a big part of our upcoming Cybersecurity Leadership Summit, which will be held in Berlin this November, and last week, my fellow analyst John Tolbert and I presented a sneak peek into this topic by dispelling several popular misconceptions.

After a lengthy discussion about choosing just five out of the multitude of topics we’ll be covering at the summit, we came up with a list of things that, on one hand, are generating enough buzz in the media and vendors’ marketing materials and, on the other hand, are actually relevant and complex enough to warrant a need to dig into them. That’s why we didn’t mention ransomware, for example, which is actually declining along with the devaluation of popular cryptocurrencies…

Artificial Intelligence in Cybersecurity

Perhaps the biggest myth about Artificial Intelligence / Machine Learning (which, incidentally, are not the same even though both terms are often used interchangeably) is that it's a cutting-edge technology that has arrived to solve all our cybersecurity woes. This couldn't be further from the truth, though: the origins of machine learning predate digital computers. Neural networks were invented back in the 1950s and some of their applications are just as old. It's only the recent surge in available computing power, thanks to commodity hardware and cloud computing, that has caused this triumphant entry of machine learning into so many areas of our daily lives.

In his recent blog post, our fellow analyst Mike Small provided a concise overview of various terms and methods related to AI and ML. To his post, I can only add that applications of these methods to cybersecurity are still very much a field of academic research that is yet to mature into advanced off-the-shelf security solutions. Most products that are currently sold with "AI/ML inside" stickers on their boxes are in reality limited to the most basic ML methods that enable faster pattern or anomaly detection in log files. Only some of the more advanced ones offer higher-level functionality like actionable recommendations and improved forensic analysis. Finally, true cognitive technologies like natural language processing and AI-powered reasoning are just beginning to be adapted to cybersecurity applications by a few visionary vendors.
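To illustrate just how basic that "AI/ML inside" functionality often is, here is a minimal sketch of statistical anomaly detection on log data: flag time intervals whose event counts deviate from the mean by more than a few standard deviations (a simple z-score test). All names and numbers are illustrative assumptions, not any vendor's actual implementation.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Return indices of intervals whose event count deviates from
    the mean by more than `threshold` standard deviations."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing to flag
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hypothetical hourly login-failure counts; the spike at index 5
# is the kind of pattern such tools surface for an analyst.
counts = [12, 9, 11, 10, 13, 250, 12, 11]
print(flag_anomalies(counts, threshold=2.0))  # [5]
```

This is the whole trick behind many a marketing claim: useful for surfacing outliers faster than a human scanning log files, but a long way from "cognitive" security.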

It's worth stressing, however, that such solutions will probably never completely replace human analysts, if only because of the numerous legal and ethical problems associated with decisions made by an "autonomous AI". If anything, it is the cybercriminals, unburdened by moral inhibitions, whom we will see among the earliest adopters…

Zero Trust Security

The Zero Trust paradigm is rapidly gaining popularity as a modern alternative to the traditional perimeter-based security, which can no longer provide sufficient protection against external and internal advanced cyberthreats. An IT infrastructure designed around this model treats every user, application or data source as untrusted and enforces strict security, access control, and comprehensive auditing to ensure visibility and accountability of all user activities.

However, just like with any other hyped trend, there is a lot of confusion about what Zero Trust actually is. Fueled by massive marketing campaigns by vendors trying to get into this lucrative new market, a popular misconception is that Zero Trust is some kind of a “next-generation perimeter” that’s supposed to replace outdated firewalls and VPNs of old days.

Again, this couldn't be further from the truth. Zero Trust is above all a new architectural model, a combination of multiple processes and technologies. And although adopting the Zero Trust approach promises a massive reduction of the attack surface, reduction of IT complexity, and productivity improvements, there is definitely no off-the-shelf solution that magically transforms your existing IT infrastructure.

Going Zero Trust always starts with a strategy, which must be heterogeneous and hybrid by design. It involves discovering, classifying and protecting sensitive data; redefining identities for each user and device; establishing and enforcing strict access controls to each resource; and finally, continuous monitoring and audit of every activity. And remember: you should trust no one. Especially not vendor marketing!
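The "trust no one" principle above boils down to a default-deny decision on every single request. A toy sketch of such a per-request check might look like this; all field names and the grant structure are illustrative assumptions, not a reference to any product's policy model.

```python
# Zero Trust in miniature: no request is trusted by default. Every
# call must present a verified identity, a compliant device, and an
# explicit grant for the specific resource being accessed.
def authorize(request, grants):
    """Default-deny: allow only if every condition holds."""
    if not request.get("identity_verified"):
        return False
    if not request.get("device_compliant"):
        return False
    # Access is granted per (user, resource) pair, never broadly.
    return (request.get("user"), request.get("resource")) in grants

grants = {("alice", "payroll-db")}

req = {"identity_verified": True, "device_compliant": True,
       "user": "alice", "resource": "payroll-db"}
print(authorize(req, grants))   # True

# Same user, same verified session, different resource: denied,
# because nothing is implicitly trusted just for being "inside".
req2 = dict(req, resource="hr-db")
print(authorize(req2, grants))  # False
```

The point of the sketch is the shape of the decision, not the code: every access path ends in an explicit grant check, and there is no "trusted network" branch that skips it.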

Insider Threat Management

Ten years ago, the riskiest users in every company were undoubtedly the system administrators. Protecting the infrastructure and sensitive data from them potentially misusing their privileged access was the top priority. Nowadays, the situation has changed dramatically: every business user that has access to sensitive corporate data can, either inadvertently or with malicious intent, cause substantial damage to your business by leaking confidential information, disrupting access to a critical system or simply draining your bank account. The most privileged users in that regard are the CEO and CFO, and the number of new cyberattacks targeting them specifically is on the rise.

Studies show that cyberattacks focusing on infrastructure are becoming too complex and costly for hackers, so they are focusing on social engineering methods instead. One carefully crafted phishing mail can thus cause more damage than an APT attack that takes months of planning… And the best part is that the victims do all the work themselves!

Unfortunately, traditional security tools and even specialized Privileged Access Management solutions aren’t suitable for solving this new challenge. Again, the only viable strategy is to combine changes in existing business processes (especially those related to financial transactions) and a multi-layered deployment of different security technologies ranging from endpoint detection and response to email security to data loss prevention and even brand reputation management.

Continuous Authentication

Passwords are dead, biometric methods are easily circumvented, account hijacking is rampant… How can we still be sure that users are who they are claiming they are when they access a system or an application, from anywhere in the world and from a large variety of platforms?

One of the approaches that's been growing in popularity in recent years is adaptive authentication – the process of gathering additional context information about the users, their devices and other environmental factors and evaluating them according to risk-based policies. Such solutions usually combine multiple strong authentication methods and present the most appropriate challenge to the user based on their current risk level. However, even this quite complex approach is often not sufficient to combat advanced cyberattacks.

The continuous authentication paradigm takes this to the next level. By combining dynamic context-based authentication with real-time behavioral biometrics, it turns authentication from a single event into a seamless ongoing process and thus promises to reduce the impact of a credential compromise. This way, the user's risk score is not calculated just once during initial authentication but is constantly reevaluated over time, changing as the user moves into a different environment or reacting to anomalies in their behavior.
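Conceptually, such a risk score is just a running aggregation of risk signals, re-evaluated on every event rather than only at login, with the score mapped to an authentication challenge. A minimal sketch follows; all signal names, weights and thresholds are made-up assumptions for illustration, not any vendor's actual model.

```python
# Hypothetical continuous-authentication risk scorer: each observed
# signal contributes a weighted penalty toward a 0-100 risk score.
RISK_WEIGHTS = {
    "new_device": 30,
    "unusual_location": 25,
    "impossible_travel": 40,
    "typing_rhythm_mismatch": 20,
}

def risk_score(signals):
    """Sum the weights of all observed risk signals, capped at 100."""
    return min(100, sum(RISK_WEIGHTS.get(s, 0) for s in signals))

def required_action(score):
    """Map a risk score to an authentication response."""
    if score >= 70:
        return "terminate_session"
    if score >= 40:
        return "step_up_mfa"
    return "allow"

# Re-scored on every event: a mid-session location change plus a
# typing-rhythm anomaly pushes the user from 'allow' to a step-up
# challenge without waiting for the next login.
print(required_action(risk_score([])))
print(required_action(risk_score(["unusual_location",
                                  "typing_rhythm_mismatch"])))
```

Real products replace the static weight table with behavioral models, but the control flow – continuous scoring feeding a policy decision – is the essence of the paradigm.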

Unfortunately, this approach requires major changes in the way applications are designed, and modernizing legacy systems can be a major challenge. Another problem is continuous authentication's perceived invasiveness – many users do not feel comfortable being constantly monitored, and in many cases, such monitoring may even be illegal. Thus, although promising solutions are starting to appear on the market, the approach is still far from mainstream adoption.

Embedding a Cybersecurity Culture

Perhaps the biggest myth about cybersecurity is that it takes care of itself. Unfortunately, the history of the recent large-scale cybersecurity incidents clearly demonstrates that even the largest companies with massive budgets for security tools are not immune to attacks. Also, many employees and whole business units often see security as a nuisance that hurts their productivity and would sometimes go as far as to actively sabotage it, maintaining their own “shadow IT” tools and services.

However, the most common cause of security breaches is simple negligence stemming primarily from insufficient awareness, lack of established processes and general reluctance to be a part of corporate cybersecurity culture. Unfortunately, there is no technology that can fix these problems, and companies must invest more resources into employee training, teaching them the cybersecurity hygiene basics, explaining the risks of handling personal information and preparing them for the inevitable response to a security incident.

Even more important is for CISOs and other high-level executives to continuously improve their own awareness of the latest trends and developments in cybersecurity. And what better way to do that than meeting the leading experts at KuppingerCole's Cybersecurity Leadership Summit next month? See you in Berlin!

Future-Proofing Your Cybersecurity Strategy

It’s May 25 today, and the world hasn’t ended. Looking back at the last several weeks before the GDPR deadline, I have an oddly familiar feeling. It seems that many companies have treated it as another “Year 2000 disaster” - a largely imaginary but highly publicized issue that has to be addressed by everyone before a set date, and then it’s quickly forgotten because nothing has really happened.

Unfortunately, applying the same logic to GDPR is the biggest mistake a company can make. First of all, obviously, you can only be sure that all your previous preparations actually worked after they are tested in courts, and we all hope this happens to us as late as possible. Furthermore, GDPR compliance is not a one-time event, it’s a continuous process that will have to become an integral part of your business for years (along with other regulations that will inevitably follow). Most importantly, however, all the bad guys out there are definitely not planning to comply and will double their efforts in developing new ways to attack your infrastructure and steal your sensitive data.

In other words, it’s business as usual for cybersecurity specialists. You still need to keep up with the ever-changing cyberthreat landscape, react to new types of attacks, learn about the latest technologies and stay as agile and flexible as possible. The only difference is that the cost of your mistake will now be much higher. On the other hand, the chance that your management will give you a bigger budget for security products is also somewhat bigger, and you have to use this opportunity wisely.

As we all know, the cybersecurity market is booming, since companies are spending billions on it, but the net effect of this increased spending seems to be quite negligible – the number of data breaches or ransomware attacks is still going up. Is it a sign that many companies still view cybersecurity as a kind of a magic ritual, a cargo cult of sorts? Or is it caused by a major skills gap, as the world simply doesn’t have enough experts to battle cybercriminals efficiently?

It's probably both, and the key underlying factor here is the simple fact that in the age of Digital Transformation, cybersecurity can no longer be a problem of your IT department only. Every employee is now constantly exposed to security threats, and humans, not computers, are now the weakest link in any security architecture. Unless everyone is actively involved, there will be no security anymore. Luckily, we already see awareness of this fact growing steadily among developers, for example. The whole notion of DevSecOps revolves around integrating security practices into all stages of the software development and operations cycle.

However, that is by far not enough. As business people like your CFO, not administrators, are becoming the most privileged users in your company, you have to completely rethink substantial parts of your security architecture to address the fact that a single forged email can do more harm to your business than the most sophisticated zero-day exploit. Remember, the victim is doing all the work here, so no firewall or antivirus will stop this kind of attack!

To sum it all up, a future-proof cybersecurity strategy in the "post-GDPR era" must, of course, be built upon a solid foundation of data protection and privacy by design. But that alone is not enough – only by constantly raising awareness of the newest cyberthreats among all employees and by gradually increasing the degree of intelligent automation of your daily security operations do you have a chance of staying compliant with the strictest regulations at all times.

Humans and robots fighting cybercrime together – what a time to be alive! :)

How (Not) to Achieve Instant GDPR Compliance

With mere days left till the dreaded General Data Protection Regulation comes into force, many companies, especially those not based in the EU, still haven’t quite figured out how to deal with it. As we mentioned countless times earlier, the upcoming GDPR will profoundly change the way companies collect, store and process personal data of any EU resident. What is understood as personal data and what is considered processing is very broad and is only considered legal if it meets a number of very strict criteria. Fines for non-compliance are massive – up to 20 million Euro or 4% of a company’s annual turnover, whichever is higher.

Needless to say, not many companies feel happy about massive investments they’d need to make into their IT infrastructures, as well as other costs (consulting, legal and even PR-related) of compliance. And while European businesses don’t really have any other options, quite a few companies based outside of the EU are considering pulling out of the European market completely. A number of them even made their decision public, although we could safely assume that most would rather keep the matters quiet.

But if you really decide to erect a “digital Iron Curtain” between you and those silly Europeans with their silly privacy laws, how can you be sure it’s really impenetrable? And even if it is, is that a viable strategy at all? The easiest solution is obviously geofencing – just block all access to your website from any known European IP range. That’s something a reasonably competent network administrator can do in under an hour or so. There are even companies that would do it for you, for a monthly fee. One such service, aptly named GDPR Shield, offers a simple JavaScript snippet you need only to paste into your site’s code. Sadly, the service seems to be unavailable at the moment, probably unable to keep up with all the demand…
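For the curious, the geofencing "solution" described above really is about this trivial: check each request's source address against a list of European IP ranges and drop the matches. The sketch below uses Python's standard ipaddress module; the CIDR blocks are illustrative placeholders only – real European allocations number in the tens of thousands and change constantly.

```python
import ipaddress

# Hypothetical, deliberately incomplete "EU" address ranges –
# a real blocklist would be huge and perpetually out of date.
EU_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "2.16.0.0/13",
    "5.8.0.0/13",
    "31.0.0.0/16",
)]

def is_blocked(ip_str):
    """Naive geofence: True if the address falls in any listed range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in EU_RANGES)

print(is_blocked("5.10.20.30"))  # True  - inside 5.8.0.0/13
print(is_blocked("8.8.8.8"))     # False
```

An hour of admin work, as promised – and, as the next paragraph explains, largely pointless.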

However, before you even start looking for other similar solutions, consider one point: the GDPR protects EU data subjects' privacy regardless of their geographic location. A German citizen staying in the US and using a US-based service is, at least in theory, supposed to have the same control over their PII as back home. And even without traveling, an IP blacklist can be easily circumvented using readily available tools like a VPN. Trust me, Germans know how to use them – until recently, the majority of YouTube videos were not available in Germany because of a copyright dispute, so a VPN was needed to enjoy "Gangnam Style" or any other musical hit of the time.

On the other hand, thinking that the EU intends to track every tiniest privacy violation worldwide and then drag every offender to court is ridiculous; just consider the huge resources the European bureaucrats would need to put into a campaign of that scale. In reality, their first targets will undoubtedly be the likes of Facebook and Google – large companies whose business is built upon collecting and reselling their users' personal data to third parties. So, unless your business is in the same market as Cambridge Analytica, you should probably reconsider the idea of blocking out European visitors – after all, you'd miss nearly 750 million potential customers from the world's largest economy.

Finally, the biggest mistake many companies make is to think that GDPR’s sole purpose is to somehow make their lives more miserable and to punish them with unnecessary fines. However, like any other compliance regulation, GDPR is above all a comprehensive set of IT security, data protection and legal best practices. Complying with GDPR - even if you don’t plan to do business in the EU market - is thus a great exercise that can prepare your business for some of the most difficult challenges of the Digital Age. Maybe in the same sense as a volcano eruption is a great test of your running skills, but running exercises are still quite useful even if you do not live in Hawaii.
