Blog posts by Alexei Balaganski

AWS re:Invent Impressions

This year’s flagship conference for AWS – the re:Invent 2018 in Las Vegas – has just officially wrapped. Continuing the tradition, it has been bigger than ever – with more than 50 thousand attendees, over 2000 sessions, workshops, hackathons, certification courses, a huge expo area, and, of course, tons of entertainment programs. Kudos to the organizers for pulling off an event of this scale – I can only imagine the amount of effort that went into it.

I have to confess, however: maybe it’s just me getting older and grumpier, but at times I couldn’t stop thinking that this event has become a bit too big for its own good. With the premises spanning no fewer than seven resorts along the Las Vegas Boulevard, the simple task of getting to your next session becomes a time-consuming challenge. I have no doubt, however, that most of the attendees enjoyed the event program immensely, because application development is supposed to be fun – at least according to the developers themselves!

Apparently, this approach is deeply rooted in the AWS corporate culture as well – their core target audience is still “the builders”: people who already have the goals, skills and desire to create new cloud-native apps and services, and the only thing they need is the right set of tools and building blocks. And that’s exactly what the company strives to offer – the broadest choice of tools and technologies at the most competitive prices.

Looking at the business stats, it’s obvious that the company remains a distant leader when it comes to Infrastructure-as-a-Service (IaaS) – with such a huge scale advantage over its competitors, the company can keep outpacing them for years even if its relative growth slows down. Although there have been discussions in the past about whether AWS has a substantial Platform-as-a-Service (PaaS) offering, those can be easily dismissed now – in a sense, “traditional PaaS” is no longer that relevant, giving way to modern technology stacks like serverless and containers. Both are strategic for AWS, and, with the latest announcements about expanding the footprint of the Lambda platform, one can say that the competition in the “next-gen PaaS” field will only get tougher.

Perhaps the only part of the cloud playing field where AWS continues to be notoriously absent is Software-as-a-Service (SaaS) and more specifically enterprise application suites. The company’s own rare forays into this field are unimpressive at best, and the general strategy seems to be “leave it to the partners and let them run their services on AWS infrastructure”. In a way, this reflects the approach Microsoft has been following for decades with Windows. Whether this approach is sustainable in the long term or whether cloud service providers should rather look at Apple as their inspiration – that’s a topic that can be debated for hours… In my opinion, this situation leaves a substantial opening in the cloud market for competitors to catch up and overtake the current leader eventually.

The window of opportunity is already shrinking, however, as AWS is aiming to expand into new markets and to do just about anything technology-related better (or at least bigger and cheaper) than its competitors, as the astonishing number of new product and service announcements during the event shows. They range from low-level infrastructure improvements (faster hardware, better elasticity, further cost reductions) to catching up with competitors on things like managed blockchain, all the way to almost science-fiction territory such as robotics development and satellite ground stations.

However, to me as an analyst, the most important change in the company’s strategy has been their somewhat belated realization that not all their users are “passionate builders”. And even those who are don’t necessarily consider the wide choice of available tools a blessing. Instead, many look at the cloud as a means to solve their business problems, and the first thing they need is guidance. And then security and compliance. Services like the AWS Well-Architected Tool, AWS Control Tower and AWS Security Hub are a first step in the right direction.

Still, the star topic of the whole event was undoubtedly AI/ML. With a massive number of new announcements, AWS clearly indicates that its goal is to make machine learning accessible not just to hardcore experts and data scientists, but to everyone, no ML expertise required. With its own machine learning inference chips, the most powerful hardware for model training, and a number of significant optimizations in the frameworks running on them, AWS promises to become the platform for the most cutting-edge ML applications. On the other end of the spectrum, the ability to package machine learning models and offer them on the AWS Marketplace almost as commodity products makes these applications accessible to a much broader audience – another step towards “AI-as-a-Service”.

Another major announcement is the company’s answer to their competitors’ hybrid cloud developments – AWS Outposts. Here, the company’s approach is radically different from offerings like Microsoft’s Azure Stack or Oracle Cloud at Customer: AWS has decided not to try to package its whole public cloud “in a box” for on-premises deployment. Instead, only the key services like storage and compute instances (the ones that really have to remain on-premises because of compliance or latency considerations, for example) are brought into your data center, while the whole control plane remains in the cloud, and these local services appear as a seamless extension of the customer’s existing virtual private cloud in their region of choice. The idea is that customers will be able to launch additional services on top of this basic foundation locally – for example, for databases, machine learning or container management. To manage Outposts, AWS offers a choice of two control planes: either the company’s native management console or VMware Cloud management tools and APIs.

Of course, this approach won’t be able to address certain use cases like occasionally-connected remote locations (on ships, for example), but for a large number of customers, AWS Outposts promises significantly reduced complexity and better manageability of their hybrid solutions. Unfortunately, not many technical details have been revealed yet, so I’m looking forward to further updates.

There were a number of announcements regarding AWS’s database portfolio, meaning that customers now have an even wider choice of database engines. Here, however, I’m not necessarily buying into the notion that more choice translates into more possibilities. Surely, managed MySQL, Memcached or any other open source database will be “good enough” for a vast number of use cases, but meeting the demands of large enterprises is a different story. Perhaps a topic for an entirely separate blog post.

Oh, and although I absolutely recognize the value of a “cryptographically verifiable ledger with centralized trust” for many use cases which people currently are trying (and failing) to implement with Blockchains, I cannot but note that “Quantum Ledger Database” is a really odd choice of a name for one. What does it have to do with quantum computing anyway?

After databases, the expansion of the company’s serverless compute portfolio was the second biggest part of AWS CTO Werner Vogels’ keynote. Launched four years ago, AWS Lambda has proven immensely successful with developers as a concept, but integrating this radically different way of developing and running code in the cloud into traditional development workflows has not been particularly easy. This year the company announced multiple enhancements both to the Lambda engine itself – you can now use programming languages like C++, PHP or COBOL to write Lambda functions, or even bring your own custom runtime – and to the developer toolkit around it, including integrations with several popular integrated development environments.
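Since custom runtimes are arguably the most interesting of these enhancements, here is a minimal sketch of what one looks like under the hood: a loop that talks to the Lambda Runtime API to fetch invocation events and post results back. It assumes the script is launched by a “bootstrap” file in a Lambda layer; the handler logic itself is just a placeholder.

```python
# Minimal custom runtime loop (sketch): poll the Lambda Runtime API for events,
# run a handler, and post the result back. Only the standard library is used.
import json
import os
import urllib.request

API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{API}/2018-06-01/runtime/invocation"

def handler(event):
    # Placeholder business logic: echo the incoming event back.
    return {"echo": event}

while True:
    # Long-poll for the next invocation event.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    result = json.dumps(handler(event)).encode()

    # Report the result of this invocation back to the Lambda service.
    urllib.request.urlopen(
        urllib.request.Request(f"{BASE}/{request_id}/response", data=result)
    )
```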

Notably, the whole serverless computing platform has been re-engineered to run on top of AWS’s own lightweight virtualization technology called Firecracker, which ensures more efficient resource utilization and stronger tenant isolation, translating into better security for customers and further potential for cost savings.

These were the announcements that have especially caught my attention during the event. I’m pretty sure that you’ll find other interesting things among all the re:Invent 2018 product announcements. Is more always better? You decide. But it sure is more fun!

Impressions from the Oracle OpenWorld

Recently I was in San Francisco again, attending the Oracle OpenWorld for the second time. Just like last year, I cannot but commend the organizers for making the event even bigger, more informative and more convenient to attend – by no means a small feat when you consider the crowd of over 60,000 attendees from 175 countries. By setting up a separate press and analyst workspace in an isolated corner of the convention center, the company gave us the opportunity to work more productively and to avoid the noisy exposition floor environment, thus effectively eliminating the last bit of critique I had for the event back in 2017.

Thematically, things have definitely improved as well, at least for me as an analyst focusing on cybersecurity. Surely, the Autonomous Database continued to dominate, but as opposed to last year’s mostly theoretical talks about a future roadmap, this time visitors had ample opportunity to see the products in action (remember, there are two different editions already available: one optimized for data warehouses and another for transactional processing and mixed workloads), to talk directly to the technical staff and to learn about the next phase of Oracle’s roadmap for 2019, which includes dedicated Exadata Cloud infrastructure for the most demanding customers as well as Autonomous Database in the Cloud at Customer, which is the closest thing to running an autonomous database on premises.

Last but not least, we had a chance to talk to real customers sharing their success stories. I have to confess, however, that I had somewhat mixed feelings about those stories. On one hand, I can absolutely understand Oracle’s desire to showcase its most successful customer projects, where things “just worked” after migrating a database to the Oracle Cloud and there were no challenges whatsoever. But for us analysts (and even more so for our customers that are not necessarily already heavily invested in Oracle databases) stories like this sound a bit trivial. What about migrating from a different platform? What about struggles to overcome unexpected obstacles? We need more drama, Oracle!

Another major topic – this year’s big reveal – was, however, not about databases at all. During his keynote, Larry Ellison announced Oracle’s Generation 2 Cloud, which has been completely redesigned with the “security first” principle in mind. His traditionally dramatic presentation mentioned lasers and threat-fighting robots, but regardless of all that, the idea of a “secure by design” cloud is a pretty big deal. The two big components of Oracle’s next-generation cloud infrastructure are “dedicated Cloud Control Computers” and “intelligent analytics”.

The former provide a completely separate control plane and security perimeter for the cloud infrastructure, ensuring not only that every customer’s resources are protected from external threats by an isolation barrier, but also that other tenants’ rogue admins cannot somehow access and exploit them. At the same time, this isolation barrier prevents Oracle’s own engineers from ever having unauthorized access to customer data. In addition to that, machine learning is supposed to provide additional capabilities, not just for finding and mitigating security threats but also for reducing administration costs by means of intelligent automation.

Combined with the brand-new bare-metal computing infrastructure and fast, low-latency RDMA network, this forms the foundation of Oracle’s new cloud, which the company promises to be not just faster than any competitor, but also substantially cheaper (yes, no Oracle keynote can dispense with poking fun at AWS).

However, the biggest and most omnipresent topic of this year’s OpenWorld was undoubtedly Artificial Intelligence. And although I really hate the term and the ways it has been used and abused in marketing, I cannot but appreciate Oracle’s strategy around it. In his second keynote, Larry Ellison talked about how machine learning and AI should not be seen as standalone tools, but as new business enablers, which will eventually find their way into every business application, enabling not just productivity improvements but completely new ways of doing business that were simply impossible before. In a business application, machine learning should not just assist you with getting better answers to your questions but offer you better questions as well (finding hidden correlations in business data, optimizing costs, fighting fraud and so on). Needless to say, if you want to use such intelligent business apps today, you’re expected to look no further than the Oracle Cloud :)

Making Sense of the Top Cybersecurity Trends

With each passing year, the CISO’s job is not becoming any easier. As companies continue embracing the Digital Transformation, the growing complexity and openness of their IT infrastructures mean that the attack surface for hackers and malicious insiders is increasing as well. Combined with the recent political developments such as the rise of state-sponsored attacks, new surveillance laws, and harsh privacy regulations, security professionals now have way too many things on their hands that sometimes keep them awake at night. What’s more important – protecting your systems from ransomware or securing your cloud infrastructure? Should you invest in CEO fraud protection or work harder to prepare for a media fallout after a data breach? Decisions, decisions…

The skills gap problem is often discussed in the press, but journalists usually focus more on the lack of IT experts who are needed to operate complex and sprawling cybersecurity infrastructures. Alas, the related problem of making wrong strategic decisions about the technologies and tools to purchase and deploy is not mentioned that often, but it is precisely the reason for the “cargo cult of cybersecurity”. Educating the public about modern IT security trends and technologies will be a big part of our upcoming Cybersecurity Leadership Summit, which will be held in Berlin this November, and last week my fellow analyst John Tolbert and I presented a sneak peek into this topic by dispelling several popular misconceptions.

After a lengthy discussion about choosing just five out of the multitude of topics we’ll be covering at the summit, we came up with a list of things that, on one hand, are generating enough buzz in the media and vendors’ marketing materials and, on the other hand, are actually relevant and complex enough to warrant a need to dig into them. That’s why we didn’t mention ransomware, for example, which is actually declining along with the devaluation of popular cryptocurrencies…

Artificial Intelligence in Cybersecurity

Perhaps the biggest myth about Artificial Intelligence / Machine Learning (which, incidentally, are not the same thing, even though both terms are often used interchangeably) is that it’s a cutting-edge technology that has arrived to solve all our cybersecurity woes. This couldn’t be further from the truth, though: the origins of machine learning predate digital computers. Neural networks were invented back in the 1950s and some of their applications are just as old. It’s only the recent surge in available computing power, thanks to commodity hardware and cloud computing, that has caused this triumphant entry of machine learning into so many areas of our daily lives.

In his recent blog post, our fellow analyst Mike Small has provided a concise overview of various terms and methods related to AI and ML. To his post, I can only add that applications of these methods to cybersecurity are still very much a field of academic research that is yet to mature into advanced off-the-shelf security solutions. Most products that are currently sold with “AI/ML inside” stickers on their boxes are in reality limited to the most basic ML methods that enable faster pattern or anomaly detection in log files. Only some of the more advanced ones offer higher-level functionality like actionable recommendations and improved forensic analysis. Finally, true cognitive technologies like natural language processing and AI-powered reasoning are just beginning to be adapted towards cybersecurity applications by a few visionary vendors.
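To make that distinction a bit more tangible, here is a minimal sketch of the kind of “basic ML” many such products effectively ship: unsupervised anomaly detection over a handful of features extracted from log data. The features, numbers and use of scikit-learn are purely illustrative assumptions, not a description of any particular vendor’s product.

```python
# Toy anomaly detection over log-derived features (all values are illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, distinct_hosts_accessed, megabytes_transferred]
baseline = np.array([
    [5, 2, 120], [7, 3, 90], [6, 2, 110], [4, 1, 100], [8, 3, 130],
])

# Fit a model of "normal" behavior from historical log data.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# A new observation: unusually many hosts touched and a very large transfer.
suspicious = np.array([[6, 40, 9000]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means "looks normal"
```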

It’s worth stressing, however, that such solutions will probably never completely replace human analysts, if only because of the numerous legal and ethical problems associated with decisions made by an “autonomous AI”. If anything, it is cybercriminals, unburdened by moral inhibitions, whom we will see among the earliest adopters…

Zero Trust Security

The Zero Trust paradigm is rapidly gaining popularity as a modern alternative to the traditional perimeter-based security, which can no longer provide sufficient protection against external and internal advanced cyberthreats. An IT infrastructure designed around this model treats every user, application or data source as untrusted and enforces strict security, access control, and comprehensive auditing to ensure visibility and accountability of all user activities.

However, just like with any other hyped trend, there is a lot of confusion about what Zero Trust actually is. Fueled by massive marketing campaigns by vendors trying to get into this lucrative new market, a popular misconception is that Zero Trust is some kind of a “next-generation perimeter” that’s supposed to replace outdated firewalls and VPNs of old days.

Again, this couldn’t be further from the truth. Zero Trust is above all a new architectural model, a combination of multiple processes and technologies. And although adopting the Zero Trust approach promises a massive reduction of the attack surface, reduced IT complexity, and productivity improvements, there is definitely no off-the-shelf solution that magically transforms your existing IT infrastructure.

Going Zero Trust always starts with a strategy, which must be heterogeneous and hybrid by design. It involves discovering, classifying and protecting sensitive data; redefining identities for each user and device; establishing and enforcing strict access controls to each resource; and finally, continuously monitoring and auditing every activity. And remember: you should trust no one. Especially not vendor marketing!

Insider Threat Management

Ten years ago, the riskiest users in every company were undoubtedly the system administrators. Protecting the infrastructure and sensitive data from them potentially misusing their privileged access was the top priority. Nowadays, the situation has changed dramatically: every business user who has access to sensitive corporate data can, either inadvertently or with malicious intent, cause substantial damage to your business by leaking confidential information, disrupting access to a critical system or simply draining your bank account. The most privileged users in that regard are now the CEO and the CFO, and the number of new cyberattacks targeting them specifically is on the rise.

Studies show that cyberattacks focusing on infrastructure are becoming too complex and costly for hackers, so they are turning to social engineering methods instead. One carefully crafted phishing mail can thus cause more damage than an APT attack that takes months of planning… And the best part is that the victims do all the work themselves!

Unfortunately, traditional security tools and even specialized Privileged Access Management solutions aren’t suitable for solving this new challenge. Again, the only viable strategy is to combine changes in existing business processes (especially those related to financial transactions) and a multi-layered deployment of different security technologies ranging from endpoint detection and response to email security to data loss prevention and even brand reputation management.

Continuous Authentication

Passwords are dead, biometric methods are easily circumvented, account hijacking is rampant… How can we still be sure that users are who they claim to be when they access a system or an application from anywhere in the world and from a large variety of platforms?

One of the approaches that has been growing in popularity in recent years is adaptive authentication – the process of gathering additional context information about users, their devices and other environmental factors, and evaluating it according to risk-based policies. Such solutions usually combine multiple strong authentication methods and present the most appropriate challenge to the user based on their current risk level. However, even this quite complex approach is often not sufficient to combat advanced cyberattacks.

The continuous authentication paradigm takes this to the next level. By combining dynamic context-based authentication with real-time behavioral biometrics, it turns authentication from a single event into a seamless ongoing process and thus promises to reduce the impact of a credential compromise. This way, the user’s risk score is not calculated just once during initial authentication but is constantly reevaluated over time, changing as the user moves into a different environment or reacting to anomalies in their behavior.
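As a purely illustrative sketch of that idea (the signals, weights and thresholds below are my own assumptions, not any vendor’s algorithm), a continuous authentication engine could recompute a risk score on every request and decide whether to allow it, demand a step-up, or terminate the session:

```python
# Toy continuous risk scoring: every request re-evaluates context signals.
from dataclasses import dataclass

@dataclass
class Context:
    new_device: bool            # device not seen before for this user
    impossible_travel: bool     # location change faster than physically possible
    behavior_deviation: float   # 0.0 (typical typing/mouse patterns) .. 1.0 (very unusual)

def risk_score(ctx: Context) -> float:
    score = 0.0
    if ctx.new_device:
        score += 0.4
    if ctx.impossible_travel:
        score += 0.5
    score += 0.3 * ctx.behavior_deviation
    return min(score, 1.0)

def decide(ctx: Context) -> str:
    score = risk_score(ctx)
    if score >= 0.8:
        return "terminate session"
    if score >= 0.4:
        return "step-up authentication (e.g. an MFA challenge)"
    return "allow"

# Called on every request, not just at login:
print(decide(Context(new_device=True, impossible_travel=False, behavior_deviation=0.6)))
```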

Unfortunately, this approach requires major changes in the way applications are designed, and modernizing legacy systems can be a major challenge. Another problem is the perceived invasiveness of continuous authentication – many users do not feel comfortable being constantly monitored, and in many cases such monitoring may even be illegal. Thus, although promising solutions are starting to appear on the market, continuous authentication is still far from mainstream adoption.

Embedding a Cybersecurity Culture

Perhaps the biggest myth about cybersecurity is that it takes care of itself. Unfortunately, the history of the recent large-scale cybersecurity incidents clearly demonstrates that even the largest companies with massive budgets for security tools are not immune to attacks. Also, many employees and whole business units often see security as a nuisance that hurts their productivity and would sometimes go as far as to actively sabotage it, maintaining their own “shadow IT” tools and services.

However, the most common cause of security breaches is simple negligence, stemming primarily from insufficient awareness, a lack of established processes and a general reluctance to be part of the corporate cybersecurity culture. Unfortunately, there is no technology that can fix these problems; companies must invest more resources into employee training, teaching them the basics of cybersecurity hygiene, explaining the risks of handling personal information and preparing them to respond to the inevitable security incident.

Even more important is for CISOs and other high-level executives to continuously improve their own awareness of the latest trends and developments in cybersecurity. And what better way to do that than meeting the leading experts at KuppingerCole’s Cybersecurity Leadership Summit next month? See you in Berlin!

Future-Proofing Your Cybersecurity Strategy

It’s May 25 today, and the world hasn’t ended. Looking back at the last several weeks before the GDPR deadline, I have an oddly familiar feeling. It seems that many companies have treated it as another “Year 2000 disaster” – a largely imaginary but highly publicized issue that has to be addressed by everyone before a set date, and then it’s quickly forgotten because nothing has really happened.

Unfortunately, applying the same logic to GDPR is the biggest mistake a company can make. First of all, obviously, you can only be sure that all your previous preparations actually worked after they are tested in courts, and we all hope this happens to us as late as possible. Furthermore, GDPR compliance is not a one-time event, it’s a continuous process that will have to become an integral part of your business for years (along with other regulations that will inevitably follow). Most importantly, however, all the bad guys out there are definitely not planning to comply and will double their efforts in developing new ways to attack your infrastructure and steal your sensitive data.

In other words, it’s business as usual for cybersecurity specialists. You still need to keep up with the ever-changing cyberthreat landscape, react to new types of attacks, learn about the latest technologies and stay as agile and flexible as possible. The only difference is that the cost of your mistake will now be much higher. On the other hand, the chance that your management will give you a bigger budget for security products is also somewhat bigger, and you have to use this opportunity wisely.

As we all know, the cybersecurity market is booming, with companies spending billions on it, but the net effect of this increased spending seems to be quite negligible – the number of data breaches and ransomware attacks is still going up. Is it a sign that many companies still view cybersecurity as a kind of magic ritual, a cargo cult of sorts? Or is it caused by a major skills gap, as the world simply doesn’t have enough experts to battle cybercriminals efficiently?

It’s probably both, and the key underlying factor is the simple fact that in the age of Digital Transformation, cybersecurity can no longer be a problem for your IT department only. Every employee is now constantly exposed to security threats, and humans, not computers, are now the weakest link in any security architecture. Unless everyone is actively involved, there can be no security. Luckily, we already see awareness of this fact growing steadily among developers, for example: the whole notion of DevSecOps revolves around integrating security practices into all stages of the software development and operations cycle.

However, that alone is far from enough. As business people like your CFO, not administrators, are becoming the most privileged users in your company, you have to completely rethink substantial parts of your security architecture to address the fact that a single forged email can do more harm to your business than the most sophisticated zero-day exploit. Remember, the victim is doing all the work here, so no firewall or antivirus will stop this kind of attack!

To sum it all up, a future-proof cybersecurity strategy in the “post-GDPR era” must, of course, be built upon a solid foundation of data protection and privacy by design. But that alone is not enough – only by constantly raising awareness of the newest cyberthreats among all employees and by gradually increasing the degree of intelligent automation in your daily security operations do you have a chance of staying compliant with the strictest regulations at all times.

Humans and robots fighting cybercrime together – what a time to be alive! :)

How (Not) to Achieve Instant GDPR Compliance

With mere days left till the dreaded General Data Protection Regulation comes into force, many companies, especially those not based in the EU, still haven’t quite figured out how to deal with it. As we have mentioned countless times before, the GDPR will profoundly change the way companies collect, store and process personal data of any EU resident. Both “personal data” and “processing” are defined very broadly, and processing is only considered legal if it meets a number of very strict criteria. Fines for non-compliance are massive – up to 20 million Euro or 4% of a company’s annual turnover, whichever is higher.

Needless to say, not many companies feel happy about the massive investments they’d need to make in their IT infrastructures, as well as the other costs (consulting, legal and even PR-related) of compliance. And while European businesses don’t really have any other options, quite a few companies based outside of the EU are considering pulling out of the European market completely. A number of them have even made their decision public, although we can safely assume that most would rather keep the matter quiet.

But if you really decide to erect a “digital Iron Curtain” between you and those silly Europeans with their silly privacy laws, how can you be sure it’s really impenetrable? And even if it is, is that a viable strategy at all? The easiest solution is obviously geofencing – just block all access to your website from any known European IP range. That’s something a reasonably competent network administrator can do in under an hour or so. There are even companies that will do it for you, for a monthly fee. One such service, aptly named GDPR Shield, offers a simple JavaScript snippet you only need to paste into your site’s code. Sadly, the service seems to be unavailable at the moment, probably unable to keep up with all the demand…
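For illustration only, here is roughly what such a blunt geofence could look like in a Python web application. The Flask framework, the GeoLite2 country database and its file path are all assumptions on my part, and the country list would need to be maintained by hand:

```python
# Crude geofencing sketch: reject requests whose source IP resolves to an EU country.
import geoip2.database
import geoip2.errors
from flask import Flask, abort, request

EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE", "GB",
}

app = Flask(__name__)
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # assumed local GeoIP database

@app.before_request
def block_eu_visitors():
    try:
        country = reader.country(request.remote_addr).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return  # unknown address: policy decision whether to allow or block
    if country in EU_COUNTRIES:
        abort(451)  # HTTP 451: Unavailable For Legal Reasons
```

As the rest of this post argues, a filter like this is trivially circumvented with a VPN and does nothing about EU residents travelling abroad.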

However, before you even start looking for other similar solutions, consider one point: the GDPR protects EU data subjects’ privacy regardless of their geographic location. A German citizen staying in the US and using a US-based service is, at least in theory, supposed to have the same control over their PII as back home. And even without traveling, an IP blacklist can be easily circumvented using readily available tools like VPNs. Trust me, Germans know how to use them: until recently, the majority of YouTube videos were not available in Germany because of a copyright dispute, so a VPN was needed to enjoy “Gangnam Style” or any other musical hit of the time.

On the other hand, thinking that the EU intends to track every tiniest privacy violation worldwide and then drag every offender to court is ridiculous; just consider the huge resources the European bureaucrats would need to put into a campaign of that scale. In reality, their first targets will undoubtedly be the likes of Facebook and Google – large companies whose business is built upon collecting and reselling their users’ personal data to third parties. So, unless your business is in the same market as Cambridge Analytica, you should probably reconsider the idea of blocking out European visitors – after all, you’d be turning away hundreds of millions of potential customers from one of the world’s largest economies.

Finally, the biggest mistake many companies make is to think that GDPR’s sole purpose is to somehow make their lives more miserable and to punish them with unnecessary fines. However, like any other compliance regulation, GDPR is above all a comprehensive set of IT security, data protection and legal best practices. Complying with GDPR – even if you don’t plan to do business in the EU market – is thus a great exercise that can prepare your business for some of the most difficult challenges of the Digital Age. Maybe in the same sense as a volcano eruption is a great test of your running skills, but running exercises are still quite useful even if you do not live in Hawaii.

Email Encryption Is Dead™. Or Is It?

As we all know, there is no better way for a security researcher to start a new week than to learn about another massive security vulnerability (or two!) that beats all previous ones and will surely ruin the IT industry forever! Even though I’m busy packing my suitcase and getting ready to head to our European Identity and Cloud Conference that starts tomorrow in Munich, I simply cannot but put my things aside for a moment and admire the latest one.

This time it’s about email encryption (or rather about its untimely demise). According to the EFF’s announcement, a group of researchers from German and Belgian universities has discovered a set of vulnerabilities affecting users of S/MIME and PGP – the two most popular protocols for exchanging encrypted messages over email. In a chain of rather cryptic tweets, they’ve announced that they’ll be publishing these vulnerabilities tomorrow and that there is no reliable fix for the problems they’ve discovered. Apparently, the only way to avoid leaking your encrypted emails (even ones sent in the past) to malicious third parties is to stop using these encryption tools completely.

Needless to say, this wasn’t the most elegant way to disclose such a serious vulnerability. Without concrete technical details, which we are promised not to see until tomorrow, pretty wild speculations are already making the rounds in the press. Have a look at this article in Süddeutsche Zeitung, for example: „a research team… managed to shatter one of the central building blocks of secure communication in the digital age“. What do we do now? Are we all doomed?!

Well, first of all, let’s not speculate until we get exact information about the exploits and the products that are affected and not yet fixed. However, we can try to make an educated guess based on the bits of information we already have. Apparently, the problem is not caused by a weakness in either protocol, but rather by the peculiar way modern email programs handle multipart mail messages (typically used for delivering HTML mails or messages with attachments). By carefully manipulating invisible parts of an encrypted message, an attacker may trick the recipient’s mail program into loading an external resource whose URL embeds portions of the decrypted content. Since the attacker controls the server behind that URL, the plaintext of supposedly secure messages leaks directly to them.

How to protect yourself from the exploit now? Well, the most obvious solution is not to use HTML format for sending encrypted mails. Of course, the practicality of this method in real life is debatable – you cannot force all of your correspondents to switch to plain text, especially the malicious ones. The next suggestion is to stop using encryption tools that are known to be affected (some are listed in the EFF’s article) until they are fixed. The most radical method, obviously, is to stop using email for secret communications completely and switch to a more modern alternative.

Will this vulnerability fundamentally change the way we use encrypted email in general? I seriously doubt it. Back in 2017, it was discovered that for months, Microsoft Outlook had been sending all encrypted mails with both the encrypted and unencrypted forms of the original content included. Did anyone stop using S/MIME or decide to switch to PGP? Perhaps the researchers who discovered that bug should have used more drama!

Yes, however negatively I usually think about this type of sensational journalism in IT, maybe it will have a certain positive effect if it makes more people take notice and update their tools promptly. Or maybe it will give software vendors an additional incentive to develop better, more reliable and more convenient secure communication solutions.

Azure Advanced Threat Protection: Securing Your Identities Right From the Cloud

Recently, Microsoft announced general availability of another addition to its cybersecurity portfolio: Azure Advanced Threat Protection (Azure ATP for short) – a cloud-based service for monitoring and protecting hybrid IT infrastructures against targeted cyberattacks and malicious insider activities.

The technology behind this service is actually not new. Microsoft acquired it back in 2014 with the purchase of Aorato, an Israel-based startup specializing in hybrid cloud security solutions. Aorato’s behavior detection methodology, named Organizational Security Graph, enables non-intrusive collection of network traffic, event logs and other data sources in an enterprise network and then, using behavior analysis and machine learning algorithms, detects suspicious activities, security issues and cyberattacks against corporate Active Directory servers.

Although this may sound like an overly specialized tool, in reality solutions like this can be a very useful addition to any company’s security infrastructure – after all, statistics show that the vast majority of security breaches involve compromised credentials, and close monitoring of the heart of nearly every company’s identity management – its Active Directory servers – allows for quicker identification of both known malicious attacks and traces of unknown but suspicious activities. And since practically every cyberattack involves manipulating stolen credentials at some stage of the kill chain, identifying them early allows security experts to discover these attacks much sooner than the typical 99+ days.

Back in 2016, we reviewed Microsoft Advanced Threat Analytics (ATA), the first product Microsoft released with the Security Graph technology. KuppingerCole’s verdict at the time was that the product was easy to deploy, transparent and non-intrusive, with an innovative and intuitive user interface, yet powerful enough to identify a wide range of security issues, malicious attacks and suspicious activities in corporate networks. However, the product was only intended for on-premises deployment and provided very limited forensic and mitigation capabilities due to a lack of integration with other security tools.

Well, with the new solution, Microsoft has successfully addressed both of these challenges. Azure ATP, as evident from its name, is a cloud-based service. Although you obviously still need to deploy sensors within your network to capture the network traffic and other security events, they are sent directly to the Azure cloud, and all the correlation magic happens there. This makes the product substantially more scalable and a good fit even for the largest corporate networks. In addition, it can directly consume the latest threat intelligence data collected by Microsoft across its cloud infrastructure.

On top of that, Azure ATP integrates with Windows Defender ATP – Microsoft’s endpoint protection platform. If you’re using both platforms, you can seamlessly switch between them for additional forensic information or direct remediation of malware threats on managed endpoints. In fact, the company’s Advanced Threat Protection brand now also includes Office 365 ATP, which provides protection against malicious emails and URLs, as well as secures files in Office 365 applications.

With all three platforms combined, Microsoft can now offer seamless protection against malicious attacks across the most critical attack surfaces as a fully managed cloud-based solution.

Free Tools That Can Save Millions? We Need More of These

When IT visionaries give presentations about the Digital Transformation, they usually talk about large enterprises with teams of experts working on exciting stuff like heterogeneous multi-cloud application architectures with blockchain-based identity assurance and real-time behavior analytics powered by deep learning (and many other marketing buzzwords). Of course, these companies can also afford investing substantial money into building in-depth security infrastructures to protect their sensitive data.

Unfortunately, for every such company there are probably thousands of smaller ones, which have neither the budgets nor the expertise of their larger counterparts. This means that these companies not only cannot afford “enterprise-grade” security products, they are often not even aware that such products exist or, for that matter, what problems they are facing without them. And yet, from a compliance perspective, these companies are just as responsible for protecting their customers’ personal information (or other kinds of regulated digital data) as the big ones, and they face the same harsh punishments for GDPR violations.

One area where this is especially evident is database security. Databases are still the most widespread technology for storing business information across companies of all sizes. Modern enterprise relational databases are extremely sophisticated and complex products, requiring trained specialists for their setup and daily maintenance. The number of security risks a business-critical database is exposed to is surprisingly large, ranging from the sensitive data stored in it all the way down through the application stack, storage, network and hardware. This is especially true for popular database vendors like Oracle, whose products can be found in every market vertical.

Of course, Oracle itself can readily provide a full range of database security solutions for its databases, but needless to say, not every customer can afford to spend that much, not to mention having the necessary expertise to deploy and operate these tools. The recently announced Autonomous Database can solve many of these problems by taking management tasks away from DBAs completely, but it should be obvious that, at least in the short term, this service isn’t a solution for every possible use case, so on-premises Oracle databases are not going anywhere anytime soon.

And exactly for these, the company has recently (and without much publicity) released their Database Security Assessment Tool (DBSAT) – a freeware tool for assessing the security configuration of Oracle databases and for identifying sensitive data in them. The tool is a completely standalone command-line program that does not have any external dependencies and can be installed and run on any DB server in minutes to generate two types of reports.

The Database Security Assessment report provides a comprehensive overview of configuration parameters, identifying weaknesses, missing updates, improperly configured security technologies, excessive privileges and so on. For each discovered problem, the tool provides a short summary and risk score, as well as remediation suggestions and links to the appropriate documentation. I had a chance to see a sample report, and even with my quite limited DBA skills I was able to quickly identify the biggest risks and understand which concrete actions I’d need to perform to mitigate them.

The Sensitive Data Assessment report provides a different view on the database instance, showing the schemas, tables and columns that contain various types of sensitive information. The tool supports over 50 types of such data out of the box (including PII, financial and healthcare for several languages), but users can define their own search patterns using regular expressions. Personally, I find this report somewhat less informative, although it does its job as expected. If only for executive reporting, it would be useful not just to show how many occurrences of sensitive data were found, but to provide an overview of the overall company posture to give the CEO a few meaningful numbers as KPIs.

Of course, being a standalone tool, DBSAT does not support any integrations with other security assessment tools from Oracle, nor does it provide any means for mass deployment across hundreds of databases. What it does provide is the option to export the reports into formats like CSV or JSON, which can then be imported into third-party tools for further processing. Still, even in this rather simple form, the program helps a DBA quickly identify and mitigate the biggest security risks in their databases, potentially saving the company from a breach or a major compliance violation. And as we all know, those are going to become very expensive soon.
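As a small example of such further processing, here is a hedged sketch that condenses a JSON export into a short CSV of high-risk findings for management reporting; the field names used below (“findings”, “title”, “risk”) are hypothetical, since the actual report layout may differ:

```python
# Sketch: condense an exported DBSAT JSON report into a CSV of high-risk findings.
# The JSON structure assumed here is hypothetical and may not match real exports.
import csv
import json

with open("dbsat_report.json") as f:
    report = json.load(f)

high_risk = [
    {"finding": item.get("title"), "risk": item.get("risk")}
    for item in report.get("findings", [])
    if item.get("risk") in ("High", "Critical")
]

with open("high_risk_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["finding", "risk"])
    writer.writeheader()
    writer.writerows(high_risk)

print(f"Exported {len(high_risk)} high-risk findings")
```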

Perhaps my biggest disappointment with the tool, however, has nothing to do with its functionality. Just like other companies before it, Oracle seems to be not very keen on letting the world know about tools like this. And what use is even the best security tool or feature if people do not know of its existence? Have a look at AWS, for example, where misconfigured permissions for S3 buckets have been the reason behind a large number of embarrassing data leaks. And even though AWS now offers a number of measures to prevent them, we still keep reading about new personal data leaks every week.

Spreading the word and raising awareness about the security risks and free tools to mitigate them is, in my opinion, just as important as releasing those tools. So, I’m doing my part!

Spectre and Meltdown: A Great Start Into the New Year!

Looks like we the IT people have gotten more New Year presents than expected for 2018! The year has barely started, but we already have two massive security problems on our hands, vulnerabilities that dwarf anything discovered previously, even the notorious Heartbleed bug or the KRACK weakness in WiFi protocols. Discovered back in early 2017 by several independent groups of researchers, these vulnerabilities were understandably kept from the general public to give hardware and operating system vendors time to analyze the effects and develop countermeasures for them and to prevent hackers from creating zero-day exploits.

Unfortunately, the number of patches recently made to the Linux kernel alone was enough to raise the suspicions of many security experts. This led to a wave of speculation about the possible reasons behind them: does it have something to do with the NSA? Will it make all computers in the world run 30% slower? Why is Intel’s CEO selling his stock? In the end, the researchers were forced to release their findings a week early just to put an end to the wild rumors. So, what is this all about after all?

Technically speaking, neither Meltdown nor Spectre is caused by a bug in any particular piece of software. Rather, both exploit the unforeseen side effects of speculative execution, a core feature present in most modern processors that’s used to significantly improve performance. The idea behind speculative execution is actually quite simple: every time a processor must check a condition in order to decide which part of code to run, instead of waiting until some data is loaded from memory (which may take hundreds of CPU cycles), it makes an educated guess and starts executing the next instructions immediately. If the guess later proves to be wrong, the processor simply discards those instructions and reverts its state to a previously saved checkpoint; if it was correct, the resulting performance gain can be significant. Processors have been designed this way for over 20 years, and the potential security implications of incorrect speculative execution were never considered important.

Well, not anymore. Researchers have discovered multiple methods of exploiting the side effects of speculative execution that allow malicious programs to steal sensitive data they normally should not have access to. And since the root cause of the problem lies in the fundamental design of a wide range of modern Intel, AMD and ARM processors, nearly every system using those chips is affected, including desktops, laptops, servers, virtual machines and cloud services. There is also no way to detect or block attacks using these exploits with an antivirus or any other software.

In fact, the very name “Spectre” was chosen to indicate that the problem is going to haunt us for a long time, since there is no common fix for all possible exploit methods. Any program designed according to the best security practices is still vulnerable to a carefully crafted piece of malicious code that could extract sensitive data from it by manipulating the processor state and measuring the side effects of speculative execution. There is even a proof-of-concept implementation in JavaScript, meaning that even visiting a website in a browser may trigger an attack, although popular browsers like Chrome and Firefox have already been patched to prevent it.

The only way to fully mitigate all variants of the Spectre exploit is to modify every program explicitly to disable speculative execution in sensitive places. There is some consolation in the fact that exploiting this vulnerability is quite complicated and there is no way to affect the operating system kernel this way. This cannot be said about the Meltdown vulnerability, however.

Apparently, Intel processors take so many liberties when applying performance optimizations to executed code that the same root cause gives hackers access to arbitrary system memory locations, rendering (“melting”) all memory isolation features in modern operating systems completely useless. When running on an Intel processor, malicious code can read sensitive data from any other process or from the OS kernel. In a virtualized environment, a guest process can read data belonging to the host operating system. Needless to say, this scenario is especially catastrophic for cloud service providers, where data sovereignty is not just a technical requirement, but a key legal and compliance foundation of their business model.

Luckily, there is a method of mitigating the Meltdown vulnerability completely at the operating system level, and that is exactly what Microsoft, Apple and the Linux kernel developers have been working on in recent months. Unfortunately, enforcing strict separation between kernel and user-space memory also means undoing performance optimizations that processors and OS kernels rely on to make switching between different execution modes quicker. According to independent tests, depending on the application these losses may be anywhere between 5 and 30%. Again, this may be unnoticeable to average office users, but it can be dramatic for cloud environments, where computing resources are billed by execution time. How would you like to have your monthly bill suddenly increased by 30% for… nothing, really?

Unfortunately, there is no other way to deal with this problem. The first and most important recommendation is as usual: keep your systems up-to-date with the latest patches. Update your browsers. Update your development tools. Check the advisories published by your cloud service provider. Plan your mitigation measures strategically.

And keep a cool head – conspiracy theories are fun, but not productive in any way. And by the way: Intel officially states that their CEO selling stocks in October has nothing to do with this vulnerability.

For Oracle, the Future Is Autonomous

Recently, I have attended the Oracle OpenWorld in San Francisco. For five days, the company has spared no expenses to inform, educate and (last but not least) entertain its customers and partners as well as developers, journalists, industry analysts and other visitors – in total, a crowd of over 50 thousand. As a person somewhat involved in organizing IT conferences (on a much smaller scale, of course), I could not but stand in awe thinking about all the challenges organizers of such an event had to overcome to make it successful and safe.

More important, however, was the almost unexpected thematic twist that dominated the whole conference. As I was preparing for the event, browsing the agenda and the list of exhibitors, I found way too many topics and products quite outside of my area of coverage. Although I do have some database administrator (DBA) experience, my current interests lie squarely within the realm of cybersecurity, and I wasn’t expecting to hear a lot about it. Well, I could not have been more wrong! In the end, cybersecurity was definitely one of the most prominent topics, starting right with Larry Ellison’s opening keynote.

The first and biggest announcement was the Autonomous Database – according to Oracle, the world’s first database with fully automated management. Built upon the latest Oracle Database 18c, this solution promises to completely eliminate human labor, and hence human error, thanks to complete automation powered by machine learning. This includes automated upgrades and patches, disaster recovery, performance tuning and more. In fact, an autonomous database does not expose any controls to a human administrator – it just works™. Of course, it does not replace all the functions of a DBA: a database specialist can now focus on the more interesting, business-related aspects of their job and leave the plumbing maintenance to a machine.

The offer comes with a quite unique SLA that guarantees 99.995% availability without any exceptions. And, thanks to more elastic scalability and optimized performance, “it’s cheaper than AWS” as we were told at least a dozen times during the keynote. For me however, the security implications of this offer are extremely important. Since the database is no longer directly accessible to administrators, this not only dramatically improves its stability and resilience against human errors, but also substantially reduces the potential cyberattack surface and simplifies compliance with data protection regulations. This does not fully eliminate the need for database security solutions, but at least simplifies the task quite a bit without any additional costs.

Needless to say, this announcement has caused quite a stir among database professionals: does it mean that a DBA is now completely replaced by an AI? Should thousands of IT specialists around the world fear for their jobs? Well, the reality is a bit more complicated: the Autonomous Database is not really a product, but a managed service combining the newest improvements in the latest Oracle Database release with the decade-long evolution of various automation technologies, running on the next generation of Oracle Exadata hardware platform supported by the expertise of Oracle’s leading engineers. In short, you can only get all the benefits of this new solution when you become an Oracle Cloud customer.

This is, of course, a logical continuation of Oracle’s ongoing struggle to position itself as a Cloud company. Although the company already has an impressive portfolio of cloud-based enterprise applications and it continues to invest a lot in expanding their SaaS footprint, when it comes to PaaS and IaaS, Oracle still cannot really compete with its rivals that started in this business years earlier. So, instead of trying to beat competitors on their traditional playing fields, Oracle is now focusing on offering unique and innovative solutions that other cloud service providers simply do not have (and in the database market probably never will).

Another security-related announcement was the unveiling of Oracle Security Monitoring and Analytics – a cloud-based solution that enables detection, investigation and remediation of various security threats across on-premises and cloud assets. Built upon the Oracle Management Cloud platform, this new service also focuses on solving the skills gap problem in cybersecurity by reducing the administration burden and improving the efficiency of security analysts.

Among other notable announcements are various services based on applied AI technologies like intelligent conversation bots and the newly launched enterprise-focused Blockchain Cloud Service based on the popular Hyperledger Fabric project. These offerings, combined with the latest rapid application development tools unveiled during the event as well, will certainly make the Oracle Cloud Platform more attractive not just for existing Oracle customers, but for newcomers of all sizes – from small startups with innovative ideas to large enterprises struggling to make their transition to the cloud as smooth as possible.
