Blog posts by Alexei Balaganski

Ransomware During the Pandemic Crisis

It is really astonishing how quickly the word “pandemic” has evolved from a subject of obscure computer games to the center of everyone’s daily conversations… However, when discussing the latest news about the coronavirus outbreak, one should not forget another pandemic that has been causing massive damage to businesses, governments, and individuals around the world for several years now.

Since its initial emergence in Eastern Europe about a decade ago, it has quickly evolved into one of the largest global cyberthreats, crippling hospitals and entire cities, bringing large corporations to a total halt, and costing the world billions in economic losses. We are, of course, talking about ransomware.

What is ransomware anyway?

Actually, the answer is right in the name: ransomware is a kind of malicious software designed to prevent you from accessing your computer or specific files on it until a ransom is paid to the attacker. Usually, ransomware is disguised as a legitimate document or program, and users are tricked into downloading it from a website or opening it as an email attachment.

Most modern strains of ransomware encrypt valuable files, such as office documents and images, on affected devices, while others merely lock the victims out of their computers – both, however, demand a payment to restore access.

Contrary to popular belief, ransomware attacks are not diabolically clever creations of elite hacker groups: since they don’t need to evade detection for long to achieve their goal, even novice cybercriminals can launch successful ransomware attacks with minimal resources.

Ransomware evolution

Early ransomware types were usually limited to a narrow geographical region, where attackers were able to collect their money via premium SMS messages or even prepaid cards. However, the explosive growth of anonymous cryptocurrencies like Bitcoin made them the perfect tool for much larger global extortion campaigns.

Within a few years, ransomware has become a highly lucrative business for cybercriminals, offering high reward and low risk with minimal investment. Many criminal groups even offer Ransomware-as-a-Service, where the earnings are shared between malware creators and their “affiliates”.

Things turned ugly in 2017, when several strains of ransomware appeared that utilized a highly dangerous Windows exploit – believed to have been developed by the NSA and later leaked by a hacker group – to spread across computer networks without any user interaction.

The WannaCry attack affected over 200,000 computers across 150 countries, including the entire British National Health Service. The NotPetya malware, originally targeting Ukrainian companies, spread uncontrollably around the world within days, affecting many large enterprises: the shipping company Maersk alone estimated its losses at around $300 million.

Ransomware was no longer just a lucrative criminal business: it had turned into a cyberweapon of mass destruction.

Ransomware identification

Unlike most other cyberthreats, ransomware manifests itself within minutes of the initial infection. Whether you have clicked a link to a malicious website, opened a suspicious email attachment, or were hit by a drive-by download (such as an infected online ad), by the moment you see a note on the screen telling you that your computer is blocked or your files are encrypted, the damage is usually already done, and the only thing you can do is try to minimize it.

First, don’t panic – not all such notes are a sign of real ransomware, especially if they appear in your browser. Check whether you can still switch to a different program or browse a folder with your documents. If not, you might be a victim of locker ransomware.

If you can still browse your documents, but cannot open any of them because of data corruption, it might be a sign of the worst-case scenario – your files are encrypted and the only way to get them back is to pay the ransom. At least that’s what the attacker wants you to believe.

Dealing with a ransomware attack

Whether you decide to pay the ransom or not, your first action should be disconnecting your computer from the network and external drives: you really don’t want ransomware to spread to other devices or cloud services. It is also advisable to take a photo of the ransom note – this will help identify the malware strain that hit you.

Should you pay? Most security experts recommend against it: not only is there no guarantee of getting your documents back after paying, but paying will also encourage more ransomware attacks in the future. However, if critical business records are at stake and you do not have any copies left, paying the ransom might be a sensible (even though morally questionable) option.

In any case, it cannot be stressed enough that you are not alone against the attacker: there are multiple resources that will help you identify the specific type of ransomware, tell you whether the encryption can be reversed, and provide additional guidance. Of course, every notable antivirus company offers its own tools and services for dealing with ransomware attacks as well.

However, in many cases, the only viable option left to you is to cut your losses, do a clean operating system reinstall on your device, and restore any available files from a backup. Before doing so, however, check whether your backups weren’t encrypted, too.

Finally, it’s highly recommended to submit a report to your local police. This is not just necessary for filing an insurance claim but will also help the authorities to stay on top of malware trends and might even help other victims of later attacks.

Protecting against ransomware

If the scenario above looks too grim, by now it should be clear that the most painless way of dealing with ransomware attacks is to prevent them from happening in the first place.

Arguably the most important preventive measure is to have proper backups of all your documents. A popular rule of thumb is to create three copies of your data, store them on two different media, and keep one copy off-site. And, of course, you have to actually test your backups regularly to ensure that they are still recoverable. Having an off-site backup ensures that even the most sophisticated ransomware that specifically targets backup files won’t render them useless.
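
By the way, if your backups are simple file copies, even a small script can do such a spot check for you. Below is a minimal sketch in Python (the paths are, of course, hypothetical) that compares SHA-256 checksums of your originals against their backup copies – any mismatch means the backup copy is missing or corrupted:

    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Compute the SHA-256 digest of a file in streaming fashion."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(source: Path, backup: Path) -> list:
        """Return the source files that are missing or corrupted in the backup."""
        problems = []
        for src_file in source.rglob("*"):
            if not src_file.is_file():
                continue
            bak_file = backup / src_file.relative_to(source)
            if not bak_file.exists() or sha256(src_file) != sha256(bak_file):
                problems.append(src_file)
        return problems

    # Hypothetical paths - adjust to your own environment.
    bad = verify_backup(Path("/home/user/documents"), Path("/mnt/backup/documents"))
    print(f"{len(bad)} file(s) failed verification")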

However, backups alone won’t save you from locker ransomware or from the latest trend of “ransomware doxing”, where attackers threaten to publicly reveal sensitive stolen data unless the ransom is paid. It is, therefore, crucial to keep your users (employees, colleagues, family members) constantly informed about the potential threats. They should be trained to always check the sender addresses of incoming emails and never blindly click on links or attachments. More importantly, however, they must be provided with clear, actionable guides for dealing with a ransomware attack on their computers.

Endpoint protection solutions are the primary line of defense against ransomware, but the exact capabilities vary between products. Some modern solutions rely on behavior analysis methods (sometimes powered by machine learning) to identify and block suspicious encryption-related activities before they damage your documents; others transparently keep copies of your original files and revert any malicious changes to them automatically. Even the Windows Defender antivirus that comes bundled with Windows 10 now provides built-in ransomware protection – however, you might want to check whether it is already enabled on your computer.
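
On Windows 10, this feature goes by the name of Controlled Folder Access. If you prefer checking its state programmatically over digging through the Windows Security app, here is a hedged Python sketch – note that the registry location is my assumption based on publicly documented Exploit Guard settings and may differ between Windows builds:

    import winreg  # Windows-only module from the standard library

    # Assumed registry location of the Controlled Folder Access setting;
    # verify it against Microsoft's documentation for your Windows build.
    KEY = (r"SOFTWARE\Microsoft\Windows Defender"
           r"\Windows Defender Exploit Guard\Controlled Folder Access")

    def ransomware_protection_enabled() -> bool:
        """Best-effort check whether Controlled Folder Access is switched on."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
                value, _ = winreg.QueryValueEx(key, "EnableControlledFolderAccess")
                return value == 1
        except OSError:  # key missing or access denied
            return False

    print("Controlled Folder Access enabled:", ransomware_protection_enabled())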

Keeping your operating system and critical applications up to date with security patches is another key preventive measure. Remember, the only reason WannaCry was so devastating is that so many companies had not applied a critical Windows patch released months before the attack. Besides Windows itself, applications like Internet Explorer, Adobe Flash, and Microsoft Office are notorious for having the most commonly exploited vulnerabilities.

Finally, a word about the cloud: there is a popular belief that keeping work documents in a cloud storage service like OneDrive or Dropbox is an efficient preventive measure against ransomware attacks. To be fair, there is a grain of truth in it. Most of these services have built-in versioning capabilities, allowing you to restore a previous version of a document after it gets corrupted by ransomware. Also, if your computer is locked, you can easily continue working with your document from another device (or even from a remote desktop session if your company uses a virtual desktop infrastructure).

However, these considerations only apply if you are not synchronizing your cloud files with your computer: those local copies will be compromised by ransomware and then automatically copied to the cloud in a matter of seconds. Remember, file synchronization services are not a replacement for a proper backup!

Ransomware during the pandemic crisis

Looking at the latest media reports, it seems that many workers are going to work from home for a substantial period. How does this affect overall resilience against ransomware attacks? Recently, several large cybercrime gangs have publicly promised not to target healthcare organizations during the pandemic. Also, staying away from corporate networks might substantially slow the spread of malware from one device to another.

However, security researchers are already reporting an uptick in malicious attacks exploiting coronavirus fears. Also, for every slightly altruistic cybercriminal, there are at least a thousand others without ethical reservations. For individuals working from home, especially when using personal devices not protected by enterprise-wide security tools, the risk of becoming a ransomware victim is, unfortunately, higher than ever.

As an alternative to office-based security gateways, companies should look at security solutions delivered from the cloud, especially those that do not require any additional hardware or software deployment. However, the most efficient protection against ransomware is still your own common sense: do not open unsolicited email communications, avoid clicking suspicious links and attachments, and stick to trusted websites for the latest news. Remember, your cyber hygiene is just as critical for your security as literal hygiene is for your health.


The DON’Ts of IT in the Times of Crisis

Truly we are living in interesting times (incidentally, this expression, commonly known as “the Chinese curse”, has nothing to do with China). Just a couple of weeks ago the world was watching China fighting the coronavirus outbreak as something that surely can never happen in other countries. Today Europe and the United States are facing the same crisis and we’re quickly coming to the realization that neither memes nor thoughts and prayers are going to help: many countries have already introduced substantial quarantine measures to limit social interactions and thus slow down the spread of the virus.

Suddenly, for many companies, the only sensible way to continue their business is to let everyone work from home. Naturally, the Internet is full of recommendations on things you need to do to ease this transition. For a change, I’d like to compile a short and practical list of IT- and security-related things you should avoid doing now to save yourself from regrets later… This is mostly targeted towards smaller companies that, on one hand, probably never had any plans prepared for situations like this but on the other hand can be much quicker and more flexible in actually implementing changes in their processes on such short notice. Check out my colleague John Tolbert's post if you're looking for advice for large enterprises.

Let’s start with a few general recommendations…

The pandemic is not an excuse for GDPR violations

First and foremost – don’t panic (knowing where your towel is wouldn’t hurt either)! It isn’t easy to stay calm and level-headed looking at the sensationalized media coverage from countries like Italy, but making impulsive irrational decisions is the worst possible thing to do in a crisis. This doesn’t only apply to hoarding toilet paper and pasta: if you’re considering actions like purchasing 100 laptops today to issue one to your every employee tomorrow, you might want to think twice…

Don’t think, however, that the pandemic will be a universal excuse for any potential violation of security and compliance regulations: the crisis will be over sooner or later, and GDPR or PCI DSS will still apply… Having said that, don’t blindly trust anyone’s recommendations, not even ours! This especially applies to the unscrupulous marketing activities of some vendors who might attempt to cash in on the opportunity. Only you can properly assess the risks of enabling remote access to certain types of sensitive corporate or customer data and adjust your business processes accordingly.

Last but not least, don’t try to build a virtual office for remote workers. With a handful of obvious exceptions (like, for example, accessing legacy on-prem equipment or dealing with highly regulated personal information), people working from home don’t really need to pretend to be in the office. Consider the current situation a once-in-a-lifetime opportunity to radically upgrade your business workflows. Maybe you don’t really need to clock in every employee? Are your daily morning meetings so important that you need to pay for an online collaboration platform to continue them? Again, only you can decide!

Want some more practical advice?

Security from the cloud as a modern VPN alternative

How about this: you don’t need a VPN! Seriously, if you don’t have one already, don’t even think about investing in one. VPNs are not really a modern technology; not only do they not scale for situations like this, but they also introduce gaping holes in security perimeters by giving users full access to whole corporate networks.

With multiple known vulnerabilities in VPN products, which will more than likely not be patched in time by overstressed IT teams, malicious actors will get additional opportunities to compromise your security. Instead, consider a more modern Zero Trust approach with software-defined perimeter (SDP) solutions, which enable fine-grained, authenticated, and audited access to specific internal services and applications from anywhere, without the bottleneck of a VPN. Companies like Zscaler, Akamai or CloudFlare, among others, offer such solutions completely delivered and managed from the cloud. The latter even offers its solution for free to small businesses during the pandemic emergency.

Also, if your office security still relies on a hub-and-spoke architecture with firewalls and other appliances filtering all corporate traffic, don’t forget that it leaves remote workers unprotected! This approach has long been proven to be inefficient and hard to scale, so again consider a great opportunity to switch to a cloud-delivered security solution! Whether you’ll opt for a service from Akamai, Cisco or Zscaler among other possibilities, you should choose one that does not require any network changes or software deployment to keep your employees safe working from home, even from their personal devices.

Separating work and private life in the home office

However, if you’re still not comfortable with BYOD, you don’t need to compromise! Consider a much more convenient and safer (if somewhat more expensive) enterprise mobility management solution that will maintain a secure separation between private and corporate data and apps on every employee’s device. Whether you opt for a solution from Microsoft or VMware, among others, you’ll maintain full control over security policies regardless of each worker’s current location.

You don’t need to spend additional money to stay in touch with your colleagues and business partners: you can continue using whatever online collaboration platform you’re already using. Each has its own small quirks, but in the end, GoToMeeting, WebEx, Google Hangouts, Microsoft Teams or any other tool seem to get their job done pretty well. If you are still unsure which one you prefer the most, have a look at this website: some vendors are offering special extended trials or even free versions of their tools for small businesses.

Protecting the weakest link in your security chain

Don’t forget about the human factor! Every humanitarian crisis gives rise to various social engineering attacks aimed at deceiving users into running malicious software or simply at hijacking their accounts. Unsurprisingly, security researchers already report various malicious attacks exploiting coronavirus fears. With email still being the most popular (and incidentally the least secure) communication channel for businesses, you probably already have at least some kind of email security solution in place for your employees. However, none of those are impenetrable, and people often fall victim to a simple scam that has nothing to do with malware. Educating your employees about potential risks is a good idea, but proactive protection is more important.

Thus, if you still haven’t deployed multi-factor authentication in your company, don’t wait any longer! According to multiple reports, simply enabling MFA on an online service used by your business can protect your employees from over 99% of credential-based attacks. And it does not have to be expensive, either – most notable online services, including Google, Microsoft, Salesforce or Dropbox, support a range of different authentication options.

Even the simplest one-time password generated by a smartphone app is vastly more secure than no MFA at all. For additional security across multiple online services, you may want to consider FIDO2-based authentication devices. The YubiKey is perhaps the most popular one, but Google offers its own Titan Key as well, and you can find many more FIDO-certified products on the alliance’s website.
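
There is no magic behind those smartphone apps, by the way: most of them implement the open TOTP standard (RFC 6238), deriving a short-lived code from a shared secret and the current time. A minimal Python sketch (the Base32 secret below is just a textbook example):

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period     # 30-second time step
        msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential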

Don't just look at the labels: asking the right questions

Traditional antimalware protection for each endpoint device is, of course, still important, but now that you have to consider letting your employees use their own devices for work, what is the best product you can buy? To be honest, I don’t have an easy answer – whether you opt for a “best-of-breed” endpoint protection product like Kaspersky, an integrated cloud-native protection platform like Carbon Black, or a radical AI-powered antivirus replacement like SentinelOne, don’t just look at product labels; ask vendors about supported capabilities and other concrete technical things. You might want to refer to KuppingerCole’s research like this Buyer’s Compass if you need to know which questions to ask. Check out Paul Fisher's post as well for an in-depth view of potential applications of AI in fighting the consequences of the pandemic.

In fact, don’t hesitate to reach out to us for independent, vendor-neutral guidance and support in all things related to cybersecurity. And more importantly, stay safe and healthy. Use this opportunity to relax a bit, be with your family and think of new opportunities after the crisis is over. And don’t forget to wash your hands!

Will 2020 Be the Year of Oracle Cloud?

Recently I had an opportunity to attend the Next Generation Cloud Summit, an event organized by Oracle in Seattle, WA for industry analysts to learn about the latest developments in Oracle Cloud strategy. This was Oracle’s first analyst summit in Seattle and, coincidentally, my first time in the Cloud City as well… Apparently, that has been a legitimate nickname for Seattle for a few years now, since all notable cloud service providers have a presence there, with Google and Oracle joining AWS and Microsoft at their historical home grounds by opening their cloud offices in the city.

Alas, when it comes to weather, Seattle in winter lives up to its nickname as well – it rained non-stop for all three days I spent at the event. Oh well, at least nothing distracted me from learning about and discussing the latest developments in Oracle’s cloud infrastructure, database and analytics, security, and application development portfolios. Unfortunately, some of the things I’ve learned will remain under NDA for some time, but I think that even the things we can already talk about clearly show that Oracle has finally found the right way to reinvent itself.

A veteran database technology vendor, the company has been working hard to establish itself as a prominent cloud service provider in recent years, and the struggle to bridge the cultural gap between the old-school “sealed ecosystem” approach Oracle has been so notorious for and the open and heterogeneous nature of the cloud has been very real.

A latecomer to the cloud market, the company had a unique opportunity to avoid repeating the mistakes of its older competitors and to implement its cloud infrastructure with a much higher level of security by design (at least in what Oracle refers to as the “second generation cloud”). Combined with a rich suite of business applications and the industry-leading database to power them, Oracle had all the components of a successful public cloud, but unfortunately, it took the company quite some time to figure out how to market it properly.

It was only last year that the company finally stopped trying to fight competing cloud providers on their terms with tactics like claiming that Oracle Cloud is cheaper than AWS (while that might technically be the case for some scenarios, independent tests by industry analysts usually measure cloud costs with completely different methods). Instead, it finally became clear that the company should focus on its unique differentiators and their added value for Oracle Cloud customers – such as the performance and compliance benefits of the Autonomous Database, the intelligent capabilities of the Oracle Analytics services and, of course, the cutting-edge networking technology of Oracle Cloud Infrastructure.

However, it’s the year 2020 that is going to be the decisive push for Oracle’s new cloud strategy, and the company is demonstrating its commitment with some impressive developments. First of all, by the end of this year, Oracle Cloud will expand from the current 17 regions to 36, including countries such as Israel, Saudi Arabia and Chile, to bring its services to all major markets. In addition, Oracle is expanding its interconnect program with Microsoft, increasing the number of data centers with high-speed direct connections to the Azure cloud to six. This strategic partnership with Microsoft finally makes true multi-cloud scenarios possible, where developers could, for example, deploy their frontend applications using Azure services while keeping their data in Autonomous Databases on managed Exadata servers in the Oracle Cloud.

Speaking of “autonomous”, the company is continuing to expand this brand and ultimately aims to deliver a comprehensive, highly integrated and, of course, intelligent suite of services under the Autonomous Data Platform moniker: this will include not only various flavors of the “self-driving” Oracle Database but also a range of data management services for all kinds of stakeholders, from developers and data scientists to business analysts and everyone else. Together with the Oracle Analytics Cloud, the company is aiming to provide a complete solution for all your corporate data in one place, with seamless integrations with Oracle’s own public cloud services, hybrid “at Customer” deployments, and even competitors (now rather partners) like Microsoft.

My personal favorite, however, was Oracle APEX, the company’s low-code development platform that gives mere mortals without programming skills the opportunity to quickly develop simple but useful and scalable business applications. To be fair, APEX has been an integral part of every Oracle database for over 15 years, but for a long time it has remained a kind of hidden gem used primarily by Oracle database customers (I was surprised to learn that Germany has one of the largest APEX communities, with hundreds of developers in my hometown alone). Well, now anyone can start with APEX for free without any prerequisites – you don’t even need an Oracle account for that! Still, I wish Oracle had invested a bit more in promoting tools like this outside of their existing community. I had to travel all the way to Seattle to learn about this, but at least now you don’t have to!

Of course, Oracle still has to learn quite a lot from the likes of Microsoft (how to reinvent its public image for the new generation of IT specialists) and perhaps even Apple (how to charge a premium and still make customers feel happy). But I’m pretty sure they are already on the right track to becoming a proper cloud service provider with a truly open ecosystem and a passionate community. 

The Next Best Thing After "Secure by Design"

There is an old saying that goes like this: “you can lead a horse to water, but you can’t make it drink”. Nothing personal against anyone in particular, but it seems to me that it perfectly represents the current state of cybersecurity across almost any industry. Although the cybersecurity tools are arguably becoming better and more sophisticated, and, for example, cloud service providers are constantly rolling out new security and compliance features in their platforms, the number of data breaches and hacks continues to grow. But why?

Well, the most obvious answer is that security tools, even the best ones, are still just tools. When a security feature is implemented as an optional add-on to a business-relevant product or service, someone still has to know that it exists to deploy and configure it properly and then operate and monitor it continuously, taking care of security alerts, as well as bug fixes, new features and the latest best practices.

The skills gap is real

Perhaps the most notorious example of this problem is the Simple Storage Service (better known as S3) from AWS. For over a decade, this cloud storage platform has been one of the most popular places to keep any kind of data, including the most sensitive kinds like financial or healthcare records. And even though over the years AWS has introduced multiple additional security controls for S3, the number of high-profile breaches caused by improper access configuration leaving sensitive data open to the public is still staggering. A similar reputation stain – when their database installations were exposed to the whole Internet without any authentication – still haunts MongoDB, even though the issue was fixed years ago.

Of course, every IT expert is supposed to know better and never make such disastrous mistakes. Unfortunately, to err is human, but the even bigger problem is that not every company can afford a team of such experts. The notorious skills gap is real – only the largest enterprises can afford to hire the real pros, and for smaller companies, managed security services are perhaps the only viable alternative. For many companies, cybersecurity is still a kind of cargo cult, where a purchased security tool isn’t even properly deployed or monitored for alerts.

“Secure by design” is too often not an option

Wouldn’t it be awesome if software were just secure on its own, without any effort from its users? This idea has been the foundation for the “secure by design” principles established years ago, defining various approaches to creating software that is inherently free from vulnerabilities and resilient against hacking attacks. Alas, writing properly secured software is a tedious and costly process, which in most cases does not provide any immediate ROI (with a few exceptions like space flight or highly regulated financial applications). Also, these principles do not apply well to existing legacy applications – it is very difficult to refactor old code for security without breaking a lot of stuff.

So, if making software truly secure is so complicated, what are more viable alternatives? Well, the most trivial, yet arguably still the most popular one is offering software as a managed service, with a team of experts behind it to take care of all operational maintenance and security issues. The only major problem with this approach is that it does not scale well for the same reason – the number of experts in the world is finite.

Current AI technologies lack flexibility for different challenges

The next big breakthrough that will supposedly solve this challenge is replacing human experts with AI. Unfortunately, most people tend to massively overestimate the sophistication of existing AI technologies. While they are undoubtedly much more efficient than us at automating tedious number-crunching tasks, the road towards a fully autonomous universal AI capable of replacing us in mission-critical decision making is still very long. While some very interesting narrow, security-focused AI-powered solutions already exist (like Oracle’s Autonomous Database or automated network security solutions from vendors like Darktrace), they are nowhere near flexible enough to be adapted to different challenges.

And this is where we finally get back to the statement made in this post’s title. If “secure by design” and “secure by AI” are undoubtedly the long-term goals for software vendors, what is the next best thing possible in the shorter term? My strong belief has always been that the primary reason for not doing security properly (which in the worst cases degenerates into the cargo cult mentioned above) is insufficient guidance and a lack of widely accepted best practices in every area of cybersecurity. The best security controls do not work if they are not enabled and their existence is not communicated to users.

“Secure by default” should be your short-term goal

Thus, the next best thing after “secure by design” is “secure by default”. If a software vendor or service provider cannot guarantee that their product is free of security vulnerabilities, they should at least make an effort to ensure that every user knows the full potential of existing security controls, has them enabled according to the latest best practices and, ideally, that their security posture cannot be easily compromised through misconfiguration.

What prompted me to write this blog post was the article about security defaults introduced by Microsoft for their Azure Active Directory service. These are a collection of settings that can be applied to any Azure tenant with a single mouse click, ensuring that all users are required to use multi-factor authentication, that legacy, insecure authentication protocols are no longer used, and that highly privileged administration activities are protected by additional security checks.

There isn’t really anything fancy behind this new feature – it’s just a combination of existing security controls applied according to current security best practices. It won’t protect Azure users against 100% of cyberattacks. It’s not even suitable for all users, since, if applied, it will conflict with more advanced capabilities like Conditional Access. However, protecting 95% of users against 95% of attacks is miles better than not protecting anyone. Most importantly, these settings will be applied to all new tenants as well as to existing ones whose owners have no idea about any advanced security controls.
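
If you are curious whether security defaults are already active in your own tenant, the policy is also exposed via Microsoft Graph. A minimal Python sketch, assuming you already have an access token with the appropriate read permissions (the endpoint name reflects the Graph documentation at the time of writing):

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def security_defaults_enabled(access_token: str) -> bool:
        """Read the tenant's security defaults enforcement policy from Graph."""
        resp = requests.get(
            f"{GRAPH}/policies/identitySecurityDefaultsEnforcementPolicy",
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("isEnabled", False)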

Time to vaccinate your IT now

In a way, this approach can be compared to vaccinations against a few known dangerous diseases. There will always be a few exemptions and an occasional ill effect, but the notion of population immunity applies to cybersecurity as well. Ask your software vendor or service provider for security defaults! This is the vaccination for IT.

Security vs Convenience: In the Cloud, it’s Still Your Choice and Your Responsibility

Social logins are extremely popular. Instead of going through a process of creating a new account on another website, you just click on the “Continue with Facebook” or “Sign in with Google” button and you’re in. The website in question can automatically pull the needed information like your name or photo from either service to complete your new profile. It can even ask for additional permissions like seeing your friend list or posting new content on your behalf.

When implemented correctly, following all the security and compliance checks, this enables multiple convenient functions for users. However, some applications are known to abuse user consent, asking for excessively broad permissions to illegally collect personal information, track users across websites or post spam messages. The apparent inability (or unwillingness) of companies like Facebook to put an end to this has been a major source of criticism by privacy advocates for years.

Social logins for enterprise environments? A CISO’s nightmare

When it comes to enterprise cloud service providers, however, the issue can go far beyond user privacy. As one security researcher demonstrated just a few days ago, using a similar “Sign in with Microsoft” button can lead to much bigger security and compliance problems for any company that uses Office 365 or Azure AD to manage their employees’ identities.

Even though user authentication itself can be implemented with multiple security features like multi-factor authentication, Conditional Access, and Identity Protection to ensure that a malicious actor is not impersonating your employee, the default settings for user consent in Azure Active Directory are so permissive that a Microsoft account can be used for social logins as well.

Any third-party application can easily request a user’s consent to access their mail and contacts, read any of their documents, send e-mails on their behalf, and so on. An access token issued by Microsoft to such an application is not subjected to any of the security validations mentioned above, nor does it expire automatically. If a user has access to any corporate intellectual property or deals with sensitive customer information, this creates a massive, unchecked, and easily exploitable backdoor for malicious access – or at the very least a huge compliance violation.

Even in the cloud, it’s still your responsibility

Of course, Microsoft’s own security guidance recommends disabling this feature under Azure Active Directory – Enterprise applications – User settings, but it is nevertheless enabled by default. It is also worth noting that under no circumstances is Microsoft liable for any data breaches which may occur this way: as the data owner, you’re still fully responsible for securing your information, under GDPR or any other compliance regulation.

In a way, this is exactly the same kind of problem as numerous data breaches caused by unprotected Amazon S3 buckets – even though AWS did not initially provide an on-by-default setting for data protection in their storage service, which eventually led to many large-scale data leaks, it was always the owners of this data that were held responsible for the consequences.

So, to be on the safe side, disabling the “Users can consent to apps accessing company data on their behalf” option seems to be a very sensible idea. It is also possible to still give your users a choice of consent, but only after a mandatory review by an administrator.

Unfortunately, this alone isn’t enough. You still have to check every user for potentially unsafe applications that already have access to their data. Unless your Office 365 subscription includes access to the Microsoft Cloud App Security portal, this may take a while…
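
If you want a head start on that audit without the portal, the consents themselves are exposed through Microsoft Graph as OAuth2 permission grants. A rough Python sketch, assuming a suitably privileged access token:

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def list_user_consents(access_token: str) -> None:
        """Enumerate OAuth2 permission grants (user consents) in the tenant."""
        url = f"{GRAPH}/oauth2PermissionGrants"
        headers = {"Authorization": f"Bearer {access_token}"}
        while url:  # follow @odata.nextLink pagination
            data = requests.get(url, headers=headers, timeout=10).json()
            for grant in data.get("value", []):
                # consentType "Principal" marks a grant consented by a single user
                print(grant["clientId"], grant["consentType"], grant["scope"])
            url = data.get("@odata.nextLink")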

Increase Accuracy in Demand Forecasting with Artificial Intelligence

Demand forecasting is one of the most crucial factors that determine the success of every business, online or offline, retail or wholesale. Being able to predict future customer behavior is essential for optimal purchase planning, supply chain management, reducing potential risks and improving profit margins. In some form, demand prediction has existed since the dawn of civilization, just as long as commerce itself.

Yet, even nowadays, when businesses have much more historical data available for analysis and a broad range of statistical methods to crunch it, demand forecasting is still not a hard science, often relying on expert decisions based on intuition alone. With all the hype surrounding artificial intelligence’s potential applications in just about any line of business, it’s no wonder that many experts believe it will have the biggest impact on demand planning as well.

Benefits of AI applications in demand forecasting

But what exactly are the potential benefits of this new approach as opposed to traditional methods? Well, the most obvious one is efficiency due to the elimination of the human factor. Instead of relying on intuition, machine learning-based methods operate on quantifiable data, both from the business’s own operational history and from various market intelligence that may influence demand fluctuations (like competitor activities, price changes or even weather).

On the other hand, most traditional statistical demand prediction methods were designed to approximate specific use cases: quick vs. slow fluctuations, large vs. small businesses, and so on. Selecting the right combination of those methods requires you to deal with a lot of questions you currently might not even anticipate, let alone know the right answers to. Machine learning-based business analytics solutions are known for helping companies discover previously unknown patterns in their historical data and thus for removing a substantial part of the guesswork from predictions.
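
To make this a little less abstract, here is a deliberately simplified sketch of the idea using scikit-learn: lagged sales values plus external signals (price and weather, all synthetic in this example) feed a gradient boosting model that predicts the next day’s demand. A real deployment would, of course, involve far more careful feature engineering and validation:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Synthetic daily sales history with external signals (price, weather).
    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "sales": rng.poisson(100, 365).astype(float),
        "price": rng.uniform(9, 11, 365),
        "temperature": rng.normal(15, 8, 365),
    })

    # Lag features let the model learn from recent demand patterns.
    for lag in (1, 7, 14):
        df[f"sales_lag_{lag}"] = df["sales"].shift(lag)
    df = df.dropna()

    X, y = df.drop(columns="sales"), df["sales"]
    split = int(len(df) * 0.8)  # time-ordered split: never train on the future
    model = GradientBoostingRegressor().fit(X[:split], y[:split])
    print("Forecast for the next day:", model.predict(X[split:split + 1])[0])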

Last but not least, the market already has quite a few ready-made solutions to offer, either as standalone platforms or as part of bigger business intelligence solutions. You don’t need to reinvent the wheel anymore: just connect one of those solutions to your historical data, and the rest, including multiple sources of external market intelligence, will be at your fingertips almost instantly.

What about challenges and limitations?

Of course, one has to consider the potential challenges of this approach as well. The biggest one has nothing to do with AI at all: it’s all about the availability and quality of your own data. Machine learning models require lots of input to deliver quality results, and far from every company has all this information in a form ready for sharing yet. For many, the journey towards an AI-powered future has to start with breaking down the silos and making historical data unified and consistent.

This does not apply just to sales operations, by the way. Efficient demand prediction can only work when data across all business units – including logistics, marketing, and others – can be correlated. If your (or your suppliers’, for that matter) primary analytics tool is still Excel, thinking about artificial intelligence is probably a bit premature.

A major inherent problem of many AI applications is explainability. For many, not being able to understand how exactly a particular prediction has been reached might be a major cause of distrust. Of course, this is primarily an organizational and cultural challenge, but a major challenge, nevertheless.

However, these challenges should not be seen as an excuse to ignore the new AI-based solutions completely. Artificial intelligence for demand forecasting is no longer just a theory. Businesses across various verticals are already using it with varying, but undeniably positive, results. Researchers claim that machine learning methods can achieve up to 50% better accuracy than purely statistical approaches, to say nothing of human intuition.

If your company is not ready to embrace AI yet, make sure you start addressing your shortcomings before your competitors do. In the age of digital transformation, having business processes and business data agile and available for new technologies is a matter of survival, after all. More efficient demand forecasting is just one of the benefits you’ll be able to reap afterward.

Feel free to browse our Focus Area: AI for the Future of your Business for more related content.

There Is No “One Stop Shop” for API Management and Security Yet

From what used to be a purely technical concept created to make developers’ lives easier, Application Programming Interfaces (APIs) have evolved into one of the foundations of modern digital business. Today, APIs can be found everywhere – at homes and in mobile devices, in corporate networks and in the cloud, even in industrial environments, to say nothing about the Internet of Things.

When dealing with APIs, security should not be an afterthought

In a world where digital information is one of the “crown jewels” of many modern businesses (and even the primary source of revenue for some), APIs are now powering the logistics of delivering digital products to partners and customers. Almost every software product or cloud service now comes with a set of APIs for management, integration, monitoring or a multitude of other purposes.

As often happens in such scenarios, security quickly becomes an afterthought at best or, even worse, is seen as a nuisance and an obstacle on the road to success. The success of an API is measured by its adoption, and security mechanisms are seen as friction that limits this adoption. There are also several common misconceptions around the very notion of API security, notably the idea that existing security products like web application firewalls are perfectly capable of addressing API-related risks.

An integrated API security strategy is indispensable

Creating a well-planned strategy and a reliable infrastructure for securely exposing business functionality to partners, customers, and developers is a significant challenge that has to be addressed not just at the gateway level, but along the whole information chain, from backend systems to endpoint applications. It is therefore obvious that point solutions addressing specific links in this chain are not viable in the long term.

Only by combining proactive application security measures for developers with continuous activity monitoring and deep API-specific threat analysis for operations teams, and smart, risk-based, actionable automation for security analysts, can one ensure consistent management, governance and security of corporate APIs and thus the continuity of the business processes depending on them.

Security challenges often remain underestimated

We have long recognized the API Economy as one of the most important current IT trends. Rapidly growing demand for exposing and consuming APIs, which enables organizations to create new business models and connect with partners and customers, has tipped the industry towards adopting the lightweight RESTful APIs that are commonly used today.

Unfortunately, many organizations tend to underestimate the potential security challenges of opening up their APIs without a security strategy and infrastructure in place. Popular emerging technologies such as the Internet of Things or Software Defined Computing Infrastructure (SDCI), which rely significantly on API ecosystems, are also bringing new security challenges with them. New distributed application architectures, like those based on microservices, are introducing their own share of technical and business problems as well.

KuppingerCole’s analysis is primarily looking at integrated API management platforms, but with a strong focus on security features either embedded directly into these solutions or provided by specialized third party tools closely integrated with them.

The API market has changed dramatically within just a few years

When we started following the API security market over 5 years ago, the industry was still at a rather early, emerging stage, with most large vendors focusing primarily on operational capabilities, offering only rudimentary threat protection functions built into API management platforms, while dedicated API security solutions were almost non-existent. In just a few years, the market has changed dramatically.

On one hand, the core API management capabilities are quickly becoming almost a commodity, with, for example, every cloud service provider offering at least some basic API gateway functionality built into their cloud platforms, utilizing their native identity management, monitoring, and analytics capabilities. Enterprise-focused API management vendors are therefore looking to expand the coverage of their solutions to address new business, security or compliance challenges. Some more future-minded vendors no longer even consider API management a separate discipline within IT and offer their existing tools as part of larger enterprise integration platforms.

On the other hand, the general public’s growing awareness of API security challenges has dramatically increased the demand for specialized tools for securing existing APIs. This has led to the emergence of numerous security-focused startups offering innovative solutions, usually within a single area of the API security discipline.

Despite consolidation, there is no “one stop shop” for API security yet

Unfortunately, the field of API security is very broad and complicated, and very few (if any) vendors are currently capable of delivering a comprehensive security solution that could cover all required functional areas. Although the market is already showing signs of undergoing consolidation, with larger vendors acquiring these startups and incorporating their technologies into existing products, expecting to find a “one stop shop” for API security is still a bit premature.

Although the current state of the API management and security market is radically different from the situation just a few years ago, and the overall developments are extremely positive – indicating growing demand for more universal and convenient tools and increasing quality of available solutions – it has yet to reach anything resembling maturity. Thus, it is even more important for companies developing their API strategies to be aware of the current developments and to look for solutions that implement the required capabilities and integrate well with other existing tools and processes.

Hybrid deployment model is the only flexible and future-proof security option

Since most API management solutions are expected to provide management and protection for APIs regardless of where they are deployed – on-premises, in any cloud or within containerized or serverless environments – the very notion of the delivery model becomes complicated.

Most API management platforms are designed to be loosely coupled, flexible, scalable and environment-agnostic, with a goal to provide consistent functional coverage for all types of APIs and other services. While the gateway-based deployment model remains the most widespread, with API gateways deployed either closer to existing backends or to API consumers, modern application architectures may require alternative deployment scenarios like service meshes for microservices.

Dedicated API security solutions that rely on real-time monitoring and analytics may either be deployed in-line, intercepting API traffic, or rely on out-of-band communications with API management platforms. However, management consoles, developer portals, analytics platforms and many other components are usually deployed in the cloud to enable a single-pane-of-glass view across heterogeneous deployments. A growing number of additional capabilities are now being offered as Software-as-a-Service with consumption-based licensing.

In short, for a comprehensive API management and security architecture a hybrid deployment model is the only flexible and future-proof option. Still, for highly sensitive or regulated environments customers may opt for a fully on-premises deployment.

Required Capabilities

In our upcoming Leadership Compass on API Management and Security, we evaluate products according to multiple key functional areas of API management and security solutions. These include core API Lifecycle Management capabilities, flexibility of Deployment and Integration, developer engagement via Developer Portal and Tools, strength and flexibility of Identity and Access Control, API Vulnerability Management for proactive hardening of APIs, Real-time Security Intelligence for detecting ongoing attacks, Integrity and Threat Protection means for securing the data processed by APIs and, last but not least, each solution’s Scalability and Performance.

The preliminary results of our comparison will be presented at our Cybersecurity Leadership Summit, which will take place next week in Berlin.

Can Your Antivirus Be Too Intelligent Sometimes?

Current and future applications of artificial intelligence (or should we rather stick to a more appropriate term “Machine Learning”?) in cybersecurity have been one of the hottest discussion topics in recent years. Some experts, especially those employed by anti-malware vendors, see ML-powered malware detection as the ultimate solution to replace all previous-generation security tools. Others are more cautious, seeing great potential in such products, but warning about the inherent challenges of current ML algorithms.

One particularly egregious example of “AI security gone wrong” was covered in an earlier post by my colleague John Tolbert. In short, to reduce the number of false positives produced by an AI-based malware detection engine, developers have added another engine that whitelisted popular software and games. Unfortunately, the second engine worked a bit too well, allowing hackers to mask any malware as innocent code just by appending some strings copied from a whitelisted application.
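
To illustrate just the failure mode (this is a toy example in Python, not the vendor’s actual engine), consider what happens once a secondary whitelisting engine is allowed to override the model’s verdict based on the mere presence of known-good strings:

    # Toy illustration of a whitelist override gone wrong - not any vendor's
    # real engine, just the failure mode described above.
    WHITELISTED_STRINGS = {"GameEngine v2.1", "Copyright Popular Studio"}

    def ml_score(binary: bytes) -> float:
        """Stand-in for the ML engine: a crude heuristic on one byte pattern."""
        return 0.9 if b"encrypt_all_files" in binary else 0.1

    def verdict(binary: bytes) -> str:
        # Second engine: enough whitelisted strings override the model entirely.
        matches = sum(s.encode() in binary for s in WHITELISTED_STRINGS)
        if matches >= 2:
            return "clean"  # the override wins, whatever the model says
        return "malware" if ml_score(binary) > 0.5 else "clean"

    malware = b"payload: encrypt_all_files()"
    print(verdict(malware))  # -> malware
    # Appending strings copied from a whitelisted game flips the verdict:
    print(verdict(malware + b" GameEngine v2.1 Copyright Popular Studio"))  # -> clean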

Such cases – where bold marketing claims contradict not just common sense but reality itself, forcing engineers to fix their ML models’ shortcomings with clumsy workarounds – are hopefully not particularly common. However, every ML-based security product faces the same challenge: whenever a particular file triggers a false positive, there is no way to tell the model to just stop it. After all, machine learning is not based on rules; you have to feed the model with lots of training data to gradually guide it to a correct decision, and re-labeling just one sample is not enough.

This is exactly the problem the developers of the Dolphin Emulator have recently faced: for quite some time, every build of their application has been flagged by Windows Defender as malware based on Microsoft’s AI-powered behavior analysis. Every time the developers would submit a report to Microsoft, it would be dutifully added to the application whitelist, and the case would be closed. Until the next build with a different file hash is released.

Apparently, the way this cloud-based ML-powered detection engine is designed, there is simply no way to fix a false positive once and for all future builds. However, Microsoft obviously does not want to make the same mistake as Cylance and inadvertently whitelist too much, creating potential false negatives. Thus, the developers and users of the Dolphin Emulator are left with only one option: submit more and more false-positive reports and hope that sooner or later the ML engine will “change its mind” on the issue.

Machine learning-enhanced security tools are supposed to eliminate tedious manual labor for security analysts; however, this issue shows that sometimes just the opposite happens. Antimalware vendors, application developers, and even users must do more work to overcome this ML interpretation problem. Yet, does it really mean that incorporating machine learning into an antivirus was a mistake? Of course not – but giving so much authority to an ML engine that is, in a sense, incapable of explaining its decisions and does not react well to criticism probably was.

Potential solutions for these shortcomings do exist, the most obvious being the ongoing work on making machine learning models more explainable – giving insights into the way they make decisions on particular data samples instead of presenting themselves to users as a kind of black box. However, we have yet to see commercial solutions based on this research. In the future, a broader approach towards the “artificial intelligence lifecycle” will surely be needed, covering not just developing and debugging models, but stretching from initial training data management all the way up to the ethical and legal implications of AI.

By the way, we’re going to discuss the latest developments and challenges of AI in cybersecurity at our upcoming Cybersecurity Leadership Summit in Berlin. Looking forward to meeting you there! If you want to read up on Artificial Intelligence and Machine Learning, be sure to browse our KC+ research platform.

Do You Need a Chief Artificial Intelligence Officer?

Well, if you ask me, the short answer is – why not? After all, companies around the world have a long history of employing people with weird titles ranging from “Chief Happiness Officer” to “Galactic Viceroy of Research Excellence”. A more reasonable response, however, would need to take one important thing into consideration: what would a CAIO’s job in your organization actually be?

There is no doubt that “Artificial Intelligence” has already become an integral part of our daily lives, both at home and at work. In just a few years, machine learning and other technologies that power various AI applications evolved from highly complicated and prohibitively expensive research prototypes to a variety of specialized solutions available as a service. From image recognition and language processing to predictive analytics and intelligent automation - a broad range of useful AI-powered tools is now available to everyone.

Just like the cloud a decade ago (and Big Data even earlier), AI is universally perceived as a major competitive advantage, a solution for numerous business challenges and even as an enabler of new revenue streams. However, does it really imply that every organization needs an “AI strategy” along with a dedicated executive to implement it?

Sure, there are companies around the world that have made AI a major part of their core business: cloud service providers, business intelligence vendors, or large manufacturing and logistics companies – for them, AI is a core competency or even a revenue-generating product itself. For the rest of us, however, AI is just another toolkit, powerful and convenient, for addressing specific business challenges.

Whether your goal is to improve the efficiency of your marketing campaigns, optimize equipment maintenance cycles, or make your IT infrastructure more resilient against cyberattacks – a sensible strategy to achieve such a goal never starts with picking a single tool. Hiring a highly motivated AI specialist to tackle these challenges would have exactly the opposite effect: armed with a hammer, a person is inevitably going to treat every problem as if it were a nail.

This, of course, by no means implies that companies should not hire AI specialists. However, just like AI itself was never intended to replace humans, “embracing the AI” should not overshadow the real business goals. We only need to look at Blockchain for a similar story: just a couple of years ago, adding a Blockchain to any project seemed like a sensible goal regardless of any potential practical gains. Today, the technology has already passed the peak of inflated expectations, and it finally seems that the fad is transitioning to the productive phase, at least in those usage scenarios where the lack of reliable methods of establishing distributed trust was indeed a business challenge.

Another aspect to consider is the sheer breadth of the AI frontier, both from the AI expert’s perspective and from the point of view of a potential user. Even within such a specialized application area as cybersecurity, the choice of available tools and strategies can be quite bewildering. Looking at the current AI landscape as a whole, one cannot help but realize that it encompasses many complex and quite unrelated technologies and problem domains. Last but not least, consider the new problems that AI itself is creating: many of those lie well outside of the technology scope and come with social, ethical or legal implications.

In this regard, coming up with a single strategy that is supposed to incorporate so many disparate factors and can potentially influence every aspect of a company’s core business goals and processes seems like a leap of faith that not many organizations are ready to make just yet. Maybe a more rational approach towards AI is the same as with the cloud or any other new technology before it: identify the most important challenges your business is facing, set reasonable goals, find the experts who can help identify the most appropriate tools for achieving them, and work together on delivering tangible results. Even better if you can collaborate with (probably different) experts on outlining a long-term AI adoption strategy that ensures your individual projects and investments align with each other, avoiding wasted time and resources. In other words: Think Big, Start Small, Learn Fast.

If you liked this text, feel free to browse our Artificial Intelligence focus area for more related content.

Meet the Next-Generation Oracle

Oracle OpenWorld 2019 wrapped up yesterday, and if there is a single word that can describe my impressions of it, it would be “different”. Immediately noticeable was the absence of the traditional Oracle Red spilling into the streets around the Moscone Center in San Francisco, and the reason behind it is the new corporate design system called Redwood. You can already see its colors and patterns applied to the company’s website, but more importantly, it defines new UI controls for Oracle applications and cloud services.

Design, however, is far from Oracle’s biggest change. It appears that the company has finally reached the stage where a radical cultural shift is inevitable. To adapt to the latest market challenges and to extend its reach towards new customer demographics, Oracle needs to seriously reconsider many of its business practices, just like Microsoft did years ago. And looking at the announcements of this year’s OOW, the company is already making major strides in the right direction.

It’s an open secret that for years, Oracle has been struggling to position itself as one of the leading cloud service providers. Unfortunately, for a latecomer to this market, playing catch-up with more successful competitors is always a losing strategy. It took the company some time to realize that, and now Oracle is trying a different game: learning from others’ mistakes, understanding the challenges and requirements of modern enterprises, and in the end offering a lean, yet complete stack of cloud services that provide the highest level of performance, comprehensive security and compliance controls and, last but not least, intelligent automation for any business process.

The key concept in this vision for Oracle is “autonomy”. To eliminate human labor from cloud management is to eliminate human error, thus removing the most common cause of data breaches. Last year, we saw the announcement of the self-patching and self-tuning Autonomous Database. This time, Autonomous Linux was presented – an operating system that can update itself (including kernel patches) without downtime. It seems that the company’s strategic vision is to make every service in its cloud autonomous in the same sense. Combined with the Generation 2 cloud infrastructure designed specifically to eliminate many network-based attack vectors, this lends additional weight to Oracle’s claim of having a cloud ready to run the most business-critical workloads.

Oracle Data Safe was announced as well: a cloud-based service that improves Oracle database security by identifying risky configurations, users, and sensitive data, allowing customers to closely monitor user activities and ensure data protection and compliance for their cloud databases. Oracle cloud databases now include a straightforward, easy-to-use, and free service that helps customers protect their sensitive data from security threats and compliance violations.

It is also worth noting that the company is finally starting to think “outside of the box” with regard to its business strategy as well, or rather outside of the “Oracle ecosystem” bubble. Strategic partnerships with Microsoft (to establish low-latency interconnections between Azure and Oracle Cloud datacenters) and VMware (to allow businesses to lift and shift their entire VMware stacks to the cloud while maintaining full control over them – impossible in other public clouds) demonstrate this major paradigm shift in the company’s cloud roadmap.

Even more groundbreaking, arguably, is the introduction of the new Always Free tier for cloud services – which is exactly what it says on the tin: an opportunity for every developer, student or even corporate IT worker to use Autonomous Databases, virtual machines, and other core cloud infrastructure services for an unlimited time. Of course, the offer is restricted by allocated resources, but all functional benefits are still there, and not just for testing. Hopefully, Oracle will soon start promoting these tools outside of Oracle events as well. Seen any APEX evangelists around recently?

