Blog posts by Martin Kuppinger

Cyber Security: Why Machine Learning is Not Enough

Currently, there is a lot of talk about new analytical approaches in the field of cyber security. Anomaly detection and behavioral analytics are some of the overarching trends along with RTSI (Real Time Security Intelligence), which combines advanced analytical approaches with established concepts such as SIEM (Security Information and Event Management).

Behind all these changes and other new concepts, we find a number of buzzwords such as pattern-matching algorithms, predictive analytics, or machine learning. Aside from the fact that such terms frequently aren’t used correctly and precisely, some of the concepts have limitations by design, e.g. machine learning.

Machine learning implies that the “machine” (a piece of software) is able to “learn”. In fact, this means the machine is able to improve its results over time by analyzing the effects of previous actions and then adjusting future actions.

One of the challenges with cyber security is the fact that there are continuously new attack vectors. Some of them are just variants of established patterns; some of them are entirely new. In an ideal world, a system is able to recognize unknown vectors. Machine learning per se doesn’t – by design, the concept learns from things that have already gone wrong.

This is different from anomaly detection, which identifies unknown or changing patterns. Here, anything new is flagged as an anomaly.
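
To make the distinction concrete, here is a minimal sketch using scikit-learn on purely synthetic feature vectors (all numbers are illustrative, not taken from any real product): a supervised classifier recognizes only attacks resembling those it was trained on, while an anomaly detector models the baseline and flags any strong deviation – including a genuinely new vector.

```python
# Minimal sketch: supervised learning vs. anomaly detection on
# synthetic two-dimensional "event" features (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))       # baseline behavior
known_attack = rng.normal(loc=4.0, scale=0.5, size=(50, 2))  # one known attack pattern

# Supervised model: learns only the attack pattern it was shown.
clf = RandomForestClassifier(random_state=42).fit(
    np.vstack([normal, known_attack]),
    np.array([0] * 500 + [1] * 50),
)

# Unsupervised anomaly detection: models "normal" and flags deviations.
detector = IsolationForest(random_state=42).fit(normal)

novel_attack = np.array([[-4.0, -4.0]])  # unlike anything ever labeled
print(clf.predict(novel_attack))         # [0] – classified as benign
print(detector.predict(novel_attack))    # [-1] – flagged as an anomaly
```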

Interestingly, some of the technologies whose marketing talks about “machine learning” in fact do a lot more than ex-post-facto machine learning. Frequently, it is not a matter of technology but of the wrong use of buzzwords in marketing. In any case, customers should be wary of buzzwords: Ask the vendor what is really meant by them. And ask yourself whether the information provided by the vendor really is valid and solves your challenge.

Security and Privacy: An opportunity, not a threat

One of the lessons I have learned over the years is that it is far simpler to “sell” things by focusing on the positive aspects, instead of just explaining that risk can be reduced. This is particularly true for Information Security. It also applies to privacy as a concept. A few days ago I had a conversation about the opportunities organizations have to better sell their software or services by supporting advanced privacy features. The argument was that organizations can achieve better competitive positioning by supporting high privacy requirements.

Unfortunately, this is only partially true. It is true in areas with strong compliance regulations. It is true for the part of the customer base that is privacy-sensitive. However, it might even become an inhibitor in countries with different regulations and expectations.

There are three different groups of arguments for implementing more security and privacy in applications and services:

  1. Security and regulatory requirements – even while they must be met, these arguments are about something that must be done, with no business benefit.
  2. Competitive differentiation – an opportunity; however, as described above, that argument commonly is only relevant for certain areas and some of the potential customers. For these, it is either a must-have (regulations) or a positive argument, a differentiator (security/privacy sensitive people).
  3. Security and privacy as a means for becoming more agile in responding to business requirements. Here we are talking about positive aspects. Software and services that can be as secure as they need to be (depending on regulations or customer demand) or as open as the market requires allow organizations to react flexibly to changing requirements.

The third approach is obviously the most promising one when trying to sell your project internally as well as your product to customers.

Microsoft to offer cloud services from German datacenters

With a recent announcement, Microsoft reacts to both privacy and security concerns of customers and the continuing uncertainty around a still-pending lawsuit in the U.S. The latter is about an order Microsoft had received to turn over a customer’s emails stored in Ireland to the U.S. government.

The new data centers will operate from two locations within Germany, Frankfurt/Main and Magdeburg. They will run under the control of T-Systems, a subsidiary of Deutsche Telekom. Thus, an independent German company is acting as the data trustee, as Microsoft has named that role. Microsoft itself will not be able to access the data without the permission of customers or the data trustee, and, if permission is granted, will do so only under the trustee’s supervision.

Concretely, customers can access the Microsoft cloud services from a non-Microsoft datacenter which operates locally. They have access to the full functionality of the Microsoft cloud services, but do not work with Microsoft as a U.S.-based company.

Microsoft’s announcement is not the first of its sort. T-Systems, for example, already operates Cisco cloud services, while the Microsoft cloud services are expected to become available in the second half of 2016. VMware also works with independent service providers for delivering its cloud services.

Basically, we observe a growing trend of U.S. cloud service providers offering delivery options together with partners from other countries, to serve customer demands for privacy, security, and independence from U.S. court decisions. On one hand, U.S. cloud providers going down that path can now address their customers’ needs better. On the other hand, this provides a tremendous potential for locally operating enterprise-class cloud providers, which can act as the local partners delivering services locally. They might even combine such services with value-add services and integrations, e.g. complete offerings for medium-sized businesses covering all major enterprise functionalities from email to ERP, CRM, and other areas.

There is no doubt that such offerings will come at a price – but I’m sure that many customers will be willing to pay that price, not only in Germany and other European countries but also in many other regions worldwide that prefer relying on locally delivered, well-segregated services.

Security is part of the business. Rethink your organization for IoT and Smart Manufacturing

IoT (Internet of Things) and Smart Manufacturing are part of the ongoing digital transformation of businesses. IoT is about connected things, from sensors to consumer goods such as wearables. Smart Manufacturing, sometimes also called Industry 4.0, is about bridging the gap between business processes and production processes, i.e. manufacturing goods.

In both areas, security is a key concern. When connecting things, both the things and the central systems receiving data back from them must be sufficiently secure. When connecting business IT and operational IT (OT for Operational Technology), systems that formerly sat behind an “air gap” frequently become directly connected. The simple rule behind all this is: “Once a system is connected, it can be attacked” – via that connection. Connecting things and moving forward to Smart Manufacturing thus inevitably increases the attack surface.

Traditionally, if there is a separate security (and not only a “safety”) organization in OT, it is segregated from the (business) IT department and the Information Security and IT Security organization. For the things, there commonly is no defined security department. The apparent logical solution when connecting everything is a central security department that oversees all security – in business IT, in OT, and in things. However, this is only partially correct.

Things must be constructed following the principles of security by design and privacy by design from the very beginning. Security must not be an afterthought. Notably, this also increases agility. Thus, the people responsible for implementing security must reside in the departments creating the “things”. Security must become an integral part of the organization.

For OT, there is a common gap between the safety view in OT and the security perspective of IT. However, safety and security are not a dichotomy – we need to find ways of supporting both, in particular by modernizing the architecture of OT, well beyond security. Again, security has to be considered at every stage. Thus, security execution should also be an integral part of, for example, planning plants and production lines.

Notably, the same applies for IT. Security must not be an afterthought. It must move into the DNA of the entire organization. Software development, procurement, system management etc. all have to think about security as part of their daily work.

Simply put: Major parts of security must move into the line-of-business departments. There are some cross-functional areas, e.g. around the underlying infrastructure, that still need to be executed centrally (plus potentially service centers, e.g. for software development) – but particularly when it comes to things, security must become an integral part of R&D.

On the other hand, the new organization also needs a strong central element. While the “executive” element will become increasingly decentralized, the “legislative” and “judicial” elements must be central – across all functions, i.e. business IT, OT, and IoT. In other words: Governance, setting the guidelines and governing their correct execution, is a central task that must span and cover all areas of the connected enterprise.

Microsoft to acquire Secure Islands – a significant investment in Secure Information Sharing

Microsoft and Secure Islands today announced that Microsoft is to acquire Secure Islands. Secure Islands is a provider of automated classification for documents and of further technologies for protecting information. The company already has tight integration with Microsoft’s Azure Rights Management Services (RMS), a leading-edge solution for Secure Information Sharing.

After completing the acquisition, Microsoft plans full integration of Secure Islands’ technology into Azure RMS, which will further enhance the capabilities of the Microsoft product, in particular by enabling interception of data transfers from various sources, on-premise and in the cloud, and by automated and, where required, manual classification.

Today’s announcement confirms Microsoft’s focus on and investment in the Secure Information Sharing market, with protecting information at the source (e.g. the document) itself being one of the essential elements of any Information Security strategy. Protecting what really needs to be protected – the information – obviously is (if done right) the best strategy for Information Security, in contrast to indirect approaches such as server security or network security.

By integrating Secure Islands' capabilities directly into Microsoft Azure RMS, Microsoft now can deliver an even more comprehensive solution to its customers. Furthermore, Microsoft continues working with its Azure RMS partner ecosystem in providing additional capabilities to its customers.

Your future Security Operations Center (SOC): Not only run by yourself

There is no doubt that organizations need both a plan for what happens in case of security incidents and a way to identify such incidents. For organizations that either have high security requirements or are sufficiently large, the standard way of identifying such incidents is setting up a Security Operations Center (SOC).

However, setting up a SOC is not that easy. There are a number of challenges. The three major ones (aside from funding) are:

  1. People
  2. Integration & Processes
  3. Technology

The list is, based on our analysis, ordered by the complexity of the challenges. Clearly the biggest challenge as of today is finding the right people. Security experts are rare, and they are expensive. Furthermore, for running a SOC you not only need subject matter experts for network security, SAP security, and other areas of security. In these days of a growing number of advanced attacks, you will need people who understand the correlation of events at various levels and in various systems. These are even more difficult to find.

The second challenge is integration. A SOC does not operate independently from the rest of your organization. There is a need for technical integration into Incident Management, IT GRC, and other systems such as Operations Management for automated reactions to known incidents. Incidents must be handled efficiently and in a defined way. Beyond the technical integration, there is a need for well-thought-out processes for incident and crisis management or, as it is commonly named, Breach & Incident Response.
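
As a purely hypothetical sketch of what such technical integration can look like, the snippet below forwards a detection from SOC tooling into an incident-management system via REST; the endpoint, token, and payload schema are invented for illustration and do not refer to any specific product.

```python
# Hypothetical sketch: forwarding a SOC detection into an
# incident-management system. Endpoint, token, and schema are
# placeholders, not a real product API.
import requests

def raise_incident(alert: dict) -> str:
    """Create an incident ticket for a detected anomaly."""
    response = requests.post(
        "https://itsm.example.com/api/incidents",     # placeholder endpoint
        json={
            "title": f"Anomalous activity on {alert['source']}",
            "severity": "high" if alert["z"] > 8 else "medium",
            "details": alert,
        },
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["id"]                      # assumed response field

# Example: a spike in failed logins detected by the analytics layer.
# raise_incident({"source": "auth-server-01", "rate": 95, "z": 12.0})
```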

The third area is technology. Such technology must be adequate for today’s challenges. Traditional SIEM (Security Information and Event Management) isn’t sufficient anymore. SIEM solutions might complement other solutions, but there needs to be a strong focus on analytics and anomaly detection. From our perspective, the overarching trend goes towards what we call RTSI – Real Time Security Intelligence. RTSI is more than just a tool; it is a combination of advanced analytical capabilities and managed services.
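
To illustrate what "analytics beyond static SIEM rules" can mean in practice, here is a small sketch (event source, rates, and thresholds invented for illustration) that keeps an exponentially weighted baseline of event rates per source and raises an alert on strong deviations:

```python
# Illustrative sketch: a streaming baseline of event rates per source,
# alerting on strong deviations instead of relying on fixed SIEM rules.
import math
from collections import defaultdict

class RateBaseline:
    def __init__(self, alpha=0.05, threshold=4.0, warmup=5):
        self.alpha = alpha            # smoothing factor for the moving baseline
        self.threshold = threshold    # alert when |z-score| exceeds this
        self.warmup = warmup          # observations before alerting starts
        self.mean = defaultdict(float)
        self.var = defaultdict(lambda: 1.0)
        self.count = defaultdict(int)

    def observe(self, source, rate):
        """Update the baseline for `source`; return an alert dict or None."""
        n = self.count[source]
        self.count[source] += 1
        if n == 0:
            self.mean[source] = rate  # seed the baseline
            return None
        m, v = self.mean[source], self.var[source]
        z = (rate - m) / math.sqrt(v)
        # Exponentially weighted updates of mean and variance.
        self.mean[source] = (1 - self.alpha) * m + self.alpha * rate
        self.var[source] = (1 - self.alpha) * (v + self.alpha * (rate - m) ** 2)
        if n >= self.warmup and abs(z) > self.threshold:
            return {"source": source, "rate": rate, "z": round(z, 1)}
        return None

baseline = RateBaseline()
for minute, rate in enumerate([12, 14, 11, 13, 12, 95]):  # failed logins/minute
    alert = baseline.observe("auth-server-01", rate)
    if alert:
        print(f"minute {minute}: {alert}")  # fires on the spike to 95
```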

We see a growing demand for these solutions – I’d rather say that customers are eagerly awaiting the vendors delivering mature RTSI solutions, including comprehensive managed services. There is more demand than delivery today. Time for the vendors to act. And time for customers to move to the next level of SOCs, well beyond SIEM.

mTAN hacks: You've got SMS - and someone else your money

Do you use mTANs (mobile transaction authentication numbers) for online banking? Have you checked your bank account balance lately? Well, what happened to Deutsche Telekom customers recently has happened to others before and is likely to happen again elsewhere if online banking customers and providers don't follow even the most basic rules of IT security.

IT protection measures are smart; unfortunately, the attackers are often smarter these days. Several customers of Deutsche Telekom's mobile offering have become victims of a cunning series of frauds while banking online. The German (online) newspaper "Süddeutsche Zeitung" reported on this in detail. What led to the criminals' success was their clever approach. The whole scam somehow reminded me of the old television series Mission Impossible, only that this time the protagonists were criminals: first, the robbers hacked the bank clients' computers and installed malware - supposedly via e-mail - that sent them the online banking account numbers and passwords through the net without the PC owners' knowledge. But that wasn’t all: the hackers also went through their victims' e-mails looking for their online phone bills. Thus they were, according to an article in "Die Welt", also provided with customer IDs. Simultaneously, the thieves found out - or spied out - the mobile phone numbers of their victims, clients of various banks who all happened to have mobile phone contracts with Deutsche Telekom at the same time.

With this information in hand, the felons contacted Deutsche Telekom and pretended to be authorized dealers ("Telekom Shop") who needed to activate a substitute SIM card with the mobile number of "their" customer, since the original one had been lost or stolen. They had little trouble getting the new cards. Now they were able to receive every text message meant for the original customer. Bingo! The fraudsters could now enter their targets' bank accounts with full rights and privileges. The transfers were under way.

This sly method could provoke amazed laughter if it weren't so seriously bad. In dozens of cases the crooks withdrew five-digit amounts - in one known case 30,000 Euros - and the whole “take” is estimated at more than a million Euros. There might still be other victims whose losses simply haven't been detected so far. Deutsche Telekom at least seems convinced that this method won't work anymore in the future and that it has found safer ways to identify its retailers. But is it prepared for all the other hard-to-imagine-now methods of the future? I doubt it. After earlier mTAN hacks, providers had already made it generally more difficult to get a second SIM card: customers either have to show their passports or give a password over the phone. But if it's not Deutsche Telekom, there are other telco providers who might be tricked in the future.

Fitting security concept necessary

Where security-relevant elements like SIM cards play a vital part, a fitting security concept is absolutely necessary. The whole process and supply chain, from ordering to delivery, has to be adapted accordingly. However, there are so far no easy solutions available for online banking with mTANs that are both secure and convenient. Risk-based authentication/authorization might help banks recognize unusual user behaviour and thus request further credentials, but it is also quite limited - where there are plenty of smaller transactions, unusual behaviour can easily remain unrecognized.
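
As a sketch of the idea - and of its limits - consider a toy risk-based authorization check for a transfer; the weights and thresholds below are invented for illustration and are not taken from any real banking system:

```python
# Toy sketch of risk-based authorization for a bank transfer.
# Weights and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Transfer:
    amount_eur: float
    known_device: bool             # same SIM/device as usual?
    known_payee: bool              # payee seen before?
    country_matches_profile: bool  # consistent with the user's history?

def risk_score(t: Transfer) -> int:
    score = 0
    if t.amount_eur > 1000:
        score += 2
    if not t.known_device:
        score += 3  # a fresh substitute SIM looks exactly like this
    if not t.known_payee:
        score += 2
    if not t.country_matches_profile:
        score += 2
    return score

def decide(t: Transfer) -> str:
    s = risk_score(t)
    if s >= 5:
        return "deny-and-review"
    if s >= 3:
        return "step-up"  # request a further credential beyond the mTAN
    return "allow"

# A large transfer from an unknown device to a new payee is caught:
print(decide(Transfer(30000, False, False, True)))  # deny-and-review
# But many small transfers slip through - the limitation noted above:
print(decide(Transfer(200, True, False, True)))     # allow
```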

The challenges start with the digital certificates and the question of getting them securely from the Certificate Authority to the rightful addressee. Personal handover of e.g. a smart card would be perfect, as would - on another level - the PostIdent procedure, where one has to appear in person at the post office with an ID card before being able to use online banking. However, such processes require a bigger effort on the user's side, and they also take longer. This collides with the business models of the providers and the wishes and demands of their customers, such as quickly and conveniently getting a substitute SIM. In the end, it all comes down to balancing security needs with the demands of both customers and providers. Multi-layer security - identifying the SIM card plus the device on which the transaction is going to take place - makes mobile banking initially more inconvenient, but there is still the possibility of installing further controls to reduce the risks.

Since cybercrime has become a lucrative global industry, criminals exert a lot of effort in breaking into even the - up to the present day - seemingly most secure infrastructures. Potential victims - vendors of "things and services" as well as end consumers - should exert the same effort in trying to prevent this. At the very least, everyone should maintain state-of-the-art malware protection as well as regular (automatic) software updates and patches. Keep yourself informed: several non-profit websites provide useful information about cyber threats like phishing. It cannot be said often enough that there is no one-hundred-percent security - but for your own sake you had better try to come close. It's worth it.

VMware on cloud delivery models

In a press and analyst Q&A at VMworld Europe, Bill Fathers, Executive Vice President and General Manager Cloud Services at VMware, made a bold statement: from the VMware perspective, a network of (regional or local) service providers can fulfill customer requirements (particularly around compliance and data sovereignty) better than a single, homogeneous US entity can.

The statement was made during a discussion of the impact of the recent decision of the EuGH (Europäischer Gerichtshof, the Court of Justice of the European Union) on whether the U.S. can still be considered a “Safe Harbor”.

When I think again about the multitude of conversations with US-based cloud providers, these companies can be grouped into four segments:

  1. Some rely on other IaaS providers for delivering their PaaS or SaaS service.
  2. Some have their own US only data centers.
  3. Others have started building their own data centers in other regions such as the EU or APAC. Among these, there are again two groups: The ones that can split data per tenant, which includes most service providers focused on enterprise customers, and the ones that aren’t capable of doing that, such as search engines and “social” networks.
  4. The fourth group are providers that rely optionally or exclusively on local or regional service providers.

Some support both approach #3 (with segregated tenant data) and approach #4.

VMware, with its One Cloud approach focusing on the technology enabling the range between fully on-premise and fully cloud-based delivery models, obviously is well positioned to push model #4 – its core business is not delivering IaaS or a single SaaS solution, but the underlying technology for building such infrastructures. Furthermore, VMware also supports an approach it would call a #3+ model: using its own (or partners’) data centers running VMware-operated cloud services. These are designed to operate under local country-level legal jurisdiction and privacy laws. In Germany, for example, vCloud Air has a contract based on EU model clauses with legal and privacy addendums specific to German law, which allows for greater confidence in addressing exactly these types of challenges. VMware currently has 11 data centers worldwide running vCloud Air, each operating with this focus on local country-level legal and privacy specifications. When that still isn’t enough, VMware supports approach #4 – services powered by the vCloud Air stack but operated by VMware partners.

Obviously, relying on a network of (regional/local) service providers eases the compliance and security discussion significantly. If the service is provided locally, it is far easier to comply with the ever-changing and ever-tightening regulations. And providing multiple options to customers allows them to make decisions based on local regulations, their specific understanding of and requirements for compliance, their risk appetite, and other aspects such as pricing.

Clearly, going regional is not the only possible answer – but having that option available has a strong potential for shortening sales cycles. There is more than one answer for dealing with the regulatory challenges and customers’ concerns. Regional services operated by local service providers are one of them.

Notably, I wouldn’t rate “going regional” as a “balkanization” of the cloud market. It is not (primarily) about splitting up related data. On the contrary, data and processing most commonly move closer to the tenant, which even provides advantages regarding potential latency issues.

Finally, when it comes to understanding cloud risks when selecting cloud service providers, you might ask KuppingerCole for our standardized Cloud Risk Assessment approach.

Microsoft Azure AD B2B and B2C: Cloud IAM for managing the masses

With its recent announcement of Microsoft Azure Active Directory B2B (Business-to-Business) and B2C (Business-to-Customer/consumer/client), which are in Public Preview now, Microsoft has extended the capabilities of Azure AD (Active Directory). Detailed information is available in the Active Directory Team Blog.

There are two new services available now. One is Azure AD B2C Basic (which suggests that later there will be Azure AD B2C Premium as well). This service focuses on connecting enterprises with customers through a cloud service, allowing authentication of customers and providing access to services. The service supports social logins and a variety of other capabilities. Organizations can manage their consumers in a highly scalable cloud service, instead of implementing an on-premise service for those customers. The primary focus for now is on authenticating such users, e.g. for access to customer portals. Customers can onboard with various social logins such as Facebook or Google+, but also create their own accounts. Applications can work with Azure AD B2C based on OAuth 2.0 and OpenID Connect standards.
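
As a rough sketch of how an application might initiate sign-in against such a service via OpenID Connect, the snippet below builds an authorization-code request; the tenant name, policy name, client ID, and redirect URI are placeholder assumptions, not values from any real tenant:

```python
# Sketch of starting an OpenID Connect authorization-code flow against
# Azure AD B2C. All concrete values below are placeholder assumptions.
from urllib.parse import urlencode

tenant = "contoso.onmicrosoft.com"  # hypothetical B2C tenant
params = {
    "client_id": "11111111-2222-3333-4444-555555555555",  # placeholder
    "response_type": "code",                              # authorization-code flow
    "redirect_uri": "https://app.example.com/auth/callback",
    "scope": "openid",
    "p": "b2c_1_sign_in",           # B2C sign-in policy (assumed name)
    "state": "opaque-anti-csrf-value",
}
authorize_url = (
    f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
# The user is redirected to authorize_url, signs in (e.g. with a social
# login), and returns to redirect_uri with a code that the application
# exchanges for an ID token at the matching /token endpoint.
print(authorize_url)
```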

The second piece available now is Azure AD B2B Collaboration. This service includes a number of new capabilities allowing management of business partners and, in particular, federation with these business partners. Of particular interest is that small organizations can be invited by a company already using Azure AD B2B. These partners can then rely – for that particular business relationship – on Azure AD without additional cost.

With this initial release, a strong baseline set of features is delivered for both services. B2C, for example, supports step-up authentication, which can be triggered by applications. Some other features, such as account linking, i.e. supporting various logins of one person (e.g. Facebook and Google+) relating back to the same identity, are not yet available. However, being cloud-based services, new features will be added on a regular basis at rather short intervals.

With the new Azure AD B2B and B2C enhancements, Microsoft is extending its Azure Active Directory towards a service that is capable of supporting all use cases of organizations, whether it is employee access to cloud services; managing business partner relationships; or managing even millions of consumers in an efficient manner based on a standard service. With these new announcements, Microsoft is clearly raising the bar for its competitors in the Cloud IAM market.

Your Domain Controller in the Cloud: Azure AD Domain Services

I have a long Active Directory history. In fact, I started working with Microsoft identities way before there was an AD, back in the days of Microsoft LAN Manager, then worked with Windows NT from the early beta releases on, and the same with Windows 2000 and subsequent editions. So the news of Azure AD Domain Services caught my attention.

Aside from Microsoft Azure AD (Active Directory) - which, despite its name, is a new type of directory service without support for features such as Kerberos, NTLM, or even LDAP - Microsoft has offered Active Directory domain controllers as Microsoft Azure instances for a long time now. However, the latter are just domain controllers running on Azure instead of running on-premise.

With the new Azure AD Domain Services, Azure AD becomes a domain controller, supporting features such as the ones listed above plus group policies. Services running in an Azure Virtual Network can rely on these AD services. Thus, applications requiring AD can easily be moved to Azure and rely on Azure AD Domain Services. Furthermore, Azure AD can connect back to the on-premise AD infrastructure using Azure AD Connect. Users then can sign in to the domain using their existing credentials, while other users can be on-boarded and managed in Azure AD.
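
To illustrate what this enables, here is a minimal sketch of a classic AD-dependent application binding to such a managed domain over LDAP, using the ldap3 Python library; the domain name, account, and password are placeholder assumptions:

```python
# Minimal sketch: LDAP bind and search against an Azure AD Domain
# Services managed domain, as a legacy AD-dependent app would do.
# Domain, user, and password below are placeholder assumptions.
from ldap3 import ALL, NTLM, Connection, Server

server = Server("ldaps://contoso.example.com", use_ssl=True, get_info=ALL)
conn = Connection(
    server,
    user="CONTOSO\\jdoe",            # NetBIOS-style account (placeholder)
    password="placeholder-password",
    authentication=NTLM,
)
if conn.bind():
    # Query the managed domain just like an on-premise AD.
    conn.search(
        "dc=contoso,dc=example,dc=com",
        "(sAMAccountName=jdoe)",
        attributes=["displayName", "memberOf"],
    )
    print(conn.entries)
    conn.unbind()
```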

This announcement is great news for organizations that want to move more applications to the cloud, but struggled with AD dependencies until now. There will be concerns regarding maintaining credentials in a cloud service. On the other hand, many organizations already rely on Azure AD Connect e.g. when using Office 365 in integration with their on-premise Active Directory.

Together with other new features such as Azure AD B2B and B2C, Microsoft now offers a multitude of options for enhancing existing Active Directory environments in the cloud, supporting a broad variety of customer use cases. My rating as a long-term Active Directory guy: Cool stuff.
