IoT (Internet of Things) and Smart Manufacturing are part of the ongoing digital transformation of businesses. IoT is about connected things, from sensors to consumer goods such as wearables. Smart Manufacturing, sometimes also called Industry 4.0, is about bridging the gap between business processes and production processes, i.e. the manufacturing of goods.
In both areas, security is a key concern. When connecting things, both the things and the central systems receiving data back from them must be sufficiently secure. When connecting business IT and operational IT (OT, for Operational Technology), systems that were formerly behind an “air gap” frequently become directly connected. The simple rule behind all this is: “Once a system is connected, it can be attacked” – via that connection. Connecting things and moving towards Smart Manufacturing thus inevitably increases the attack surface.
Traditionally, if there is a separate security (and not only a “safety”) organization in OT, this is segregated from the (business) IT department and the Information Security and IT Security organization. For the things, there commonly is no defined security department. The logical solution when connecting everything apparently is a central security department that oversees all security – in business IT, in OT, in things. However, this is only partially correct.
Things must be constructed following the principles of security by design and privacy by design from the very beginning. Security must not be an afterthought. Notably, this also increases agility. Thus, the people responsible for implementing security must reside in the departments creating the “things”. Security must become an integral part of the organization.
For OT, there is a common gap between the safety view in OT and the security perspective of IT. However, safety and security are not a dichotomy – we need to find ways of supporting both, in particular by modernizing the architecture of OT, well beyond security. Again, security has to be considered at every stage. Thus, its execution should also be an integral part of, e.g., planning plants and production lines.
Notably, the same applies for IT. Security must not be an afterthought. It must move into the DNA of the entire organization. Software development, procurement, system management etc. all have to think about security as part of their daily work.
Simply said: Major parts of security must move into the line of business departments. There are some cross-functional areas e.g. around the underlying infrastructure that still need to be executed centrally (plus potentially service centers e.g. for software development etc.) – but particularly when it is about things, security must become an integral part of R&D.
On the other hand, the new organization also needs a strong central element. While the “executive” element will become increasingly decentralized, the “legislative” and “judicial” elements must be central – across all functions, i.e. business IT, OT, and IoT. In other words: governance, setting the guidelines and governing their correct execution, is a central task that must span and cover all areas of the connected enterprise.
Microsoft and Secure Islands today announced that Microsoft is to acquire Secure Islands. Secure Islands is a provider of automated classification for documents and further technologies for protecting information. The company already has tight integration into Microsoft’s Azure Rights Management Services (RMS), a leading-edge solution for Secure Information Sharing.
After completing the acquisition, Microsoft plans full integration of Secure Islands’ technology into Azure RMS, which will further enhance the capabilities of the Microsoft product, in particular by enabling interception of data transfer from various sources on-premise and in the cloud, and by automated and, if required, manual classification.
Today’s announcement confirms Microsoft's focus on and investment in the Secure Information Sharing market, with protecting information at the information source (e.g. the document) itself being one of the essential elements of any Information Security strategy. Protecting what really needs to be protected – the information – obviously (if done right) is the best strategy for Information Security, in contrast to indirect approaches such as server security or network security.
By integrating Secure Islands' capabilities directly into Microsoft Azure RMS, Microsoft now can deliver an even more comprehensive solution to its customers. Furthermore, Microsoft continues working with its Azure RMS partner ecosystem in providing additional capabilities to its customers.
There is no doubt that organizations need both a plan for what happens in case of security incidents and a way to identify such incidents. For organizations that either have high security requirements or are sufficiently large, the standard way of identifying such incidents is setting up a Security Operations Center (SOC).
However, setting up a SOC is not that easy. There are a number of challenges. The three major ones (aside from funding) are:
- People
- Integration & Processes
- Technology
The list is, from our analysis, ordered according to the complexity of the challenges. Clearly the biggest challenge as of today is finding the right people. Security experts are rare, and they are expensive. Furthermore, for running a SOC you not only need subject matter experts for network security, SAP security, and other areas of security. In these days of a growing number of advanced attacks, you will also need people who understand the correlation of events at various levels and in various systems. These are even more difficult to find.
The second challenge is integration. A SOC does not operate independently from the rest of your organization. There is a need for technical integration into Incident Management, IT GRC, and other systems such as Operations Management for automated reactions to known incidents. Incidents must be handled efficiently and in a defined way. Beyond the technical integration, there is a need for well thought-out processes for incident and crisis management or, as it is commonly named, Breach & Incident Response.
The third area is technology. Such technology must be adequate for today’s challenges. Traditional SIEM (Security Information and Event Management) isn’t sufficient anymore. SIEM solutions might complement other solutions, but there needs to be a strong focus on analytics and anomaly detection. From our perspective, the overarching trend goes towards what we call RTSI - Real Time Security Intelligence. RTSI is more than just a tool, it is a combination of advanced analytical capabilities and managed services.
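To make the difference between static rule matching and analytics more tangible, here is a minimal anomaly-detection sketch in the spirit of such tools – the event counts, the z-score method, and the threshold are illustrative assumptions, not features of any specific RTSI product:

```python
from statistics import mean, stdev

def detect_anomalies(event_counts, threshold=3.0):
    """Return the indices of intervals whose event count deviates
    from the baseline by more than `threshold` standard deviations."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly uniform activity: nothing stands out
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# Hourly counts of failed logins; the spike in the last interval is flagged.
counts = [12, 9, 11, 10, 13, 8, 11, 10, 9, 12, 10, 250]
print(detect_anomalies(counts))  # [11] - the index of the spike
```

Real RTSI platforms replace the simple z-score with machine-learning models and correlate events across many sources and system layers, but the principle – learning a baseline and flagging deviations – is the same.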
We see a growing demand for these solutions – I’d rather say that customers are eagerly awaiting the vendors delivering mature RTSI solutions, including comprehensive managed services. There is more demand than delivery today. Time for the vendors to act. And time for customers to move to the next level of SOCs, well beyond SIEM.
Do you use mTANs (mobile transaction authentication numbers) for online banking? Have you checked your bank account balance lately? Well, what happened to Deutsche Telekom customers recently has happened to others before and is likely to happen again elsewhere if online banking customers and providers don't follow even the most basic rules of IT security.
IT protection measures are smart; unfortunately, the attackers are often smarter these days: several customers of Deutsche Telekom's mobile offering have become victims of a cunning fraud series while banking online. The German (online) newspaper "Süddeutsche Zeitung" reported about this in detail. What made the criminals successful was their clever approach. The whole scam somehow reminded me of the old television series Mission Impossible, only that this time the protagonists were criminals: first, the robbers hacked the bank clients' computers and installed malware - supposedly via e-mail - that sent them the numbers of the online banking accounts and the passwords without the PC owners' knowledge. But that wasn't all: the hackers also went through their victims' e-mails looking for online phone bills. Thus they were, according to an article in "Die Welt", also provided with customer IDs. Simultaneously, the thieves found out - or spied out - the mobile phone numbers of their victims: clients of various banks who all happened to have mobile phone contracts with Deutsche Telekom at the same time.
With this information in hand the felons contacted Deutsche Telekom and pretended to be authorized dealers ("Telekom Shop") who needed to activate a substitute SIM card with the mobile number of "their" customer since the original one had been lost or stolen. They had more or less no problems with getting the new cards. Now they were able to receive every text message meant for the original customer. Bingo! The fraudsters could now enter their target's full bank account with all rights and privileges. Transfer in operation.
This sly method might elicit an amazed laugh if it weren't so seriously damaging. In dozens of cases the crooks withdrew five-digit amounts - in one known case 30,000 Euro - and the whole “take” is estimated at more than a million Euro. There may be further victims who simply haven't been detected so far. Deutsche Telekom at least seems convinced that the burglars' method won't work anymore and that safer ways to identify retailers have been found. But are they prepared for all the other hard-to-imagine methods of the future? I doubt it. After earlier mTAN hacks, providers had already made it generally more difficult to get a second SIM card: customers either have to show their passports or give a password over the phone. But if it's not Deutsche Telekom, there are other telco providers who might be tricked in the future.
Fitting security concept necessary
Where security-relevant elements like SIM cards play a vital part, a fitting security concept is absolutely necessary. The whole process and supply chain, from ordering to delivery, has to be adapted accordingly. However, there are so far no easy solutions available for online banking with mTANs that are both secure and comfortable. Risk-based authentication/authorization might help banks recognize unusual user behaviour and then request further credentials, but this is also quite limited: where there are plenty of smaller transactions, unusual behaviour can easily remain unrecognized.
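A toy sketch may help illustrate the risk-based approach – the signals, weights, and threshold below are invented for this example and are not taken from any bank's actual scoring model:

```python
def transaction_risk(amount_eur, new_payee, foreign_ip, night_time):
    """Accumulate a risk score from simple, weighted signals."""
    score = 0.0
    if amount_eur > 1000:
        score += 0.4   # unusually large amount
    if new_payee:
        score += 0.3   # payee never seen before
    if foreign_ip:
        score += 0.2   # session from an unexpected location
    if night_time:
        score += 0.1   # outside the customer's usual hours
    return score

def requires_step_up(score, threshold=0.5):
    """Above the threshold, the bank would request further credentials."""
    return score >= threshold

# A large transfer to an unknown payee trips the threshold...
print(requires_step_up(transaction_risk(5000, new_payee=True,
                                        foreign_ip=False, night_time=False)))  # True
# ...while a small transfer stays below it - exactly the blind spot
# for many smaller transactions noted above.
print(requires_step_up(transaction_risk(200, new_payee=False,
                                        foreign_ip=False, night_time=True)))   # False
```

The second call shows the limitation: a fraudster splitting the theft into many small, ordinary-looking transfers never accumulates enough risk signals to trigger a step-up request.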
The challenges start with the digital certificates and the question of getting them securely from the Certificate Authority to the rightful addressee. Personal handover of, e.g., a smart card would be perfect, as would - on another level - a postal identification procedure, where one has to appear in person at the post office with an ID card before being able to use online banking. However, such processes require a bigger effort on the user side and they also take longer. This collides with the business models of the providers and the wishes and demands of their customers, such as quickly and comfortably getting a substitute SIM. In the end, however, it all depends on balancing security needs with the demands of both customers and providers. Multi-layer security - identifying the SIM card plus the device on which the transaction is going to take place - makes mobile banking initially more inconvenient, but there is still the possibility of installing further controls to reduce the risks.
Since online fraud has become a lucrative global industry, criminals exert a lot of effort in breaking into even the - up to the present day - seemingly most secure infrastructures. Potential victims - vendors of "things and services" as well as end consumers - should exert the same effort in trying to prevent this. At the very least, everyone should ensure state-of-the-art malware protection as well as regular (automatic) software updates and patches. Keep yourself informed: several non-profit websites provide useful information about cyber threats like phishing, e.g. this one. It cannot be said often enough that there is no one hundred percent security - but for your own sake you had better try to come close. It's worth it.
In a press and analyst Q&A at VMworld Europe, Bill Fathers, Executive Vice President and General Manager Cloud Services at VMware, made a bold statement: from the VMware perspective, a network of (regional or local) service providers can fulfill customer requirements (particularly around compliance and data sovereignty) better than a single, homogeneous US entity can.
The statement was made during a discussion of the impact of the recent decision of the EuGH (Europäischer Gerichtshof, the European Court of Justice) on whether the U.S. can still be considered a “Safe Harbor”.
When I think back on the multitude of conversations with US-based cloud providers, these companies can be grouped into four segments:
- Some rely on other IaaS providers for delivering their PaaS or SaaS service.
- Some have their own US only data centers.
- Others have started building their own data centers in other regions such as the EU or APAC. Among these, there are again two groups: The ones that can split data per tenant, which includes most service providers focused on enterprise customers, and the ones that aren’t capable of doing that, such as search engines and “social” networks.
- The fourth group consists of providers that rely, optionally or exclusively, on local or regional service providers.
Some providers support both approach #3 (with segregated tenant data) and approach #4.
VMware, with its One Cloud approach focusing on the technology enabling the range between fully on-premise and fully cloud-based delivery models, obviously is well-positioned to push model #4 – their core business is not delivering IaaS or a single SaaS solution, but the underlying technology for building such infrastructures. Furthermore, VMware also supports an approach they’d call a #3+ model: they use their own (or partnered) data centers running VMware-operated cloud services. These are designed to operate under local country-level legal jurisdiction and privacy laws. E.g. in Germany, vCloud Air has a contract based on EU model clauses with legal and privacy addendums specific to German law, which allows for greater confidence in addressing exactly these types of challenges. VMware currently has 11 data centers worldwide running vCloud Air, each operating with this focus on local country-level legal and privacy specifications. When that still isn’t enough, VMware supports approach #4 – services powered by the vCloud Air stack but operated by VMware partners.
Obviously, relying on a network of (regional/local) service providers eases the compliance and security discussion significantly. If the service is provided locally, it is far easier to comply with the ever-changing and ever-tightening regulations. And providing multiple options to customers allows them to make decisions based on local regulations, their specific understanding of and requirements for compliance, their risk appetite, and other aspects such as pricing.
Clearly, going regional is not the only possible answer – but having that option available has a strong potential for shortening sales cycles. There is more than one answer for dealing with the regulatory challenges and customers concerns. Regional services operated by local service providers is one of them.
Notably, I wouldn’t rate “going regional” as a “balkanization” of the cloud market. It is not (primarily) about splitting up related data. On the contrary, data and processing most commonly move closer to the tenant, which even provides advantages regarding potential latency issues.
Finally, when it comes to understanding cloud risks when selecting cloud service providers, you might ask KuppingerCole for our standardized Cloud Risk Assessment approach.
With its recent announcement of Microsoft Azure Active Directory B2B (Business-to-Business) and B2C (Business-to-Customer/consumer/client), which are in Public Preview now, Microsoft has extended the capabilities of Azure AD (Active Directory). Detailed information is available in the Active Directory Team Blog.
There are two new services available now. One is Azure AD B2C Basic (which suggests that later there will be Azure AD B2C Premium as well). This service focuses on connecting enterprises with customers through a cloud service, allowing authentication of customers and providing access to services. The service supports social logins and a variety of other capabilities. Organizations can manage their consumers in a highly scalable cloud service, instead of implementing an on-premise service for those customers. The primary focus for now is on authenticating such users, e.g. for access to customer portals. Customers can onboard with various social logins such as Facebook or Google+, but also create their own accounts. Applications can work with Azure AD B2C based on OAuth 2.0 and OpenID Connect standards.
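To illustrate the standards side, here is a minimal sketch of how an application might construct the OpenID Connect authorization request that starts such a B2C login – tenant name, policy name, client ID, and redirect URI are placeholders, and the exact endpoint layout may differ in your tenant:

```python
from urllib.parse import urlencode

def b2c_authorize_url(tenant, policy, client_id, redirect_uri):
    """Build an OpenID Connect authorization request URL that invokes
    an Azure AD B2C policy (e.g. a sign-up/sign-in policy)."""
    params = {
        "client_id": client_id,
        "response_type": "id_token",   # OpenID Connect implicit flow
        "redirect_uri": redirect_uri,
        "scope": "openid",
        "response_mode": "form_post",
        "p": policy,                   # the B2C policy to execute
    }
    return ("https://login.microsoftonline.com/" + tenant
            + "/oauth2/v2.0/authorize?" + urlencode(params))

# Placeholder values - substitute your own tenant's identifiers.
url = b2c_authorize_url("contoso.onmicrosoft.com", "B2C_1_signup_signin",
                        "00000000-0000-0000-0000-000000000000",
                        "https://app.example.com/auth")
print(url)
```

The policy parameter is what distinguishes B2C from plain Azure AD: the same application can invoke different policies (sign-up, sign-in, profile editing) while relying on standard OAuth 2.0/OpenID Connect machinery.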
The second piece available now is Azure AD B2B Collaboration. This service includes a number of new capabilities allowing management of business partners and, in particular, federation with these business partners. Of particular interest is that small organizations can be invited by a company already using Azure AD B2B. These then can rely – for that particular business relationship – on Azure AD without additional cost.
With this initial release, a strong baseline set of features is delivered for both services. B2C, for example, supports step-up authentication, which can be triggered by applications. Some other features, such as account linking – i.e. supporting various logins of one person (e.g. Facebook and Google+) relating back to the same identity – are not yet available. However, being a cloud-based service, new features will be added on a regular basis at rather short intervals.
With the new Azure AD B2B and B2C enhancements, Microsoft is extending its Azure Active Directory towards a service that is capable of supporting all use cases of organizations, whether it is employee access to cloud services; managing business partner relationships; or managing even millions of consumers in an efficient manner based on a standard service. With these new announcements, Microsoft is clearly raising the bar for its competitors in the Cloud IAM market.
I have a long Active Directory history. In fact, I started working with Microsoft identities way before there was an AD, back in the days of Microsoft LAN Manager, then worked with Windows NT from the early beta releases on, and the same with Windows 2000 and subsequent editions. So the news of Azure AD Domain Services caught my attention.
Aside from Microsoft Azure AD (Active Directory) - which, despite its name, is a new type of directory service without support for features such as Kerberos, NTLM, or even LDAP - Microsoft has offered Active Directory domain controllers as Microsoft Azure instances for a long time now. However, the latter are just domain controllers running on Azure instead of on-premise.
With the new Azure AD Domain Services, Azure AD becomes a domain controller, supporting features such as the ones listed above plus group policies. Services running in an Azure Virtual Network can rely on these AD services. Thus, applications requiring AD can be easily moved to Azure and rely on Azure AD Domain Services. Furthermore, Azure AD can connect back to the on-premise AD infrastructure relying on Azure AD Connect. Users then can sign in to the domain using their existing credentials, while other users can be on-boarded and managed in Azure AD.
This announcement is great news for organizations that want to move more applications to the cloud, but struggled with AD dependencies until now. There will be concerns regarding maintaining credentials in a cloud service. On the other hand, many organizations already rely on Azure AD Connect e.g. when using Office 365 in integration with their on-premise Active Directory.
Altogether with other new features such as Azure AD B2B and B2C, Microsoft now offers a multitude of options to enhance the existing Active Directory environments in the cloud, supporting a broad variety of customer use cases. My rating as a long-term Active Directory guy: Cool stuff.
Yesterday, Dell and EMC announced a “definitive agreement” about Dell’s plan to acquire EMC. Dell and EMC are just two factors in that equation; the third is VMware. EMC owns 80% of VMware’s shares.
In a press and analyst conference call held right after the announcement, Michael Dell and Joe Tucci, respectively the leaders of Dell and EMC, provided some high-level information on the deal. However, that call left more questions than answers.
Let’s start with the high-level storyline. First of all, no sustainable vision for the combined company was unveiled, aside from the usual buzzwords. When combining two giants, both from market segments under pressure, the result will not necessarily be positive. Yes, there are now servers, storage, and (with some minority stakeholders) virtualization combined in one entity, allowing for creating new offerings for cloud, on-premise, and “hyper-converged” infrastructures. But it’s in software, as in “software-defined”, not in hardware, where all the action is. On the other hand, there is a need for market consolidation. Being the one who consolidates is clearly better than being consolidated.
Aside from that, the most important (because it’s the software) part in that combination is the one not fully owned, which thus can’t be fully integrated: VMware. On one hand, Michael Dell praised the advantage of a company not being listed, on the other hand they explicitly stated that VMware will remain listed. Where is the logic?
The argument I read today – that the “new Dell” will enjoy a freedom HP can only dream of – is clearly valid. Being a privately owned company allows for longer-term decisions than being public. Anyhow, with VMware fully included, there would be even more freedom.
The most logical answer from my point of view would lead to a dilemma: They might need to reduce the stake in VMware to pay back some of the debt of the merger. However, doing so would have a negative impact on the synergistic potential of the overall deal.
An explanation that doesn’t convince me is that buying VMware shares back would have made the deal too costly. Yes, that would have added some billions to the 67 billion USD deal. But even with a significant premium, we are talking about 10-15% of the current deal size. Not that much, when looking at the relative cost of buying back VMware shares.
More reasonable are explanations that there either hadn’t been time to make a decision on how to proceed on that issue or that one or some of the strategic investors in VMware weren’t willing to sell. And pushing for a squeeze-out is a tough task.
However, there is another explanation: In the current construction, VMware retains its agility. This agility is key to success in a market segment where innovation is key to success. On the other hand, with Dell, a privately owned company, as the majority stakeholder, decisions are faster than before anyway. Fully integrating VMware at this stage bears the risk of slowing down VMware, aside from the potential negative impact on VMware’s large partner network. From that perspective, the construction chosen is the best option. The organizational impact of the merger is restricted to integrating Dell and EMC, while VMware can move forward (more) autonomously.
During the call, it was also stated that no information on other valuable assets would be unveiled now. These valuable assets include, among others, the software & service business of Dell and RSA, the security division of EMC.
However, there are some questions behind that. The main one is: Will the converged Dell/EMC become primarily a hardware business with a majority share in VMware or will it become a combined business of hardware, services, and software? From my perspective, the latter approach is the only one that has a future.
Such an approach then would raise two more questions:
- How to deal with the overlaps between the portfolios, particularly around IAM (Identity and Access Management)?
- How to grow the services & software business, both organically and inorganically?
When looking at VMware, EMC/RSA, and Dell Software, all of them have IAM offerings. When looking at VMware Identity Manager, Dell Cloud Access Manager, and RSA VIA Access, there is overlap. When looking at RSA and Dell, there is significant overlap between RSA VIA Governance and RSA VIA Lifecycle on one hand and Dell Identity Manager on the other. With VMware not becoming fully integrated (as of now), the overlap with the VMware offering will remain anyway. But what about Dell versus RSA?
Another question is around growing services & software. From my point of view, this is the key success factor for this acquisition. But where to move from here? Clearly, aside from information security, which has become ubiquitous, the main area is around software-defined “anything”, creating central management and governance across the entire infrastructure. This is more than the SDDC (software defined data center), because it goes beyond the data center. My perspective here is that such a move on one hand will require further acquisitions and, on the other hand, will mandate tight integration of VMware as the company of the three (Dell, EMC, VMware) that is most advanced in that area. However, the latter can be done in the construction chosen for that deal – and the benefits of VMware being very agile clearly outweigh the disadvantages of not fully integrating VMware.
My colleague Dave Kearns added another issue around that deal: corporate culture. EMC is based in the Boston area, while Dell is “Texas culture, even Austin culture”, as he stated. He added: “There hasn't been a combination like this since Compaq acquired DEC - and remember how badly that went.” I leave this comment as is.
There are clearly many open questions now. Dell will have to deliver answers soon – they promised to do so and I’m definitely curious to hear the answers. With the information available now, it is too early to rate that deal and the consequences for various products and their customers. At the high level, driving consolidation can be rated positively and leaving VMware listed also makes sense. Let’s wait until more information becomes available on further details. We will follow this topic closely and keep you informed.
Just recently, BlackBerry announced the acquisition of Good. This is just one more acquisition of a Mobile Security Management vendor. Quite a while ago, VMware acquired AirWatch, which so far has been the most prominent M&A activity in the field of Mobile Security Management.
However, these acquisitions are not the only fundamental changes for Mobile Device Management (MDM) and, in particular, the Mobile Security Management market – with Mobile Security Management being the most important part of overall MDM anyway. The other fundamental change becomes apparent when looking at what companies such as (in alphabetical order) Centrify, Okta, and VMware are doing. All three have offerings in the field of Cloud IAM, in particular what KuppingerCole calls Cloud User and Access Management. And all three have support for securing access to apps and applications from mobile devices. Be it the Centrify Identity Service, the Okta platform, or VMware Identity Manager: All focus on integrating Mobile Security and Identity Management.
From my perspective, such integration is logical. Back in 2012, I wrote about why Mobile Device Management must change towards a more integrated approach. Right now, we are facing a growing number of offerings that support that integration. I appreciate this, because the main challenge is not managing devices but providing secure and seamless access to all types of applications, for all types of users.
This also puts other players under pressure. In order to succeed, BlackBerry and Good will have to go beyond providing access to a limited set of secure, containerized applications. Obviously, this is the wrong approach, as they are facing a growing number of competitors that provide secure access to a multitude of different applications. Pure-play MDM players such as MobileIron will also have to rethink their positioning. MDM by itself is not sufficient anymore. It must become part of a broader approach to delivering seamless and secure access to any type of application from any device for everyone. The path that players such as Centrify, Okta, and VMware are following (with very broad support for existing applications) leads to solutions that tackle the core problem – which isn’t securing a device but securing access to applications and information.
As with most other contracts, be it for a large purchase or an insurance policy, you should read (standard) contracts with your cloud provider very carefully. Chances are good that you will detect some points that border on insolence. There are certainly good reasons for using the cloud in businesses of any size, among them cost reductions and the ability to concentrate on the core business. By providing rapid adoption of new services, the cloud also enables quick innovation. But since your whole business will be influenced by the services delivered, they might sooner or later become disruptive to your daily workflow if not properly implemented.
"Uneven" relationship

Clearly, the relationship between cloud service provider (CSP) and tenant is "uneven" from the beginning. The tenant first of all has to pay for all extras, frequently called Managed Services - even for those that should naturally be included in any cloud contract. This way the customer pays more for letting the provider take over more responsibility. Delivering those kinds of "value-added services" only at an ever higher price can't be the unique selling point. I wonder what the providers' legal departments say to those offers. The providers should be liable for breakdowns in service and for data breach or loss. Most if not all deny that responsibility.
Reading the contract carefully can help you avoid the most obvious pitfalls. Make it a game: find the aspects that could become a challenge for your daily business. There are some, trust me. Begin with the parts of the contract dealing with end-of-service, changes, or availability. Don't be surprised if there is a clause that gives your CSP the freedom to end its business with you at any time. He can also change services flexibly - mind you, flexibility should be on your side in the cloud, not on his - without having to announce it long in advance. Some CSPs think they don't need to announce it at all - even if the change means that an important application won't run any longer.
Feature changes can pose problems

Feature changes can evolve into a massive problem when employees can't find some data anymore or see a completely altered user interface. This will lead to an increase in costs for help desk calls. Or imagine your customer relied on a certain feature that suddenly doesn't exist anymore. Just because the CSP thinks it is useless doesn't mean that you do too.
Another issue concerns availability: surely it is not always the CSP's fault if a service is not accessible. But where it is, availability guarantees amount to nothing if they are not connected to penalties. CSPs regularly exclude liability in their contracts for damages on the tenant's side as a consequence of a longer outage - which is understandable. However, this makes such guarantees relatively worthless. It should be added in this context that if you really need high availability, you'll probably get it a lot cheaper in the cloud than with your internal IT. The cloud idea is not bad in itself.
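To see what an availability guarantee is worth in concrete terms, it helps to convert the SLA percentage into a downtime budget - a quick back-of-the-envelope calculation:

```python
def downtime_budget_minutes(availability_pct, days=30):
    """Minutes of permitted downtime per period for a given
    availability guarantee (assuming a 30-day month by default)."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# Compare some typical SLA levels for a 30-day month.
for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% availability -> {downtime_budget_minutes(sla):.1f} min/month")
```

A "99% availability" clause that sounds generous thus still permits more than seven hours of outage per month - and without penalties attached, there is little contractual incentive for the CSP to do better.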
Customization and the Cloud

API (Application Programming Interface) changes might affect the integration between different cloud services or between a cloud service and on-premise applications. They might as well affect customized solutions. Customized solutions? Isn't cloud computing all about standards? Aren't the greatest benefits to be found in areas where customization won't mean a competitive advantage? Yes, maybe. But most business solutions - CRM, ERP, HR etc. - don't exist in isolation from other applications. They need to be integrated to work optimally. Last but not least, APIs have to be upwards compatible. If they change or features are turned off, the CSP's client has to be informed long in advance so that he can prepare for it and tell his customers in time.
How to find a good CSP

So, how do you recognize a good CSP for your business? First of all, he should see the cloud benefits from your perspective, not only from his. For this he has to understand your main issues and challenges. Customers, on the other hand, should always be prepared that things might not run as expected. Therefore, an exit strategy should always be fixed in the contract. This also helps avoid the problem of vendor lock-in, which often is the result of long-term initial contracts. If a contract ends, the user should get his full data back immediately without any further costs.
Naturally, not everybody running a business understands the concept of the cloud and how it works. It suffices to know how to find a good CSP and what elements a contract should contain to be beneficial to the customer.
This article has originally appeared in KuppingerCole Analysts' View newsletter.