Blog posts by Graham Williamson

Cybersecurity is in Crisis

Intel Security recently released an in-depth survey of the cybersecurity industry, examining the causes of the shortage of people with training and professional accreditation in computer security. The global report, titled “Hacking the Skills Shortage”, concludes: “The cybersecurity workforce shortfall remains a critical vulnerability for companies and nations”.

Most respondents to the survey considered the ‘cybersecurity skills gap’ to be having a negative effect on their company, three quarters felt that governments were not investing appropriately to develop cybersecurity talent, and a whopping 82% reported that they could not get the cybersecurity skills they needed.

Only one in five believed current cybersecurity legislation in their country is sufficient. Over half thought current legislation could be improved and a quarter felt current regulation could be significantly enhanced.

From an education viewpoint, the study concluded that colleges are not preparing their students well for a career in cybersecurity. It suggested that the requirement for a graduate degree for cybersecurity positions should be relaxed and that greater stock should be placed on professional certifications. Cybersecurity appreciation should start at an earlier age, and we need to move with the times by targeting a more diverse, multicultural and gender-balanced talent pool.

But what does this mean for companies needing assistance with their cybersecurity requirements now? How should we respond to a known deficiency in available expertise? Given that companies are increasingly relying on consultants and analysts, we need to prepare our staff and suppliers to step into the gap and assist us in identifying requirements, analysing potential solutions and developing a roadmap to follow, so that we can maintain our computer security, minimise data loss and protect our intellectual property.

There’s potentially another option to fill the cybersecurity expertise gap in the future – Cognitive Security.

The term Cognitive Security refers to an increasingly important technology that combines self-learning systems with artificial intelligence to look for patterns and identify situations that meet predefined conditions; these can be used to indicate network compromise and to provide expert advice for diagnostic activity.

While artificial intelligence has had a chequered past, it is likely to significantly impact society over the next decade. It is starting to be deployed in big data analysis to identify trends and understand consumer behaviour; it provides the ability to automate promotional activity; and it enables us to better meet customer expectations even as marketing budgets are constrained. In the cybersecurity space it can be used to identify potentially nefarious activity and to make decisions on how to respond to events. Increasingly, automated data analysis allows us to detect network compromise, and artificial intelligence provides assistance in taking remedial action.

A number of large research organisations are at the forefront of Cognitive Computing. IBM is very active via its Watson initiative, which is pioneering data mining, pattern recognition and natural language processing. IBM Watson for Cyber Security, the one solution already announced, focuses on collecting unstructured information and giving information security professionals the background information they need, without requiring them to search for it. Google DeepMind demonstrated how close we are getting to the Singularity when AlphaGo beat a Go grandmaster earlier this year. Microsoft is also heavily involved in the sector with the release of its first set of Cognitive Services, in effect APIs for facial recognition, facial tracking, speech recognition, spell checking and smile prediction software.

So what does this have to do with security on our company’s network?

We’re seeing the beginning of cognitive security in the threat analytics that are now rapidly developing with innovative solutions that monitor corporate networks and then ‘learn’ what normal network traffic looks like. Nefarious activity can be detected via anomalies in network traffic. If an account that normally accesses a departmental subnet for access to work applications suddenly attempts to access another server, not part of their normal activity, threat analytics will identify such events and will act in accordance with the established policy, either issuing a notification for follow-up or disabling the account pending investigation. Many suppliers also maintain, or subscribe to, community threat signature services that identify known attack vectors and can automatically alert on their occurrence on the network being monitored. These systems can also provide triage services to assist in determining remedial action and forensic analysis to aid in developing preventative maintenance processes.
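
To make the idea concrete, the sketch below shows, in very simplified form, how a threat-analytics engine might learn a per-account baseline and flag out-of-pattern access. It is a minimal illustration only: the account names, destinations and policy actions are assumptions, and a real product would model far richer features (ports, volumes, timing) and use statistical or machine-learning models rather than a simple set lookup.

```python
# A minimal sketch of baseline learning and anomaly flagging for account access.
# Account names, destinations and policy actions are illustrative assumptions.
from collections import defaultdict

def learn_baseline(history):
    """Build the set of destinations each account normally accesses."""
    baseline = defaultdict(set)
    for account, destination in history:
        baseline[account].add(destination)
    return baseline

def evaluate(event, baseline, policy="notify"):
    """Return an action for an access event that falls outside the baseline."""
    account, destination = event
    if destination in baseline.get(account, set()):
        return "allow"
    # Anomalous access: act according to the established policy.
    return "notify-soc" if policy == "notify" else "disable-account"

history = [("j.smith", "hr-subnet"), ("j.smith", "hr-subnet"),
           ("a.jones", "dev-subnet")]
baseline = learn_baseline(history)
print(evaluate(("j.smith", "finance-server"), baseline))  # -> notify-soc
```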

While there’s still a long way to go, the technology holds significant promise. If we extrapolate the findings of the Intel survey to our own situations, it is unlikely that we will be able to fill the need for human cybersecurity experts; it is therefore prudent to track developments in cognitive security as it applies to our network monitoring and incident response requirements.

While it must not be our only approach to network protection and data loss prevention it holds significant potential to be a major component of our corporate security arsenal in the future.

Comment: Know and Serve Your Customer

‘Know your customer’ started as an anti-money laundering (AML) initiative in the financial industry. Regulators insisted that banks establish customer ‘due-diligence’ processes to ensure that all bank accounts could be traced back to the entities that owned them. The intent was to make it difficult to launder money from illegal activity through a legitimate commercial operation. But while they focus on AML regulation, banks often miss the opportunity to know, and serve, their customers.

Increasingly, businesses are realizing that the demographics of their customers are changing. The customer base is shifting away from the ‘baby-boomers’, who are focused on value, to ‘millennials’, who are focused on experience.

Baby-boomers have grown up in a relatively stable environment, with a stable family life and long-term employment. They value ‘best practices’ and loyalty. Millennials, those coming-of-age at the turn of the century, have experienced a much more fluid upbringing. Their family life has been fractured and inconsistent and they have no expectation, nor desire for, long-term employment. They are more interested in flex-time, job-sharing arrangements and sabbaticals.

More importantly, millennials want experience over value. They are less concerned with what they pay for something than with their experience in purchasing it. They will not tolerate a bad experience, whether in-store or on-line. And they have the technology to let others know about their experience.

There are two approaches to this situation: become despondent and despair of ever attracting this market sector, or consider the vast opportunity of hundreds of millennials posting and tweeting about the fantastic service they experienced when they did business with you.

Moving from Knowing to Serving

So – how do we ‘serve’ our customers? Firstly, we need to know them and then we need to align our marketing practices to them.


Knowing them requires us to build a picture of our customer base and segment it into groups according to their propensity to purchase our products and services. This will likely require analysis of CRM data and potentially some big-data analysis of customer transaction records. Engaging a cloud service provider and using their Hadoop services and map-reduce functionality may assist. The intent is to build a customer identity management service that can be used for product/service development and automated marketing. Customer analytics, and user-managed access i.e. giving users control of their data and management of their transactions with your organisation, are enabled by a good customer identity and access management (CIAM) facility.
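
As a concrete illustration, the following sketch segments customers from raw transaction records using simple recency/frequency/monetary rules. All field names, dates and thresholds are assumptions for illustration; at real data volumes this aggregation is exactly the kind of job you would push to a Hadoop/map-reduce service or a cloud analytics platform, and the hard-coded rules would be replaced by a clustering model over the full dataset.

```python
# A minimal recency/frequency/monetary (RFM) segmentation sketch.
# Field names, dates and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import date

transactions = [
    {"customer": "C001", "date": date(2016, 5, 2), "amount": 120.0},
    {"customer": "C001", "date": date(2016, 6, 9), "amount": 80.0},
    {"customer": "C002", "date": date(2015, 11, 20), "amount": 300.0},
]
today = date(2016, 7, 1)

profiles = defaultdict(lambda: {"recency": 10**6, "frequency": 0, "monetary": 0.0})
for t in transactions:
    p = profiles[t["customer"]]
    p["frequency"] += 1
    p["monetary"] += t["amount"]
    p["recency"] = min(p["recency"], (today - t["date"]).days)

def segment(p):
    # Crude rules for illustration; a real model would cluster the full dataset.
    if p["recency"] <= 60 and p["frequency"] >= 2:
        return "engaged"
    return "lapsing" if p["recency"] > 180 else "occasional"

for customer, p in sorted(profiles.items()):
    print(customer, segment(p), p)
```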

Once we know our customers we can tailor our marketing program to ‘serve’ them. This means that we need to modify our product or service to suit their requirements. There is no point in offering something that they don’t want, and you can’t rely on history: as the baby-boomer segment inevitably declines, its purchasing patterns become irrelevant. Millennials will gladly tell you what they want if they are asked, and putting some effort into understanding them will not go unrewarded.

Pricing must also be commensurate with the product or service being offered. As noted earlier millennials are far less price-conscious than baby-boomers so a ‘differentiation’ strategy is advised. Make your product or service special, and charge for it.

Promotion should be targeted too. Hardcopy media is of little use; focus on social networks and on-line advertising. Google AdWords does work and can be money well spent. Make sure your website is responsive: millennials are lost on anything bigger than a 12cm screen.

There is no doubt that doing business is becoming much more interesting. The potential for attracting new customers has never been greater and the opportunities are vast. The only question is “are we agile enough to exploit it?”

Stack creep - from the network layer to the application layer

Last year saw unprecedented interest in the protection of corporate data. With several high-profile losses of intellectual property, organisations have started looking for a better way.

For the past 30 years the bastion against data loss has been network devices. We have relied on routers, switches and firewalls to protect our classified data and ensure it is not accessed by unauthorised persons. Databases were housed on protected subnets to which we could restrict access on the basis of an IP address, a Kerberos ticket or AD group membership.

But there are a couple of reasons that this approach is no longer sufficient. Firstly, with the relentless march of technology the network perimeter is increasingly “fuzzy”. No longer can we rely on secure documents being used and stored on the corporate network. Increasingly we must share data with business partners and send documents external to the corporate network. We need to store documents on Cloud storage devices and approve external collaborators to access, edit, print and save our documents as part of our company’s business processes.

Secondly, we are increasingly being required to support mobile devices. We can no longer rely on end-point devices that we can control with a standard operating environment and a federated login. We must now support tablets and smartphone devices that may be used to access our protected documents from public spaces and at unconventional times of the day.

As interest in a more sophisticated way to protect documents has risen, so has the number of available solutions. We are experiencing unprecedented interest in Information Rights Management (IRM), whereby a user’s permission to access or modify a document is validated at the time access is requested. When a document is created, the author, or a corporate policy, classifies the document appropriately to control who can read it, edit it, save it or print it. IRM can also be used to limit the number of downloads of a document or to time-limit access rights. Most solutions in this space support AD Rights Management and Azure Rights Management; some adopt their own information rights management solution with end-point clients that manage external storage or emailing.
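
The sketch below illustrates the kind of check an IRM-enabled client performs when a document is opened: it evaluates the requesting user’s rights against the document’s classification policy, including time-limited access and a download quota. The policy structure, user names, rights and limits are illustrative assumptions, not the format used by any particular rights management product.

```python
# A minimal sketch of an IRM-style check performed when a document is opened.
# The policy structure, user names, rights and limits are illustrative assumptions.
from datetime import datetime

document_policy = {
    "classification": "confidential",
    "rights": {
        "alice@example.com": {"read", "edit", "print", "download"},
        "bob@partner.example": {"read", "download"},
    },
    "expires": datetime(2017, 1, 1),
    "max_downloads": 3,
}

def check_access(user, action, policy, now, downloads_so_far=0):
    """Evaluate a requested action against the document's rights policy."""
    if now > policy["expires"]:
        return False                        # time-limited access has lapsed
    if action == "download" and downloads_so_far >= policy["max_downloads"]:
        return False                        # download quota exhausted
    return action in policy["rights"].get(user, set())

now = datetime(2016, 9, 1)
print(check_access("bob@partner.example", "edit", document_policy, now))   # -> False
print(check_access("alice@example.com", "download", document_policy, now,
                   downloads_so_far=3))                                    # -> False
```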

Before selecting a solution companies should understand their needs. A corporate-wide secure document repository solution for company staff is vastly different from a high-security development project team sharing protected documents with collaboration partners external to the company. A CIA approach to understanding requirements is appropriate:

  • Confidentiality – keep secret: typically encryption is deployed to ensure data at rest is protected from access by unapproved persons. Solutions to this requirement vary from strong encryption of document repositories to a rights management approach requiring document classification mechanisms and IRM-enabled client software.
  • Integrity – keep accurate: maintaining a document’s integrity typically involves a digital signature, and key management in order to sign and verify signatures. Rights management can also be employed to ensure that a document has not been altered.
  • Availability – secure sharing: supporting business processes by making protected documents available to business partners is at the core of secure information sharing. Persons wanting access to confidential information should not have to go through a complex or time-consuming procedure in order to gain access to the required data. Rights management can provide a secure way to control permissions to protected documents while making appropriate data available for business purposes.

Never has there been a better time to put in-place a secure data sharing infrastructure that leverages an organisation’s identity management environment to protect corporate IP, while at the same time enhance business process integration.

IoT in industrial control systems (ICS)

IoT, the Internet of Things, covers a wide range of technologies. My Fitbit, for example, is an IoT device: it connects to my smartphone, which formats the data collected on my movements. Vehicles that communicate with diagnostic instruments, and the home thermostat that I can control via the Internet, are also IoT gadgets.

This article, however, is concerned with a very particular type of IoT device: a sensor or actuator used in an industrial control system (ICS). There are many changes occurring in the industrial control sector; the term Industry 4.0 has been coined to describe this fourth-generation disruption.

A typical ICS configuration looks like the following:

  • The SCADA display unit shows the process under management in a graphic display. Operators can typically use the SCADA system to enter controls to modify the operation in real-time.
  • The Control Unit is the main processing unit that attaches the remote terminal units to the SCADA system. The Control Unit responds to the SCADA system’s commands.
  • The Remote Terminal Unit (RTU) is a device, such as a Programmable Logic Controller (PLC), that is used to connect one or more devices (monitors or actuators) to the control unit. It is typically positioned close to the process being managed or monitored, but the RTUs may be hundreds of kilometres away from the SCADA system.
  • Communication links can be Ethernet for a production system, a WAN link over the Internet, a private radio link for a distributed operation, or a telemetry link for equipment in a remote area without communications facilities.

So what are the main concerns regarding IoT in the ICS space? As can be seen from the configuration above, there are two interfaces that need to be secured: the device-to-RTU interface and the fieldbus link between the RTU and the Control Unit.

The requirement on the device interface is for data integrity. In the past ICS vendors have relied upon proprietary unpublished interfaces i.e. security by obscurity. This is not sustainable because device suppliers are commoditising the sector and devices are increasingly becoming generic in nature. Fortunately, these devices are close to the RTU and in controlled areas in many ICS environments.

The interface to the Control Unit is typically more vulnerable, and if this link is compromised the results can be catastrophic. The main requirement here is for confidentiality: the link should be encrypted if possible, and this should be taken into account when selecting a communications protocol. Manufacturing applications will often use MQTT, which supports encryption; electrical distribution systems will often use DNP3, which can support digital signatures; in other cases poor-quality telemetry links must be used, in which case a proprietary protocol may be the best option to avoid potential spoofing attacks.
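
As an illustration of encrypting such a link, the sketch below wraps an RTU-to-control-unit connection in mutually authenticated TLS using only Python’s standard ssl module. The host name, port and certificate file paths are assumptions; in practice the encryption would usually be provided by the chosen protocol stack (for example MQTT over TLS, conventionally on port 8883) rather than by hand-rolled framing.

```python
# A minimal sketch of encrypting the link to the control unit with mutually
# authenticated TLS, using only Python's standard library. Host name, port and
# certificate file paths are illustrative assumptions.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("control-unit-ca.pem")      # trust only our own CA
context.load_cert_chain("rtu-cert.pem", "rtu-key.pem")     # client (RTU) certificate

with socket.create_connection(("control-unit.example.net", 8883)) as raw_link:
    with context.wrap_socket(raw_link,
                             server_hostname="control-unit.example.net") as link:
        # Whatever the fieldbus protocol sends (an MQTT packet, a DNP3 frame,
        # a proprietary telemetry record) now travels over an encrypted channel.
        link.sendall(b"rtu-07:temperature:74.2\n")
```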

One big benefit of the current developments in the ICS sector is the increasing support for security practices in operational technology. Whereas in the past there was a reliance on isolation of the ICS network, there is now an appreciation that security technology can protect sensitive systems while enjoying the benefits of accessibility. In fact, both worlds can be seen as siblings, focused on different parts of the enterprise, and promising approaches to enable this duality already exist. But understanding the technology is also important: one home automation equipment supplier released a line of sensor equipment with an embedded digital certificate that was valid for only one year.

Conclusion: despite all the benefits of connected things, some of them as yet unseen, there are still many pitfalls in vulnerable industrial networks and a real danger of getting IoT fundamentally wrong. The right path has still to be found, and the search for the best solutions is a constant discovery process. As always, one of the best ways to succeed is to share one’s experiences and knowledge with others who are on the same journey.

Adaptive Policy-based Access Management (APAM)

Attribute-based Access Control (ABAC) has been with us for many years; it embodies a wide range of systems that control access to protected resources based on attributes of the requesting party. As the field has developed, four characteristics have emerged as the most desirable in an ABAC system:

  • it should externalise decision making i.e. not require applications to maintain their own access control logic
  • it should be adaptive i.e. decisions are made in real-time
  • it should be policy-based i.e. access permissions should be determined by evaluating policies
  • it should be more than just control i.e. it should also “manage” users’ access, not merely enforce it.

Most access control environments today are role-based. Users are granted access to applications based on their position within an organisation. For instance, department managers within a company might get access to the HR system for their department. When a new department manager joins the organisation they can be automatically provisioned to the HR system based on their role. Most organisations use Active Directory groups to manage roles: if you’re in the “Fire Warden” group, you get access to the fire alarm system. One of the problems with role-based systems is that the access control decisions are coarse-grained; you’re either a department manager or you’re not. RBAC systems are also quite static: group memberships will typically be updated once a day or, worse still, require manual intervention to add and remove members. Whenever access control depends upon a person making an entry in a control list, inefficiencies result and errors occur.

Attribute-based systems have several advantages. Decisions are externalised to dedicated infrastructure that performs the policy evaluation. Decisions are more fine-grained: if a user is a department manager, an APAM system can also check the user’s department code and so decide, for instance, whether or not to give them access to the Financial Management system. It can check whether or not they are using their registered smartphone, and it can determine the time of day, in order to make decisions that reduce the risk associated with an access request. Such systems are usually managed via a set of policies that allow business units to determine, for instance, whether or not they want to allow access from a smartphone and, if they do, to elevate the authorisation level by using a two-factor mechanism. The benefits are obvious: no longer are we dependent upon someone in IT to update an Active Directory group, and more sophisticated decisions are possible. APAM systems are also real-time. As soon as HR updates a person’s position, their permissions are modified; the very next access request will be evaluated against the same policy set, but the new attributes will return a different decision.
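
A minimal sketch of such an externalised, attribute-based policy decision point is shown below. The attribute names, the policy rules and the “permit-with-mfa” step-up decision are illustrative assumptions; a production deployment would typically express the policies in a dedicated policy language (for example XACML) and evaluate them in a separate policy engine rather than in application code.

```python
# A minimal sketch of an externalised, attribute-based policy decision point.
# Attribute names, policy rules and the step-up decision are illustrative assumptions.
from datetime import time

def evaluate(request):
    """Return a decision for a request described entirely by attributes."""
    user, resource, context = request["user"], request["resource"], request["context"]

    if resource == "financial-management":
        if user.get("role") != "department-manager":
            return "deny"
        if user.get("department") not in ("finance", "executive"):
            return "deny"
        # Registered smartphone outside office hours: allow, but step up authentication.
        after_hours = not (time(8, 0) <= context["time"] <= time(18, 0))
        if context.get("device") == "smartphone" and after_hours:
            return "permit-with-mfa"
        return "permit"
    return "deny"

print(evaluate({
    "user": {"role": "department-manager", "department": "finance"},
    "resource": "financial-management",
    "context": {"device": "smartphone", "time": time(21, 30)},
}))   # -> permit-with-mfa
```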

So what’s holding us back from deploying APAM systems? Firstly, there’s the “if it’s not broken don’t fix it” syndrome that encourages us to put up with less than optimal systems. Another detractor is the requirement for a mature identity management system, since access to attributes is needed. There is also a need to manage policies but often business groups are unwilling to take on the policy management task.

It’s incumbent on C-level management to grapple with these issues. They must set the strategy and implement the requisite change management. If they do, not only will they be reducing the risk profile associated with their access control system, they’ll open up new opportunities. It will be possible to more easily extend business system access to their business partners, and customers, for whom it is unsustainable to populate Active Directory groups.

APAM has much to offer; all we need is a willingness to embrace it.

This article originally appeared in the KuppingerCole Analysts' View newsletter.

OT, ICS, SCADA – What’s the difference?

Operational Technology (OT) refers to computing systems that are used to manage industrial operations as opposed to administrative operations. Operational systems include production line management, mining operations control, oil & gas monitoring etc.


Industrial control systems (ICS) is a major segment within the operational technology sector. It comprises systems that are used to monitor and control industrial processes. This could be mine site conveyor belts, oil refinery cracking towers, power consumption on electricity grids or alarms from building information systems. ICSs are typically mission-critical applications with a high-availability requirement.

Most ICSs are either continuous process control systems, typically managed via programmable logic controllers (PLCs), or discrete process control (DPC) systems, which might use a PLC or some other batch process control device.

Industrial control systems are often managed via a Supervisory Control and Data Acquisition (SCADA) system that provides a graphical user interface, allowing operators to easily observe the status of a system, receive alarms indicating out-of-band operation, or enter system adjustments to manage the process under control.

Supervisory Control and Data Acquisition (SCADA) systems display the process under control and provide access to control functions. A typical configuration is shown in Figure 1.

Figure 1 - Typical SCADA Configuration

The main components are:

  • SCADA display unit that shows the process under management in a graphic display with status messages and alarms shown at the appropriate place on the screen. Operators can typically use the SCADA system to enter controls to modify the operation in real-time. For instance, there might be a control to turn a valve off, or turn a thermostat down.
  • Control Unit that attaches the remote terminal units to the SCADA system. The Control unit must pass data to and from the SCADA system in real-time with low latency.
  • Remote terminal units (RTUs) are positioned close to the process being managed or monitored and are used to connect one or more devices (monitors or actuators) to the control unit; a PLC can fulfil this role. RTUs may be in the next room or hundreds of kilometres away (a simple polling loop across these components is sketched after this list).
  • Communication links can be Ethernet for a production system, a WAN link over the Internet, private radio for a distributed operation, or a telemetry link for equipment in a remote area without communications facilities.
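
To show how these components interact, here is a deliberately simplified polling loop in which a control unit reads values from its RTUs and passes them, with an alarm status, through to the SCADA display. The read_register() function is a placeholder for whatever fieldbus protocol is in use (Modbus, DNP3, a proprietary telemetry frame); the addresses, register numbers and alarm threshold are assumptions.

```python
# A deliberately simplified control-unit polling cycle. read_register() stands in
# for a real fieldbus read (Modbus, DNP3, proprietary telemetry); addresses,
# register numbers and the alarm threshold are illustrative assumptions.
import random
import time

def read_register(rtu_address, register):
    """Placeholder for a real RTU/PLC read over the fieldbus link."""
    return round(random.uniform(60.0, 90.0), 1)

ALARM_THRESHOLD = 85.0   # out-of-band value that should raise an operator alarm

def poll_once(rtu_addresses):
    """Poll each RTU and pass the value plus status through to the SCADA display."""
    for rtu in rtu_addresses:
        value = read_register(rtu, register=1)
        status = "ALARM" if value > ALARM_THRESHOLD else "ok"
        print(f"RTU {rtu}: temperature={value} [{status}]")   # SCADA display update

for _ in range(3):                     # a real control unit would loop continuously
    poll_once(["10.0.10.21", "10.0.10.22"])
    time.sleep(2)                      # low-latency polling interval
```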

There are some seminal changes happening in the OT world at the moment. Organisations want to leverage their OT assets for business purposes, they want to be agile and have the ability to make modifications to their OT configurations. They want to take advantage of new, cheaper, IP sensors and actuators. They want to leverage their corporate identity provider service to authenticate operational personnel. It’s an exciting time for operational technology systems.

So what do we mean by “Internet of Things” and what do we need to get right?

The phrase “Internet of Things” (IoT) was coined to describe the wide range of devices coming onto the market with an interface that allows them to be connected to another device or network. There is no question that the explosion in the number of such devices will soon change our lives for ever. We are going to be monitoring more, controlling more and communicating more. The recent FTC Staff report indicates there will be 25 billion devices attached to networks this year and 50 billion in 5 years’ time.

It’s generally agreed that there are several categories in the IoT space:

  • Smart appliances:
these are devices that monitor things, actuate things or communicate data. Included in this category are remote weather stations, remote lighting controllers, or cars that communicate status to receivers at service centres.
  • Wearables:
these devices typically monitor something, e.g. pedometers or heart monitors, and transmit the data to a nearby device such as a smartphone, on which an app either passively reports the data or actively transmits it to a repository for analysis.
  • Media devices:
these are typically smartphones or tablets that need one or more connections to external devices such as a Bluetooth speaker or a network connected media repository.

By far the largest category is the smart appliance. For instance, in the building industry it is now normal to have hundreds of IP devices in a building feeding information back to the building information system for HVAC control, security monitoring and physical access control. This has significantly reduced building maintenance costs for security and access control, and has significantly reduced energy costs by automating thermostat control and even anticipating the impact of weather forecasts.

In his book “Abundance: The Future is Better than You Think”, Peter Diamandis paints a picture of an interconnected world with unprecedented benefits for society. He is convinced that within a few years we will have devices that, with a small blood sample or a saliva swab, will provide a better medical diagnosis than many doctors.

So what’s the problem?
For most connected devices there are no concerns. Connecting a smartphone to a Bluetooth speaker is simplicity itself and, other than annoying neighbours within earshot, there is simply no danger or security consideration. But for other devices there are definite concerns, and significant danger in poorly developed and badly managed interfaces. If a device has an application interface that can modify a remote device, the interface must be properly designed with appropriate protection built in. There is now a body of knowledge on how such application programming interfaces (APIs) should be constructed and constrained, and initiatives are under way to provide direction on security issues.

For instance, if a building information system can open a security door based on an input from a card-swipe reader, the API had better require digital signing, and possibly encryption, to ensure the control can’t be spoofed. If a health monitor can make an entry in the user’s electronic health record database, the API needs to ensure only the appropriate record can be changed.
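
To illustrate the point about signing, the sketch below applies an HMAC over the body of a door-control request so that a spoofed or tampered command fails verification. The payload fields and the shared key are illustrative assumptions; a production system would also bind the signature to a timestamp or nonce to prevent replay, and would manage keys properly rather than hard-coding them.

```python
# A minimal sketch of signing a door-control request so it cannot be spoofed.
# The payload fields and the shared key are illustrative assumptions; a real
# deployment would also manage keys properly and reject stale timestamps.
import hashlib
import hmac
import json
import time

SHARED_KEY = b"provisioned-out-of-band"

def sign_request(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    signature = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_request(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

request = sign_request({"door": "lobby-east", "action": "open",
                        "badge": "0042", "timestamp": int(time.time())})
print(verify_request(request))   # -> True; a tampered body verifies as False
```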

Another issue is privacy. What if my car communicates its health to my local garage? That’s of great benefit, because I should get better service. But what if the driver’s name and address are also communicated, let alone their credit card details? Social media has already proven that the public at large is notoriously bad at protecting its privacy; it’s up to the industry to avoid innovation that on the surface looks beneficial and benign, but in reality leads us down a dangerous slippery slope to a situation in which hackers can exploit vulnerabilities.

What can we do?
The onus is on suppliers of IoT devices and systems to ensure their designs are both secure and reliable. This means they must mandate standards for developers to adhere to when using the APIs of their devices or systems. It is important that developers know the protocols to be used and the methods that can be employed to send data or retrieve results.

For example:

  • Smart appliances should use protocols such as OAuth (preferably three-legged for a closed user-group) to ensure proper authentication of the user or device to the application being accessed (the token-exchange step of such a flow is sketched after this list).
  • Building information systems should be adequately protected with an appropriate access control mechanism; two-factor authentication should be the norm and no generic accounts should be allowed.
  • Systems provided to the general public should install with a basic configuration that does not collect or transmit personally identifiable information.
  • APIs must be fully documented with a description, data schemas, authentication scopes and methods supported; clearly indicating safe and idempotent methods in web services environments.
  • Organisations installing systems with APIs must provide a proper software development environment with full development, test, pre-production and production environments. Testing should include both functional and volume testing with a defined set of regression tests.
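
For the OAuth point above, the sketch below shows the token-exchange step of a three-legged flow: after the user has authorised the app, the authorization code is exchanged for an access token that is then presented on every API call. All endpoints, client credentials and the callback URL are illustrative assumptions, not a real appliance API.

```python
# A minimal sketch of the token-exchange step of a three-legged OAuth flow.
# All endpoints, client credentials and the callback URL are illustrative assumptions.
import requests

TOKEN_URL = "https://auth.appliance.example/oauth/token"

def exchange_code_for_token(auth_code: str) -> str:
    """Exchange the code obtained after user consent for an access token."""
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "smart-thermostat-app",
        "client_secret": "never-hard-code-in-production",
    }, timeout=10)
    response.raise_for_status()
    return response.json()["access_token"]

# The bearer token is then presented on every call to the appliance API, e.g.
# requests.get("https://api.appliance.example/v1/status",
#              headers={"Authorization": f"Bearer {token}"})
```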

Conclusion
The promise of IoT is immense. We can now attach a sensor or actuator to just about anything. We can communicate with it via NFC, Bluetooth, Wi-Fi or 3G technology. We can watch, measure and control our world. This will save money because we can shut things off remotely to save energy, improve safety because we will be notified more quickly when an event occurs, and save time because we can communicate service detail accurately and fully.

This article originally appeared in the KuppingerCole Analysts' View newsletter.

Microsoft’s Taking Aim at the Hybrid Cloud

One of the most strategic moves Microsoft ever made was the release of Active Directory (AD). It shipped with Windows 2000 Server and, depending on the statistics you use, between 85 and 95% of Fortune 1000 companies now run Active Directory, because it is the authentication source used by the Windows operating environment.

But with the increasing migration to cloud services there’s a problem: the same companies that have so overwhelmingly adopted AD in their on-premise environments now need a solution for the cloud, where AD is anything but the right answer. Firstly, it is not a good idea to expose your on-premise LDAP service outside the organisation to cloud-based applications. Secondly, Windows is too verbose: with Exchange, SharePoint and Office tightly bound to AD, network latency precludes the use of an on-premise AD instance by these applications when they are operating in the cloud.

So, it became obvious to Microsoft that they would need a solution to support hybrid environments where customers could enjoy the efficiency of their on-premise Windows operation when accessing their cloud applications in the Azure environment. Microsoft’s answer is Azure Active Directory (AAD).

Microsoft first released Azure AD in 2012 and, as with anything significantly different, it initially received criticism. The core schema that we had become used to in AD was no longer so important and the LDAP interface that we had come to embrace for all our directory work was gone. In its place was an OData interface and a very different Graph API. But it didn’t take long for the industry to understand the need for a higher-level interface in a cross-boundary environment, and Azure AD (AAD) is now well adopted as the standard for Microsoft Azure. Refer to Martin Kuppinger’s article in this newsletter for more detail.
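
To give a feel for the difference, the hedged sketch below queries users through the Azure AD Graph REST/OData interface rather than via LDAP. The tenant name and api-version are shown for illustration only, and acquiring the OAuth bearer token (normally done against Azure AD’s token endpoint) is taken as a given here.

```python
# A hedged sketch of querying users through the Azure AD Graph REST/OData
# interface instead of LDAP. The tenant name and api-version are illustrative,
# and acquiring the OAuth bearer token from Azure AD is taken as a given.
import requests

TENANT = "contoso.onmicrosoft.com"                      # illustrative tenant
GRAPH_URL = f"https://graph.windows.net/{TENANT}/users"

def list_enabled_users(access_token: str):
    response = requests.get(
        GRAPH_URL,
        params={"api-version": "1.6", "$filter": "accountEnabled eq true"},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return [user["userPrincipalName"] for user in response.json()["value"]]
```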

The next problem for Microsoft was how to populate AAD and keep it current. Initially the DirSync tool provided this function with a replication of the on-premise AD to the cloud-based AAD. This was a bit of a brute-force approach requiring the whole directory to be resident in the cloud. A further problem was how to achieve single sign-on (SSO) between on-premise and Azure applications. Microsoft’s answer was Active Directory Federation Services (ADFS) to maintain session details between the two environments. While it worked – it was not an elegant solution.

These issues were comprehensively fixed with Windows Server 2012 R2 and the release of Azure AD Premium Enterprise Suite. It is now no longer necessary to replicate the whole on-premise AD to the cloud; more importantly, synchronising the password hash allows two-way password reset, so users can maintain their passwords in either environment and the change will be replicated to the other. Group management has been improved and a self-service feature is available. Federation is supported, and integration with other directories and identity providers, beyond on-premise AD, is a real option. Multi-factor authentication services have also been extended. ADFS has been improved with support for Microsoft’s Device Registration Service to extend authentication services to registered mobile devices, a highly desirable feature when managing mobile device access to cloud-based applications.

So – what is the right choice when it comes to adopting cloud services?

If you are a Microsoft shop, i.e. you use Windows extensively, you use Exchange and Office, and particularly if you are a big SharePoint user, then embracing the Azure environment is a logical decision, at least for that part of your IT infrastructure. Check the article by Alexei Balaganski in this newsletter to decide what level of adoption is appropriate. One word on hybrid: try not to stay there too long. The cloud has an impressive economic imperative, but not when everything remains hybrid. So if you opt for Office 365, see it through consistently.

If you only dabble in the Microsoft ecosystem, maybe you have Windows clients but maintain a mixed server environment and your applications are primarily web-based, look to the open cloud suppliers such as AWS as an alternative to Azure. If you go this route you will need to solve the Identity-as-a-Service (IDaaS) issue anyway, and that is not done simply by replicating your AD to the cloud. There is an increasing number of services for managing identities in the cloud, including Okta, PingOne and Microsoft AAD, that can do more than just be your AD in the cloud. If you’re a Salesforce user you might want to adopt their identity management solution as your IDaaS service. Have a look at the upcoming KuppingerCole Leadership Compass on Cloud User and Access Management, which will be published later this month, to understand the vendor landscape for managing identities and access in the cloud.

We’re on the cusp of a real polarisation in the adoption of cloud services. The main service providers offer unprecedented ease-of-use in the movement to the cloud, whether it be cloud-based storage, cloud-based virtual machines or a complete Office/SharePoint/Exchange cloud-based infrastructure. But the decision to adopt cloud services is a strategic one. It should not be made lightly and, even if your initial implementation is just a proof-of-concept, you need to do it under the umbrella of a “cloud roadmap” to ensure it achieves the outcomes you need. For assistance along the journey check out the KuppingerCole Cloud Assessment Service (KC Class); it will help to keep you on the right road.

But whatever way you look at it, the cloud future has a light blue hue.

This article was originally published in the KuppingerCole Analysts' View Newsletter.
