
KuppingerCole Blog

Bridging the Gap Between IT, OT and Business in the Digital Transformation Age

May 04, 2016 by Bruce Hughes

The Digital Transformation age is focused on integrating digital technologies such as social, mobile, manufacturing and cloud computing. It will inherently lead to new types of innovation and creativity, and it is already having far-reaching effects across business, government, medicine and mass communications, to name a few. The Internet of Things (IoT), which connects everything to everything, also presents new challenges to organisations. This new world puts businesses at risk, because they have not embraced security standardisation, developed a holistic view of risks across the business, or determined how Information Technology (IT) and Operational Technology (OT) will work together to minimise those risks.

Digital Transformation is really a business transformation. Business models need to be rewritten to take advantage of the new possibilities that Digital Transformation brings, as well as to monetise these opportunities. It is not just about deploying smart objects on the factory floor or implementing a blockchain solution for one aspect of the business; it is about developing a go-to-market blueprint that includes reorganising the business, embracing the new technologies, optimising processes, binding customers and aiming for a profitable outcome.

There is a huge trend away from offering just products and towards replacing them with customer services. We have seen this for years with Cloud-based software licensing, and, as an example, several markets have introduced electric motor vehicles on a “user pays” basis: instead of buying a car for city use, you rent one by the hour or the day (find one on the street, walk up, and open it with an app on your smartphone), just as with other services such as bicycle rental.

In the manufacturing sector, Smart Manufacturing has brought with it a whole new set of business opportunities, but also increased risks. The objective of Industry 4.0 is to connect the manufacturing environment and OT in order to optimise end-to-end processes and to build a service infrastructure between the business and the end customer. Optimisation will be disruptive and may well disenfranchise middlemen, such as brokers and dealers, from the new operating model.

Optimising the end-to-end view of an organisation by joining the business view to the manufacturing view opens up the manufacturing side to attack as well as the business systems. This changes the security paradigm and puts everything at risk. The IoT and the “things” controlling a manufacturing process open up areas of cyber threat that were not previously there. With smart vehicles, a black box could capture data such as performance, location or payment information, which would be made available to service providers, motor manufacturers, insurance companies, law enforcement and others. There are a myriad of possibilities, and they all need to be managed in an optimal, controlled, safe and secure manner.

A new business model must incorporate the requirement to adopt a standardised and configurable security infrastructure that manages cyber risk and, at the same time, enables the business to become agile. Agility will enable the business to react quickly to new opportunities or changed circumstances and to improve its competitive advantage.

Businesses must also develop a risk management plan to deal with the new circumstances, with a focus on risk mitigation. While risk cannot be totally eliminated, the major risks that could endanger the organisation can be identified and mitigated from a number of different perspectives: cost, reputation, regulation, legal, business process, or technical. A comprehensive communications plan is also vital to addressing incident response across the spectrum of the enterprise.

In this new Digital Transformation age, organisations have to think about security by design and, as a result, agility by design. The IT/OT group must implement a secure, standardised and configurable security infrastructure that embraces security and privacy by design. This will give an organisation the flexibility required to open or close configurations to meet changing regulatory demands, exchange information with the outside world, and address risks as they occur quickly and economically – not in the old, inefficient way of costly and risky code changes.

Organisations might consider merging the IT and OT organisations to deliver their part of the business model in a more efficient and integrated manner. OT has always been challenging in its own right. OT systems are required to control valves, engines, conveyors and other machines, to regulate process values such as temperature, pressure and flow, and to monitor them to prevent hazardous conditions. OT systems have used various technologies for hardware design and communications protocols that are unknown in IT. The most common problems are legacy systems and devices and numerous vendor architectures and standards. The focus of OT has been availability, rather than confidentiality, integrity and availability as is the case with IT. As OT embraces smart devices, integrating OT into an overall enterprise solution will require standardised data exchange abilities and standardised, configurable security to manage the environment. Combining the IT and OT organisations can help facilitate and optimise an organisation's end-to-end security and data management in a consistent and optimal manner.

 



Multi-Factor, Adaptive Authentication Security Cautions

Apr 22, 2016 by Ivan Niccolai

KuppingerCole has written previously on the benefits of adaptive authentication and authorization, and on the need for authentication challenges that go beyond the password. These benefits fall largely into two categories: an improved user experience, since the user is only presented with multiple authentication challenges where risk and context warrant them, and improved security, precisely because of the use of multi-factor, multi-channel authentication challenges.

However, multi-factor authentication only offers additional security if the channels used for the individual challenges are sufficiently separated. Common approaches to multi-factor authentication include one-time passwords sent via SMS, or smartphone applications which function as soft tokens generating time-limited passwords. These are generally a good idea and do offer additional security benefits. But if the application that depends on multi-factor authentication as an additional security measure is itself a mobile application, then the lack of separation between the channels used for multi-factor authentication vitiates the possible security benefits of MFA.
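To illustrate the soft-token approach: a time-limited one-time password is typically derived from a shared secret and the current time, along the lines of the TOTP algorithm (RFC 6238). A minimal sketch in Python, assuming a Base32 demo secret rather than any particular vendor's implementation:

    import base64, hmac, hashlib, struct, time

    def totp(secret_b32, period=30, digits=6):
        # Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period          # time step since the epoch
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))                   # demo secret

Since the server and the soft token compute the same code from the same shared secret, anything that can read that secret or its output – such as malware on the very smartphone that also runs the protected app – defeats the second factor.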

Security researchers have recently demonstrated how a compromised Android or iOS smartphone can be manipulated by attackers to capture the additional step-up authentication password from the smartphone itself. This is one of the outstanding challenges of anywhere computing. Another attack that is unaffected by the additional security of multi-factor authentication is the man-in-the-browser (MITB) attack. In this type of attack, a malicious actor gains control of a user’s browser via a browser exploit. When the user then logs into, for example, online banking and successfully completes all the necessary multi-factor authentication challenges to perform a high-risk action such as an electronic funds transfer, the hijacked browser can be used by the attacker to substitute the form data that the user is entering. In this example, the sum could be redirected to a stranger’s bank account.

With the MITB attack, the user is seen by the accessed service as fully authenticated, but since the browser itself has been compromised, any action the user could legitimately have performed can also be performed by the attacker, and it will appear to the service as the user’s own.

With a user’s smartphone already receiving emails and being used for browsing, the additional use of smartphones for multi-factor authentication must be carefully considered. Otherwise, it only provides the illusion of security. These attacks do not make adaptive, multi-factor authentication useless, but they do show that there is no single mitigation approach that allows an organization to ignore the ever-evolving cybersecurity threat landscape.

Tactical security approaches here include careful selection and separation of authentication channels when MFA is used, as well as the use of additional web service and browser scripting protection approaches which have been developed to mitigate MITB attacks.

Yet the strategic solution remains an approach that is not solely focused on prevention. With the digital transformation well underway, it is difficult to control employee endpoints, and almost impossible to control consumer endpoints. A strategic, holistic security approach should focus on prevention, detection and response, an approach known as Real-Time Security Intelligence. It should also focus on data governance, regardless of the location of the information asset, an approach known as Information Rights Management.

Unknown and sophisticated attack vectors will persist, and balancing security and user experience remains a challenge, but the RTSI approach recognizes this and never assumes that a system or approach can be 100% immune to vulnerabilities.



Be careful not to DROWN

Mar 03, 2016 by Mike Small

On March 1st, OpenSSL published security advisory CVE-2016-0800, known as “DROWN”. This is described as a cross-protocol attack on TLS using SSLv2 and is classified as High Severity. The advice given by OpenSSL is:

“We strongly advise against the use of SSLv2 due not only to the issues described below, but to the other known deficiencies in the protocol as described at https://tools.ietf.org/html/rfc6176”.

This vulnerability illustrates how vigilant organizations need to be over the specific versions of software that they use. However, this is easier said than done. Many organizations have a website or application that was built by a third party. The development may have been done some time ago and used what were the then current versions of readily available Open Source components. The developers may or may not have a contract to keep the package they developed up to date.

The application or website may be hosted on premise or externally; wherever it is hosted, the infrastructure upon which it runs also needs to be properly managed and kept up to date. OpenSSL is part of the infrastructure upon which the website runs. While there may be some reasons for continuing to support SSLv2 for compatibility, there is no possible excuse for reusing SSL private keys between websites. That simply goes against all security best practice.
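As a first, minimal check, one can at least verify which protocol version a given server negotiates today. A sketch using Python's standard ssl module (the hostname is a placeholder; note that testing for SSLv2 specifically requires a dedicated scanning tool, since modern client libraries no longer speak it at all):

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())   # negotiated protocol, e.g. 'TLSv1.2'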

It may be difficult to believe but I have heard auditors report that when they ask “what does that server do?” they get the response “I don’t know – it’s always been here and we never touch it”. The same can be true of VMs in the cloud which get created, used and then forgotten (except by the cloud provider who keeps on charging for them).

So, as vulnerabilities are discovered, there may be no process in place to remediate the operational package. Cyber criminals just love this. They can set up an automated process to scan externally for systems where known vulnerabilities remain unpatched, and exploit the results at their leisure.

There are two basic lessons from this:

  1. Most code contains exploitable errors, and its evolution generally leads to a deterioration in quality over time unless there are very stringent controls over change. It is attractive to add functionality, but the increase in size and complexity leads to more vulnerabilities. Sometimes it is useful to go back to first principles and recode using a stringent approach.
    I provided an example of this in my blog AWS Security and Compliance Update. AWS has created a replacement for OpenSSL TLS: S2N, an Open Source implementation of TLS. S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub Repository.

  2. Organizations need to demand maintenance as part of the development of code by third parties, to avoid being forced to keep out-of-date infrastructure components for compatibility.
    The infrastructure, whether on premise or hosted, should be kept up to date. This will require change management processes to ensure that changes do not impact operation. It should be supported by regular vulnerability scanning of operational IT systems using one of the many tools available, together with remediation of the vulnerabilities detected (a trivial illustration of such scanning follows below).
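As a deliberately trivial illustration of that scanning idea (real vulnerability scanners do far more, and far more carefully), the following sketch fetches a server banner and flags versions on a hypothetical watch list; the hostname and version strings are placeholders:

    import urllib.request

    KNOWN_VULNERABLE = ("OpenSSL/1.0.1", "Apache/2.2.")   # hypothetical watch list

    def check(host):
        with urllib.request.urlopen("https://" + host + "/", timeout=5) as resp:
            banner = resp.headers.get("Server", "unknown")
        flagged = any(v in banner for v in KNOWN_VULNERABLE)
        print(host, banner, "<-- REVIEW" if flagged else "ok")

    check("example.com")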

IT systems need to have a managed lifecycle. It is not good enough to develop, deploy and forget.



IBM Acquires Resilient Systems

Mar 01, 2016 by Alexei Balaganski

Yesterday at the RSA Conference, IBM officially confirmed what had already been a rumor for some time – the company is planning to acquire Resilient Systems for an undisclosed amount.

Resilient Systems, a relatively small privately held company based in Cambridge, MA, is well known for its Incident Response Platform, a leading solution for orchestrating and automating incident response processes. With the number of security breaches steadily growing, the focus within the IT security industry is currently shifting more and more from detection and prevention towards managing the consequences of an attack that has already happened. An incident response solution of this kind can provide a company with a predefined strategy for responding to various types of attacks, tailored to specific laws and industry regulations. It then supports the IT department at every step of the process, helping to get the affected infrastructure back online, address privacy concerns, solve organizational and legal issues, and so on.

Despite being on the market for less than 5 years, Resilient Systems has already become a leading player in this segment, with their IRP solution being used by a variety of clients in all verticals, from mid-size businesses to Fortune 500 companies. Among other features, the product is known for its integration with multiple leading security solutions. In fact, Resilient Systems has been IBM’s partner for some time, integrating their product with IBM’s QRadar.

So, in hindsight, the announcement doesn’t really come as a big surprise. For IBM Security, this acquisition means not just incorporating a leading incident response solution into their cyber security portfolio, but also hiring a 100-strong team of security experts, including the venerable Bruce Schneier, who is currently serving as Resilient Systems’ CTO. What’s in the deal for Resilient Systems is not as easy to say, since the financial details are not disclosed, but we can definitely be sure that gaining access to IBM’s vast partner network opens up a lot of interesting business prospects.

By adding the new Incident Response capabilities to their existing QRadar security intelligence solution and X-Force Exchange threat intelligence platform, IBM is hoping to become the world’s first vendor with a fully integrated platform for security operations and response. In the same press release, the company has already announced their new IBM X-Force Incident Response Services.



There is no Consumer Identity & Access Management at all – at least not as a separate discipline

Mar 01, 2016 by Martin Kuppinger

These days, there is a lot of talk about Consumer Identity & Access Management, or CIAM. However, there is no such thing as CIAM – at least not as a separate discipline within IAM. There are technologies that are of higher relevance when dealing with customers and consumers than when dealing with employees. But there are neither technologies that are required for CIAM only, nor is there any benefit in trying to set up a separate CIAM infrastructure.

This does not mean that IAM should not focus on consumers – quite the contrary. But it is about extending and, to some extent, renovating the existing on-premise IAM, which is commonly focused on employees and some business partners. It is about one integrated approach for all identities (employees, partners, consumers, …), managing their access to all services regardless of the deployment model, using all types of devices and things. It is about seamlessly managing all access of all identities in a consistent way. Notably, “consistent way” is not the same as “from a single platform”.

So why don’t we need a separate CIAM? The easiest answer is found by asking a simple question: “Is there any single application in your organization that is accessed only by consumers?” This implies “and not by at least some of your employees, e.g. for customer service, administration & operations, or analyzing the data collected.” The obvious answer to that question is that there is no such application. There are applications which are used only by employees, but not the other way round. So why should there be separate IAM deployments for applications that are used by a common group of users? That could only result in security issues and management trouble.

The other aspect is that the way applications are used within the enterprise is changing anyway. Mobile users access cloud applications without even touching the internal network anymore. Thus, technologies such as Adaptive Authentication, Cloud IAM or IDaaS (Identity Management as a Service), Identity Federation, etc. are relevant not only for consumer-facing solutions but for all areas of IAM.

Finally, there is the aspect that users frequently have multiple digital identities, even in relationship to their employers. Many employees of car manufacturers are also customers of that company. Many employees of insurance companies also have insurance contracts with those companies, and some even act as freelance insurance brokers. Managing such complex relationships becomes far easier with one IAM for all – employees, partners, and consumers. One IAM that serves all applications, on-premise and in the Cloud. And one IAM that supports all types of access.

There might still be projects that focus on managing consumer access to services, IAM for cloud services, and so on. But all these projects should be part of moving IAM to the next level: an IAM that serves all requirements, from the traditional access of an employee using a PC in the corporate LAN to reach a legacy backend system, to the mobile consumer coming in via a social login and accessing a cloud service.



The role of Adaptive Authentication in Consumer Identity Management

Mar 01, 2016 by Ivan Niccolai

As more and more traditional services move online as part of the digital transformation trend, consumer-centric identity management is becoming an increasingly vital business success factor. Customers aren’t just physical persons; they are also the devices used by customers, and the intermediate organisations and systems which operate together to enable the provisioning of the service.

While the identity and access management (IAM) discipline has traditionally focused on employee use cases, consumer-centric identity management is an approach to the identification, authentication and authorisation of the consumers of services – the customers, devices and organisations external to the organisation providing the product or service. It is more than just external-user IAM; it is an approach which, as the name implies, recognises that consumer interaction with services from businesses and government is predominantly via online channels. So when planning and designing IAM capabilities, the customer must be the starting point – not technology, not standards, not products. These are key factors too, but user experience, along with security and scalability, must be at the forefront.

While usability and security are typically seen as objectives in conflict with each other, it is possible today to offer a better user experience which is also more secure. One example is identification that makes use of federation standards to leverage social logins, thus externalising the risks associated with passwords. If social logins are not appropriate, adaptive authentication – which has for some time now been used by almost all online banking services – offers better security and a better user experience by reducing the reliance on passwords through the use of multi-factor authentication challenges. Dynamic, adaptive authentication also improves the user experience by stepping the authentication challenge up or down depending on the action the user is requesting as well as the risk profile of the user. Here we can see how consumer-centricity, coupled with a holistic approach to security and risk management, can leverage adaptive authentication and authorisation to understand what it is that a user is trying to do and to link that action to the risks examined in the risk management exercise. This ensures that low-risk actions do not entail an excessively onerous user experience, and that appropriate security controls are in place for high-risk actions. Dynamic, adaptive authorisation and authentication will also be able to flag anomalous user activity and respond accordingly; a simplified sketch follows below.
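As a simplified sketch of how such risk-based step-up logic can work – the signal names, weights and thresholds below are purely illustrative and not any vendor's product behaviour:

    # Hypothetical risk signals and weights; a real product would derive these
    # from device fingerprinting, geolocation, behaviour analytics and more.
    RISK_WEIGHTS = {
        "new_device": 30,
        "unusual_location": 25,
        "high_value_action": 40,   # e.g. an electronic funds transfer
    }

    def required_challenge(signals):
        score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
        if score >= 60:
            return "password + out-of-band one-time code"   # step up
        if score >= 30:
            return "password"
        return "existing session only"                      # step down

    print(required_challenge({"new_device", "high_value_action"}))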

Scalability is also a key factor in consumer-centric IAM: consumer IAM generally has much higher performance and throughput requirements, which must not be neglected during the planning and design phases. A good functional user experience will fail if the underlying systems cannot support the performance stresses of production use. Performance and capacity planning is often a big unknown and prone to large variations in line with consumer demand. As with security, performance tuning is a process, not a project, and consumer IAM systems must be designed to scale up or down as required.

Consumer-centric IAM must also be threat-centric. With the loss of the traditional network perimeter, IAM becomes the key common denominator for determining appropriate access to resources, regardless of where they reside (cloud, on-premise) or the device used to access them. Consumer-centric IAM becomes a key component of a Real-Time Security Intelligence strategy.



Challenges of large-scale IAM environments

Mar 01, 2016 by Matthias Reinwarth

Long before analysts, vendors or journalists coined terms like Digitalization, Identity Relationship Management or Customer IAM, several industries were already confronted with large-scale Identity and Access Management (IAM) environments. Due to the character of their businesses, they were challenged with storing huge amounts of identity data while serving massive volumes of both read and write requests at a constantly high access speed. Providers of telecommunication infrastructure such as voice or data services, in particular, typically handle identity data for several million subscribers. This information is leveraged for various purposes; one highly essential task focuses on controlling which subscribers are permitted to access which services, and on keeping track of which resources they have used. This is typically done in highly specialized AAA (Triple-A) systems providing real-time Authentication (who?), Authorization (what?) and Accounting (how many?) services.
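To make the three questions concrete, here is a deliberately simplified sketch of the AAA pattern; the data structures and names are illustrative and bear no relation to any real AAA product:

    SUBSCRIBERS = {"alice": {"secret": "s3cret", "services": {"voice", "data"}}}
    USAGE = {}   # accounting ledger: subscriber -> units consumed

    def authenticate(user, secret):                 # who?
        sub = SUBSCRIBERS.get(user)
        return sub is not None and sub["secret"] == secret

    def authorize(user, service):                   # what?
        return service in SUBSCRIBERS[user]["services"]

    def account(user, units):                       # how many?
        USAGE[user] = USAGE.get(user, 0) + units

    if authenticate("alice", "s3cret") and authorize("alice", "data"):
        account("alice", 42)
    print(USAGE)                                    # {'alice': 42}

The real engineering challenge is running exactly this logic for millions of subscribers at thousands of requests per second, with strict availability requirements.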

As this forms the basis for the actual core business processes, performance, availability, reliability and security are of utmost importance. Telco operators have therefore always been at the forefront of designing and implementing highly redundant, scalable and sustainable special-purpose IAM systems, such as directory or database systems capable of fulfilling their unique requirements.

But several other systems traditionally need access to various subsets of subscriber (i.e. customer) data: Customer Relationship Management (CRM) systems are the foundation for sales and help desk processes, and this information needs to be merged with AAA data to produce, for example, the monthly bills. But apart from the traditional help desk systems, where customers call and want to interact with help desk personnel, the service landscape has changed dramatically: many telco operators have transformed into full-service providers of communication and entertainment services, e.g. IPTV. In parallel, subscribers have become more and more used to online portals offering self-service access to their operator’s product portfolio. Having online access to their billing information, while being able to change, extend or cancel their subscriptions, has become the new normal. This of course requires strong security mechanisms, especially rock-solid authentication and authorisation functionalities – and the same is true for ordering immediate access to streaming a blockbuster movie or to live coverage of a favourite sports event directly from the set-top box. These devices, among many others (tablets, mobile phones or even gaming consoles), represent the identities of individual subscribers and are, of course, further sources of billing information as well.

Providing a large-scale IAM system comes with many promises and requirements: gaining better insight into subscriber data through big data analytics might lead to efficient and agile business decisions and new products. The resulting information might be even more valuable when an operator’s own subscriber data is intelligently merged with information provided by third parties (e.g. financial data, market research) and even social data, e.g. from Facebook, Google or Twitter logins. On the other hand, the privacy, security and reliability of the sensitive information provided to operators by subscribers is highly important – for example, when mobile devices are used for mobile, online payments (as Swisscom already does with its Easypay system) or for secure mobile authentication (e.g. as a second factor) in the not-so-distant future.

In large-scale IAM environments we observe that the traditional use case scenarios don’t go away; rather, they are constantly complemented by completely new requirements and business models. New technical requirements (new access methods, new devices, optimized performance, new data processing such as big data analytics, and much more) result from such developments. And this often introduces the need for compliance with new sets of legal or regulatory requirements. All of this has to be adequately implemented in parallel, while existing requirements continue to be fulfilled – usually with rising numbers of subscribers and increasing volumes of access requests.

With the traditional business model of providing mere access to voice or data services becoming more and more irrelevant, telco operators have to constantly re-invent themselves and their business models. Existing and changing IAM systems for large numbers of customers and subscribers might turn out to be one of their biggest challenges, but also their most significant asset for providing added value to their subscribers and new customer groups in the future.



Microsoft Security Announcements

Mar 01, 2016 by Alexei Balaganski

With RSA kicking off this week, security experts from around the world are getting ready for a flurry of announcements from security vendors. Last Friday it was Microsoft’s turn, and the company’s CISO Bret Arsenault publicly announced some interesting news. The motto of the announcement was “Enterprise security for our mobile-first, cloud-first world”, and it was all about unifying several key components – real-time predictive intelligence, correlation of security data with threat intelligence data and, last but not least, collaboration with the industry and partners – to provide a unified and agile security platform that can protect against, detect and respond to the numerous security risks out there. After the initial announcement last November, the company is ready to deliver the first concrete products and services developed around this concept.

Perhaps the most important and yet least surprising announced product is Microsoft Cloud App Security. Ever since the company acquired Adallom, a well-known cloud application security vendor, analysts have been waiting for Microsoft to integrate this technology into its products. With this product, Microsoft’s customers are promised the same level of visibility and control over their cloud applications as they are used to with their on-premise infrastructures. By combining proven underlying technology from Adallom with a large number of integrations with popular cloud services like Box, ServiceNow, Salesforce and, naturally, Office 365, and by leveraging the threat intelligence collected from the world’s largest identity management service, Microsoft has every chance of becoming an important player in the rapidly growing CASB (Cloud Access Security Broker) market, compensating for its relatively late arrival in that market.

Cloud App Security will become generally available as a standalone product (or as part of the Enterprise Mobility Suite) in April 2016. Much more interesting, however, is the announcement that this technology will also power new security management capabilities of Office 365 and will eventually be available to all existing Office 365 customers. With the release planned for Q3 2016, we should expect functions like advanced security alerts, cloud app discovery and permissions management for 3rd-party cloud services integrated directly into the platform.

Another major announcement is the public preview of the Azure Active Directory Identity Protection service. With this service, Microsoft is tapping into the vast amount of threat intelligence collected from its Azure Active Directory infrastructure and using machine learning algorithms to identify brute-force attacks, leaked credentials and various types of anomalies in any applications working with Azure AD. Besides real-time detection, customers will be able to get remediation recommendations or even define their own risk-based policies for automated identity protection. In other words, what we have here is a classic example of a specialized Real-Time Security Intelligence solution!

Other announced additions to Microsoft’s secure platform include, for example, the Customer Lockbox feature for SharePoint Online and OneDrive for Business, which gives cloud service customers complete and explicit control over privileged access to their data by Microsoft’s support engineers. Combining technical and organizational measures, this feature is aimed at improving trust between Microsoft as a cloud service provider and its customers, which we at KuppingerCole see as one of the critical aspects of Cloud Provider Assurance.

Additionally, numerous improvements in security management and reporting have been announced for Azure Security Center. These include integrations with multiple third-party security products (next-generation firewalls and web application firewalls) from vendors like Cisco, Check Point, CloudFlare, Imperva, etc.

To summarize it all: Microsoft is again showing that it is able to consistently follow its long-term strategy, working in several directions in parallel and keeping its new products and services synchronized and integrated into a holistic security platform. Of course, it would have been interesting to learn more about third-party integrations and partnerships, especially with various industry alliances. However, we can be sure that this wasn’t Microsoft’s last announcement, so we’re staying tuned for more.



“Disruptive Change”: Right time to think security anew

Feb 29, 2016 by Martin Kuppinger

Is “Digital Transformation” something of the future? Definitely not. It has long since become reality. With connected things and connected production, the business models of enterprises are already changing profoundly. Communication with customers no longer happens only over traditional websites; it encompasses apps and, increasingly, connected things as well. Rapidly changing business models and partnerships lead to new application architectures such as microservice models and, above all, to a more intensive usage of APIs (Application Programming Interfaces – the interfaces applications expose for external function calls) in order to combine functions of various internal and external services into new solutions.

This rapid change is often used as an argument that security can’t be improved, based on the belief that improving it would hinder the fulfilment of time-related and functional business requirements, especially both at once. No new, better, up-to-date and future-oriented security concepts are implemented in applications, due to alleged time pressure. However, exactly the opposite is the case: precisely this change is the chance to implement security faster than ever before. In any case, for communication from apps to backend and external systems, for user authentication and, of course, for the complete handling of connected things, one can’t use the same concepts that were introduced for websites five, ten or fifteen years ago.

Furthermore, by now there is a whole set of established standards, from the more traditional SAML (Security Assertion Markup Language) to more modern, widely adopted standards in which REST-based access from apps to services and between services is the norm. OAuth 2.0 and OpenID Connect are good examples. Or, in other words: mature possibilities for better security solutions are already a reality, in the form of standards as well as on a conceptual level.
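As a small illustration of how such standards look in practice, here is a sketch of a REST call authorised with an OAuth 2.0 bearer token; the URL and token are placeholders, and obtaining the token via one of the OAuth 2.0 grant flows is out of scope here:

    import urllib.request

    token = "ACCESS_TOKEN_FROM_AN_OAUTH2_FLOW"      # placeholder
    req = urllib.request.Request(
        "https://api.example.com/v1/telemetry",     # placeholder service
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read()[:200])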

Another good example is the new (and not yet really established) UMA (User-Managed Access) standard of the Kantara Initiative. With this standard, users can share “their” data selectively with applications, beyond the basic OAuth 2.0 functions. If you look, for example, at some of the data challenges associated with the “connected car”, it soon becomes clear how useful new concepts can be.

UMA and other new standards enable easy control of who gets access, when, and to which data. Traditional concepts don’t allow this – as soon as diverse user groups need access to diverse data sources in diverse situations, one hits a wall or has to “tinker” solutions together (with much effort). If you look, for example, at the crash data recorder, to which insurers, manufacturers and the police all need access – though not always, and definitely not to all data – it becomes clear how expensive some of the new challenges of the digital transformation are to solve if they are not built on modern security concepts.
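A toy sketch of that kind of selective sharing – the parties, data fields and conditions are invented for illustration and do not reflect the UMA wire protocol itself:

    # A resource owner's sharing policy for crash data recorder fields
    POLICIES = {
        "insurer":      {"fields": {"speed", "impact_force"}, "condition": "claim_open"},
        "manufacturer": {"fields": {"airbag_state"},          "condition": "recall_active"},
        "police":       {"fields": {"speed", "location"},     "condition": "warrant"},
    }

    def may_access(party, field, context):
        policy = POLICIES.get(party)
        return (policy is not None
                and field in policy["fields"]
                and policy["condition"] in context)

    print(may_access("insurer", "speed", {"claim_open"}))   # True
    print(may_access("police", "speed", set()))             # False: no warrant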

“Disruption” – the fundamental change we are experiencing in many places in the digital transformation, in contrast to the slow, continual development that was the rule in many industries for years – is the chance to become faster, more agile and more secure. For this, we need to deploy new concepts that are oriented towards these new requirements. Often you are already quicker with this approach in the very first project than by trying to adapt old concepts to new problems. We should use this chance to make security stronger, especially in the digital transformation. The alternative is risking not being sufficiently agile to withstand the competition, due to outdated software and old security architectures.



Thycotic acquires Arellia – moving beyond pure Privilege Management

Feb 24, 2016 by Martin Kuppinger

On February 23rd, 2016, Thycotic, one of the leading vendors in the area of Privilege Management (also commonly referred to as Privileged Account Management or Privileged Identity Management) announced the acquisition of Arellia. Arellia delivers Endpoint Security functionality and, in particular, Application Control capabilities. Both Thycotic and Arellia have built their products on the Microsoft Windows platform, which will allow the more rapid integration of the two offerings.

Thycotic, with its Secret Server product, has evolved over the past years from an entry-level solution into an enterprise-level product, with significant enhancements in functionality. With the addition of the Arellia products, Thycotic will not only be able to protect access to shared accounts, discover privileged identities, and manage sessions, but can actually control what users do with their privileged accounts and restrict account usage. Applications can be whitelisted or blacklisted, further enhancing control.
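The control model is easy to picture. A minimal sketch of an allow/deny decision for application launches under a privileged session – the lists and the rule are invented for illustration and are not Arellia's actual logic:

    WHITELIST = {"winword.exe", "excel.exe"}     # explicitly approved
    BLACKLIST = {"mimikatz.exe"}                 # always denied

    def may_launch(executable, elevated):
        name = executable.lower()
        if name in BLACKLIST:
            return False
        # privileged sessions may only run explicitly approved applications
        return name in WHITELIST if elevated else True

    print(may_launch("mimikatz.exe", elevated=True))   # False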

With this acquisition, another vendor is combining Privilege Management and Application Control, after CyberArk’s acquisition of Viewfinity some months ago. While it might be too early to call this a trend, there is logic in extending Privilege Management beyond the account management and session management aspects. Protecting not only the access to privileged accounts, but also limiting and controlling the use of such accounts, had already become part of Privilege Management with session management capabilities – and, more commonly in Unix and Linux environments, with restrictions on the use of shell commands. Thus, adding Application Control and other Endpoint Security features is just a logical step.

Our view on Privilege Management has always gone beyond pure Shared Account Password Management. The current evolution towards integration with Application Control and other features fits our broader view of protecting all accounts with elevated privileges at any time, both for access and for use.



