Blog posts by Martin Kuppinger
Martin Kuppinger on Blockchain and why it is more than just a building block of the Bitcoin cryptocurrency.
81 million dollars: that is the sum hackers stole from the central bank of Bangladesh in April of this year by breaching the international payment system SWIFT. Three more SWIFT-related hacks at other banks quickly followed. SWIFT reacted by announcing security improvements, including two-factor authentication, after initially suggesting that the causes of the successful attacks lay with the robbed banks and their compromised systems.
Whoever made the mistakes here, perhaps all parties involved, the growing number of cyberattacks against banks is not really surprising, since hackers go where the money is. And even if the Bangladesh case may have been the biggest heist so far, it is just one in a long chain of attempted and successful online bank robberies. Cybercrime has become the biggest risk for financial institutions today. The reasons, besides the money itself, often lie in the heterogeneous legacy systems of many institutions, which simply were not built for the cyber world and thus open huge doors for successful attacks. What does that mean for financial institutions? First, they urgently need to consider a major paradigm shift in IT and information security.
For years the last bastion against digitalization, many banks successfully withstood the cloud and all later developments like IoT without their business models having to suffer. They maintained their own infrastructures in secluded data center silos and kept running their own monolithic systems for core banking applications. Customers, both B2B and B2C, accepted this; it seemed safe and normal. (Of course, it also had a lot to do with regulatory requirements.)
This initial situation has changed dramatically, however: more and more young, dynamic competitors are entering the market. Most of these fintechs specialize in a certain aspect of financial services and use the latest technologies to interact with clients anywhere, in real time, whenever needed. Traditional banks already notice the heavy winds of change through a decreasing number of younger customers, “millennials”, who like to bank mobile, “on the go”, and put more trust in peers than in classic institutions.
To stay relevant by becoming more agile and satisfying the needs of connected consumers, banks have, at least partly, begun to integrate this new world into their business models. However, this also demands a rethinking of information security. In a hyperconnected world, old perimeter defenses like firewalls are not of much use any more, if at all. With IT available anytime and everywhere, and with more and more people, devices and things becoming connected with each other, the attack surface grows exponentially. New threats arise in these internal and external relationships, elaborate phishing and privileged-user attacks being just two examples.
The perimeter shifts to the identities of people, KYC (Know Your Customer) compliance being one example, but also of devices and billions of ever-new things. In this context, the further development of blockchain technology with advanced identity and access management prospects promises a huge leap for secure and transparent financial transactions worldwide (unforgeable records of identity, no double spending, automated verification, self-executing contracts, encryption, data integrity through time-stamps and hashing, etc.), even though certain limits of this innovative technology still need to be addressed. Could they, for example, be better addressed with permissioned, private ledgers, where only known users are allowed to participate? SWIFT already seems to be experimenting with this.
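The data integrity property mentioned above can be illustrated with a minimal, purely illustrative sketch of a hash-chained ledger (not any production blockchain, and all names are invented for the example): each record commits to its predecessor's hash, so altering an earlier entry invalidates every subsequent hash.

```python
import hashlib
import json
import time

def make_block(prev_hash, payload, timestamp=None):
    """Create a ledger entry that commits to its predecessor via prev_hash."""
    block = {
        "prev_hash": prev_hash,
        "timestamp": timestamp if timestamp is not None else time.time(),
        "payload": payload,
    }
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; tampering with any earlier block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"tx": "genesis"})
b1 = make_block(genesis["hash"], {"tx": "transfer 10 EUR A->B"})
chain = [genesis, b1]
assert verify_chain(chain)
chain[0]["payload"]["tx"] = "transfer 1000 EUR A->B"  # tamper with history
assert not verify_chain(chain)
```

Real blockchains add consensus, signatures, and distribution on top; the sketch only shows why retroactive manipulation is detectable.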
Whatever the solution(s), security and privacy may no longer be an afterthought. Both need to start right with the development of products and solutions. Many industries have already understood this. It is time for the digital finance world to internalize the concept of Security and Privacy by Design, too. I can almost hear those who say that this will hinder agility and slow processes down. In fact, it is clearly the other way round and cannot be emphasized enough: Security and Privacy by Design help any business become even more agile than ever before. They are the foundation of successful and economical Agility by Design.
Of course, many banks already considered Security by Design even in their old mainframe infrastructures. In fact, they were often really good and quite progressive at it, with dynamic authorization (ABAC) and so forth. Sadly, these efforts don't count for much in a highly dynamic and digitalized world. Agility by Design can today only be reached by rethinking Security by Design and by also meeting the regulatory demands of Privacy by Design. If they get both aspects right, financial institutions stand a good chance of surviving even in completely new competitive and risk environments. This won't work with the old core banking IT, however, since it is neither agile nor secure enough and also doesn't fulfil modern privacy requirements.
Yesterday, Ping Identity announced the intent of Vista Equity Partners to acquire it. Ping Identity, until now a privately held company backed by venture capital, will thus be acquired by a private equity firm.
This acquisition is no real surprise to me. We have seen a few private equity deals in the past years, with SailPoint and NetIQ being acquired by Thoma Bravo (most of NetIQ's business is now part of Micro Focus) and Courion being acquired by K1 Investment Management. Ping Identity has grown massively over the past years, so this acquisition is a logical step in its evolution and, as Ping states in the press release, “the acquisition by Vista does not preclude us from the option of IPO”.
Anyway, the main questions concern the impact on customers and on the competition. For customers, there is no direct impact. The shareholder structure of Ping Identity has changed; however, the intent is to keep the management team, including founder Andre Durand, in place and to grow the team further. There are growth plans with a focus on both organic and inorganic growth. From my perspective, Ping Identity benefits from this acquisition because it can now focus on innovation and growth. Mid-term, Ping Identity has the opportunity to grow its portfolio, continuing its journey from a vendor in a particular segment of the IAM (Identity and Access Management) market, i.e. Federation and Web Access, towards a platform provider with more products and services. The recent innovations and enhancements in its portfolio have already set the direction.
For the competitors, the days when they could call Ping Identity a niche vendor are finally over. Notably, Ping Identity has already grown into a 400+ employee company, well beyond the level of a start-up or a niche vendor. With the new owner, Ping Identity will be able to focus even more on extending its position in the market.
From my perspective, this deal is positive for both Ping Identity and their customers. I expect the company to further strengthen their market position, beyond pure-play Identity Federation and Access Management.
Oh, and Vista Equity Partners announced on May 31, 2016 that they have entered into a definitive agreement to acquire Marketo. While Ping Identity plays an important role on the IAM side of CIAM (Customer Identity and Access Management) and KYC (Know Your Customer), Marketo is a strong player in marketing automation and analytics.
For a long time, IT risks were widely ignored by business people, including Corporate Risk Officers (CROs) and C-level management. This has changed recently with the increasing awareness of cyber-security risks. With the move to the IoT (Internet of Things) or, better, the IoEE (Internet of Everything and Everyone), we are entering a new level.
When a company starts selling and deploying connected things, this also raises product liability questions. Obviously, goods that are connected are more exposed than goods that aren't. Connecting things creates a new type of product liability risk by creating a specific attack surface over the Internet. Thus, when enthusiastically looking at the new business potential of connected things, organizations must also analyze the impact on product liability. If things go really wrong, this might put the entire organization at risk.
Product security inevitably becomes a #1 topic for any organization that starts selling connected things. These things contain some software – let’s call this a “thinglet”. It’s not an app with a user interface. It is a rather autonomous piece of code that connects to apps and to backend services – and vice versa. Such thinglets must be designed following the principles of Security by Design and Privacy by Design. They also must be operated securely, including a well thought-out approach to patch management.
It’s past time for vendors to analyze the relationship of the IoEE, product security, and product liability risks.
Sounds like “security as the notorious naysayer”? Sounds like “security kills agility”? Yes, but only at first glance. If you use the security argument to block innovation, then security stays in its well-known, negative role. However, as I have written in a recent post (and, in more detail, in some other posts linked from it), security and privacy, if done right, are an opportunity, not a threat. Security by Design and Privacy by Design drive Agility by Design. A shorter time-to-market results from consistently following these principles. If you don't, you will have to choose between the security risk and the risk of being too late – but only then. Security done right is a key success factor nowadays.
The German ZVEI (Zentralverband Elektrotechnik- und Elektronikindustrie), the association of the electrical and electronic industries, and the VDI (Verein Deutscher Ingenieure), the association of German engineers, have published a concept called RAMI 4.0 (Referenzarchitekturmodell Industrie 4.0). This reference architecture model is about 25 pages long, which is OK. The first target listed for RAMI 4.0 is “providing a clear and simple architecture model as reference”.
However, when analyzing the model, there is little clarity or simplicity in it. The model is full of links to other norms and standards, and full of multi-layered, sometimes three-dimensional architecture models. On the other hand, it doesn't provide detailed answers, and offers only a few links to other documents.
RAMI 4.0 states, for example, that the minimal infrastructure of Industry 4.0 must fulfill the principles of Security by Design. There is no doubt that Industry 4.0 should consistently implement the principles of Security by Design. Unfortunately, there is not even a link to a description of what Security by Design concretely means.
Notably, security (and safety) are covered in a section of the document spanning not even 1% of the entire content. In other words: Security is widely ignored in that reference architecture, in these days of ever-increasing cyber-attacks against connected things.
RAMI 4.0 has three fundamental faults:
- It is not really concrete. It lacks details in many areas and doesn't even provide links to more detailed information.
- While only being 25 pages in length and not being very detailed, it is still overly complex, with multi-layered, complex models.
- It ignores the fundamental challenges of security and safety.
Hopefully, we will soon see better concepts that focus on supporting the challenges of agility and security, instead of over-engineering the world of things and Industry 4.0.
Cion Systems, a US-based IAM (Identity and Access Management) vendor, recently released a free service that allows Windows users to implement two-factor authentication. It works only for Windows and supports either a PIN code sent via SMS (not available in all countries) or a standard USB storage key as the second factor. Thus, users can increase the security of their Windows devices in a simple and efficient manner, without needing to pay for a service or purchase specialized hardware.
Their free service is based on the same technology Cion Systems is using for their enterprise self-service and multi-factor authentication solutions and services, which, e.g., also support password synchronization with Microsoft Office 365.
While such a service is not unique, it is a convenient way for Windows users to increase the security of systems that are still commonly protected only by username and password.
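Conceptually, an SMS PIN second factor of the kind described boils down to issuing a random, short-lived code and comparing it in constant time. The sketch below is a generic illustration under assumed names and an assumed validity window, not Cion Systems' actual implementation.

```python
import hmac
import secrets
import time

PIN_TTL_SECONDS = 120  # assumed validity window for the one-time PIN

def issue_pin():
    """Generate a random 6-digit one-time PIN and record when it was issued."""
    pin = f"{secrets.randbelow(1_000_000):06d}"
    return pin, time.time()

def verify_pin(submitted, issued_pin, issued_at, now=None):
    """Accept the PIN only within the validity window; compare in constant
    time to avoid leaking information through timing differences."""
    now = now if now is not None else time.time()
    if now - issued_at > PIN_TTL_SECONDS:
        return False
    return hmac.compare_digest(submitted, issued_pin)

pin, issued_at = issue_pin()
assert verify_pin(pin, pin, issued_at)                                # correct PIN
assert not verify_pin(pin, pin, issued_at,
                      now=issued_at + PIN_TTL_SECONDS + 1)            # expired
```

A real service would additionally rate-limit attempts and invalidate the PIN after first use.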
Is “Digital Transformation” something of the future? Definitely not. It has long since become reality. With connected things and connected production, the business models of enterprises are already changing profoundly. Communication with customers no longer happens only over traditional websites; it encompasses apps and, increasingly, connected things as well. Rapidly changing business models and partnerships lead to new application architectures such as microservices, and especially to more intensive usage of APIs (Application Programming Interfaces, through which applications expose functions for external calls), in order to combine functions of various internal and external services into new solutions.
This rapid change is often used as an argument that security can't be improved, based on the belief that doing so would prevent business requirements from being met on time and in full. No new, better, up-to-date and future-oriented security concepts are implemented in applications, due to alleged time pressure. However, exactly the opposite is true: precisely this change is the chance to implement security faster than ever before. And in any case, for communication from apps to backend and external systems, for user authentication, and certainly for the complete handling of connected things, one can't use the same concepts that were introduced for websites five, ten or fifteen years ago.
Furthermore, there is by now a whole range of established standards, from the more traditional SAML (Security Assertion Markup Language) to more modern, widely adopted standards in which REST-based access from apps to services and between services is the norm. OAuth 2.0 and OpenID Connect are good examples. Or, in other words: mature possibilities for better security solutions are already a reality, both as standards and on a conceptual level.
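To illustrate how lightweight these REST-era standards are: an OpenID Connect ID token is a signed JWT whose claims can be read by base64-decoding its payload segment. In real deployments the signature must of course be verified first; this sketch skips that step and uses a made-up, unsigned token purely for demonstration.

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT.
    No signature check - illustration only; production code must
    verify the token's signature before trusting any claim."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake unsigned token just for the demo (header.payload.signature).
claims = {"iss": "https://idp.example.com", "sub": "alice", "aud": "my-app"}
fake_token = ".".join([
    base64.urlsafe_b64encode(
        json.dumps({"alg": "none"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(
        json.dumps(claims).encode()).decode().rstrip("="),
    "",
])
assert decode_jwt_payload(fake_token)["sub"] == "alice"
```

The point is that identity information travels as plain, standards-based JSON over HTTPS, which is exactly what makes these protocols a natural fit for app-to-service and service-to-service communication.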
Another good example is the new (and not yet widely established) UMA (User-Managed Access) standard of the Kantara Initiative. With this standard, users can share “their” data purposefully with applications, beyond the basic OAuth 2.0 functions. If you look, for example, at some of the data challenges associated with the “connected car”, it soon becomes clear how useful new concepts can be.
UMA and other new standards make it easy to control who gets access to which data, and when. Traditional concepts don't allow this: as soon as diverse user groups need access to diverse data sources in diverse situations, one hits a wall or has to cobble together solutions with great effort. If you look, for example, at the crash data recorder, to which insurers, manufacturers and the police all need access – though not at all times and definitely not to all data – it becomes clear how expensive some of the new challenges of digital transformation are to solve if they are not built on modern security concepts.
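The crash-data-recorder example can be sketched as a simple policy table: which party may see which data categories under which circumstances. The parties, categories and conditions below are invented for illustration; UMA itself expresses such policies through an authorization server, not Python dictionaries.

```python
# Hypothetical policy: party -> (allowed data categories, required condition)
POLICY = {
    "insurer":      ({"speed", "impact_force"}, "claim_filed"),
    "manufacturer": ({"diagnostics"},           "recall_investigation"),
    "police":       ({"speed", "location"},     "court_order"),
}

def may_access(party, category, condition):
    """Grant access only if the party is known, the data category is
    permitted for that party, and the situational condition holds."""
    if party not in POLICY:
        return False
    categories, required = POLICY[party]
    return category in categories and condition == required

assert may_access("insurer", "speed", "claim_filed")
assert not may_access("insurer", "location", "claim_filed")  # wrong category
assert not may_access("police", "speed", "routine_check")    # wrong situation
```

Expressing such rules declaratively, rather than hard-coding them per integration, is precisely what keeps access control manageable as the number of parties and situations grows.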
“Disruption”, the fundamental change we are experiencing in many places in the digital transformation – in contrast to the slow, continual development that was the rule in many industries for years – is the chance to become faster, more agile and more secure. For this, we need to deploy new concepts oriented towards these new requirements. Often you are quicker with this approach even in the first project than by trying to adapt old concepts to new problems. We should use the chance to make security stronger, especially in the digital transformation. The alternative is to risk not being agile enough to withstand the competition, due to outdated software and old security architectures.
On February 23rd, 2016, Thycotic, one of the leading vendors in the area of Privilege Management (also commonly referred to as Privileged Account Management or Privileged Identity Management) announced the acquisition of Arellia. Arellia delivers Endpoint Security functionality and, in particular, Application Control capabilities. Both Thycotic and Arellia have built their products on the Microsoft Windows platform, which will allow the more rapid integration of the two offerings.
Thycotic, with its Secret Server product, has evolved over the past years from an entry-level solution to an enterprise-level product, with significant enhancements in functionality. With the addition of the Arellia products, Thycotic will not only be able to protect access to shared accounts, discover privileged identities, and manage sessions, but can actually control what users do with their privileged accounts and restrict account usage. Applications can be whitelisted or blacklisted, further enhancing control.
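Application whitelisting of this kind comes down to checking each executable against a list of approved binaries, typically identified by cryptographic hash rather than by path or name. The following is a generic sketch of the idea, not Thycotic's or Arellia's actual mechanism.

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Hash the binary's contents so renamed or replaced files are detected."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_execution_allowed(path, allowlist_hashes):
    """Permit execution only if the binary's content hash is on the allowlist."""
    return file_sha256(path) in allowlist_hashes

# Demo with a temporary file standing in for an approved executable.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"#!/bin/sh\necho approved tool\n")
    approved = f.name
allowlist = {file_sha256(approved)}
assert is_execution_allowed(approved, allowlist)
os.unlink(approved)
```

Hashing content rather than matching paths matters because a blacklisted tool copied under an approved name would otherwise slip through.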
With this acquisition, another vendor is combining Privilege Management and Application Control, after CyberArk’s acquisition of Viewfinity some months ago. While it might be too early to name this a trend, there is logic in extending Privilege Management beyond the account management or session management aspect. Protecting not only the access to privileged accounts, but furthermore limiting and controlling the use of such accounts had already become part of Privilege Management with Session Management capabilities, but also more commonly in Unix and Linux environments with restrictions for the use of shell commands. Thus, adding Application Control and other Endpoint Security features is just a logical step.
Our view on Privilege Management always has been beyond pure Shared Account Password Management. The current evolution towards integration with Application Control and other features fits in our broader view of protecting all accounts with elevated privileges at any time, both for access and use.
Sometime last autumn I started researching the field of Micro-Segmentation, particularly as a consequence of attending a Unisys analyst event and, subsequently, VMworld Europe. Unisys talked a lot about their Stealth product, while at VMworld there was much talk about the VMware NSX product and its capabilities, including security by Micro-Segmentation.
The basic idea of Datacenter Micro-Segmentation, the most common approach to Micro-Segmentation, is to split the network into small (micro) segments for particular workloads, based on virtual networks with additional capabilities such as integrated firewalls, access control enforcement, etc.
Using Micro-Segmentation, there might even be multiple segments for a particular workload, such as the web tier, the application tier, and the database tier. This allows security to be strengthened further by applying different access and firewall policies to the various segments. In virtualized environments, such segments can be created and managed easily, far better than in physical environments with a multitude of disparate elements from switches to firewalls.
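Conceptually, per-segment firewall policies reduce to an explicit allow-list of permitted flows between segments, with everything else denied by default. The tiers, ports and rules below are illustrative assumptions, not any vendor's policy model.

```python
# Allowed flows between micro-segments (everything else is denied by default).
ALLOWED_FLOWS = {
    ("internet", "web_tier"): {443},   # HTTPS from outside to the web tier
    ("web_tier", "app_tier"): {8080},  # app traffic from web to app tier
    ("app_tier", "db_tier"):  {5432},  # database access only from app tier
}

def flow_permitted(src_segment, dst_segment, port):
    """Default-deny: a flow passes only if explicitly allowed for that port."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

assert flow_permitted("web_tier", "app_tier", 8080)
assert not flow_permitted("internet", "db_tier", 5432)  # no direct DB access
assert not flow_permitted("web_tier", "app_tier", 22)   # port not allowed
```

Note that rules are directional: the web tier may open connections to the app tier, but not the other way round, which is part of what limits lateral movement after a breach.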
Obviously, by having small, well-protected segments with well-defined interfaces to other segments, security can be increased significantly. However, it is not only about datacenters.
The applications and services running in the datacenter are accessed by users. This might happen through fat-client applications or web interfaces; we also see a massive uptake in the use of APIs, both by client-side apps and by backend applications consuming and processing data from other backend services. In addition, there is a variety of services where data is stored or processed locally, starting with documents downloaded from backend systems.
Apparently, not everything can be protected perfectly well. Data accessed through browsers is out of control once it is at the client – unless the client can become a part of the secure environment as well.
Anyway, there are more options, particularly within organizations with good control of everything inside the perimeter and at least some level of control over the devices. Ideally, everything becomes protected across the entire business process, from the backend systems to the clients. Within that segmentation, other segments can exist, such as micro-segments at the backend. Such “Business Process Micro-Segmentation” does not stand in contrast to Datacenter Micro-Segmentation but extends that concept.
From my perspective, we will need two major extensions for moving beyond Datacenter Micro-Segmentation to Business Process Micro-Segmentation. One is encryption. While there is limited need for encryption within the datacenter due to the technical approach to network virtualization (still, don't consider your datacenter 100% safe!), the client resides outside the datacenter. The minimal approach is protecting the transport by means such as TLS. More advanced encryption is available in solutions such as Unisys Stealth.
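For the transport-protection baseline, Python's standard library already provides sensible defaults: `ssl.create_default_context()` enables certificate validation and hostname checking, and a minimum protocol version can be pinned explicitly. The TLS 1.2 floor below is an illustrative choice, not a recommendation from the text.

```python
import ssl

def make_client_context():
    """TLS client context with certificate and hostname verification on,
    pinned to TLS 1.2 or newer (an assumed, reasonable floor)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # peer certificate is validated
assert ctx.check_hostname is True            # hostname must match the cert
```

Such a context would then be passed to whatever client library opens the connection; the point is that transport security is a few lines of configuration, not a project of its own.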
The other area for extension is policy management. When looking at the entire business process —and not only the datacenter part — protecting the clients by integrating areas like endpoint security into the policy becomes mandatory.
Neither Business Process Micro-Segmentation nor Datacenter Micro-Segmentation will solve all of our Information Security challenges. Both are only building blocks within a comprehensive Information Security strategy. In my opinion, thinking beyond Datacenter Micro-Segmentation towards Business Process Micro-Segmentation is also a good example of the fact that there is not a “holy grail” for Information Security. Once organizations start sharing information with external parties beyond their perimeter, other technologies such as Information Rights Management – where documents are encrypted and distributed along with the access controls that are subsequently enforced by client-side applications – come into play.
While there is value in Datacenter Micro-Segmentation, it is clearly only a piece of a larger concept – in particular because the traditional perimeter no longer exists, which also makes it more difficult to define the segments within the datacenter. Once workloads are flexibly distributed between various datacenters in the Cloud and on-premises, pure Datacenter Micro-Segmentation reaches its limits anyway.