
Blog posts by Martin Kuppinger

Safety vs. security – or both?

Jul 07, 2015 by Martin Kuppinger

When it comes to OT (Operational Technology) security in all its facets, security people from the OT world and IT security professionals can quickly end up in strong disagreement. Depending on the language they speak, they may even appear to be divided by a common language: while the discussion in English quickly arrives at a perceived dichotomy between security and safety, in German it would be “Sicherheit vs. Sicherheit”.

The reason is that OT thinking traditionally – and for good reason – centers on the safety of humans, machines, and equipment. Other major requirements include availability and reliability. If the assembly line stops, it quickly becomes expensive. If reliability issues cause faulty products, that can also cost vast amounts of money.

The common IT security thinking, on the other hand, revolves around security – protecting systems and information and enforcing the CIA triad of confidentiality, integrity, and availability. Notably, even the shared requirement of availability is perceived slightly differently: IT is primarily interested in not losing data, while OT wants systems to be always up. Yes, IT also frequently has requirements such as 99.9% availability (which still permits nearly nine hours of downtime per year). However, sometimes this requirement is unfounded. While it really costs money if your assembly line is out of service, the impact of HR not working for a business day is fairly low.

While IT is keen on patching systems to fix known security issues, OT tends to focus on ensuring reliability and, in consequence, availability and safety. From that perspective, updates, patches, or even new hardware and software versions are a risk. That is the reason why OT frequently relies on rather old hardware and software. Furthermore, depending on the type of production, maintenance windows might be rare. In areas with continuous production, there is no way of quickly patching and “rebooting”.

Unfortunately, with smart manufacturing and the increased integration of OT environments with IT, the risk exposure is changing. Furthermore, OT environments have long since become attack targets, and information about such systems is widely available, for instance via the Shodan search engine. The problem: the longer software remains unpatched, the bigger the risk. Simply said, the former concept of focusing purely on safety (plus reliability and availability) no longer works in connected OT. On the other hand, the IT thinking does not work either – many of us have experienced problems and downtime due to erroneous patches.

There is no simple answer, aside from the fact that OT and IT must work hand in hand. Cynically put, it is not about “death by patch vs. death by attacker”, but about avoiding death altogether. From my perspective, the CISO must be responsible for both OT and IT – split responsibilities, ignorance, and stubbornness do not help us mitigate risks. Layered security, virtualizing existing OT and exposing it as standardized devices with standardized interfaces, appears to be a valid approach, potentially leading the way towards SDOT (Software-defined OT). Aside from that, providers of OT must rethink their approaches, enabling updates even within small maintenance windows or at runtime while preserving stable and reliable environments. That is not easy to do, but it is a prerequisite for moving towards smart manufacturing or Industry 4.0.

One thing is clear to me: both parties can learn from each other – to the benefit of all.



The business case for user empowerment

Jun 09, 2015 by Martin Kuppinger

At the end of the day, every good idea stands and falls with its business model. If there is no working business model, even the best idea will fail. Some ideas reappear at a later time and succeed then. Take tablets: I used a Windows tablet back in the days of Windows XP, way before the Apple iPad arrived. But it obviously was too early for widespread adoption (and yes, it was a different concept than the iPad, but one that is quite popular again these days).

So, when talking about user empowerment, the first question must be: is there a business case? I believe there is, more than ever before. When talking about user empowerment, we are talking about enabling users to control their data. Looking at the fundamental concept we first outlined back in 2012 as Life Management Platforms (an updated version is available, dating from late 2013), this includes the ability to share data with other parties in a controlled way. It is furthermore built on the idea of having a centralized repository for personal information – at least logically centralized; physically it might be distributed.

Such a centralized data store simplifies the management of personal information, from scanned contracts to health data collected via one of those popular activity-tracking wristbands. Furthermore, users can manage their preferences, policies, etc. in a single location.

It therefore seems to make a lot of sense, for instance, for health insurance companies to support the concept of Life Management Platforms.

However, there might be a counterargument: the health insurance company wants full control of the data. But is this really in conflict with supporting user empowerment and concepts such as Life Management Platforms? Honestly, I believe there is a better business case for health insurance companies in supporting user empowerment. Why?

  1. They get rid of the discussion about what should happen to, for example, the data collected with an activity tracker once a customer moves to another health insurance company – the user can simply revoke access to that information (OK, the health insurer will still have to take care of its copies, but that is easier to solve, particularly with advanced approaches).
  2. The customer might still allow access to a pseudonymised version of that data – he (or she) might even do so without being a customer at all, which would allow health insurance companies to gain access to more statistical information, helping them better shape rates and contracts. There might even be a centralized statistical service for the industry, collecting data across all health insurance companies.
  3. The most compelling argument, from my perspective, is another one: it is quite simple to connect to a Life Management Platform. Supporting a variety of activity trackers and convincing customers that they must rely on a specific one is not the best approach. Simply connecting to a service that provides health data in a standardized way is simpler and cheaper. And customers can use the activity tracker they want or already have – if they want to share the data and benefit from better rates.

User empowerment does not stand in stark contrast to the business model of most organizations. It is only in conflict with the business models of companies such as Facebook, Google, etc. In many cases, however, organizations such as retailers and insurance companies do not really benefit from relying on the data these companies collect – they pay for it, and they might even pay twice by unwillingly contributing information that is then sold to the competition.

For most organizations, supporting user empowerment means simplified access to information and less friction from privacy discussions. Yes, users can revoke access – but companies can also build far better relationships with their customers and thus minimize that risk. There are compelling business cases today. And, in contrast to 2012, the world appears to be ready for solutions that support user empowerment.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Consent – Context – Consequence

May 21, 2015 by Martin Kuppinger

Consent and context: they are about to change the way we do IT. This is not only about security, where context is already of growing relevance. It is about the way we have to construct most applications and services, particularly those dealing with consumer-related data and PII in the broadest sense. Consent and context have consequences. Applications must be constructed in such a way that these consequences can be acted upon.

Imagine the EU comes up with tighter privacy regulations in the near future. Imagine you are a service provider or an organization dealing with customers in various locations. Imagine your customers being more willing to share data – to consent to sharing – when they remain in control of that data. Imagine that what telcos must already do in at least some EU countries – handing over customer data to other telcos and “forgetting” large parts of that data rapidly – becomes mandatory for other industries and countries.

There are many scenarios in which regulatory changes or changing customer expectations mandate changes in applications. Consent (and regulation) increasingly controls application behavior.

On the other hand, there is context. Mitigating risk is tightly connected to understanding the user's context and acting accordingly. The days of black-and-white security are past. Depending on the context, an authenticated user might be authorized to do more or less.

Simply said: consent and context have – indeed must have – consequences for application behavior. Thus, application design (and this includes cloud services) must take consent and context into account. Consent is about following the principles of Privacy by Design. An application designed for privacy can be opened up if users or regulations allow it. This is quite easy when done right – far easier than, for example, adapting an application to tightening privacy regulations. Context is about risk-based authentication and authorization or, in a broader view, APAM (Adaptive, Policy-based Access Management). Again, if an application is designed for adaptiveness, it can easily react to changing requirements. An application with static security is hard to change.
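
To make the adaptiveness point more concrete, here is a minimal sketch of what context-aware, risk-based authorization could look like. It is not an APAM reference implementation; the context attributes, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AccessContext:
    # Illustrative context signals; real deployments would evaluate many more.
    device_known: bool
    network_trusted: bool
    location_unusual: bool

def risk_score(ctx: AccessContext) -> int:
    """Compute a simple additive risk score from context signals (illustrative weights)."""
    score = 0
    if not ctx.device_known:
        score += 40
    if not ctx.network_trusted:
        score += 30
    if ctx.location_unusual:
        score += 30
    return score

def authorize(action: str, ctx: AccessContext) -> str:
    """Map the risk score to a decision: allow, require step-up authentication, or deny."""
    score = risk_score(ctx)
    if score < 30:
        return "allow"      # low risk: grant the requested action
    if score < 70:
        return "step-up"    # medium risk: require stronger authentication first
    return "deny"           # high risk: refuse, regardless of static entitlements

# An authenticated user on an unknown device from an unusual location is refused.
print(authorize("view_contract", AccessContext(False, True, True)))

An application built around such a decision point can tighten or relax its behavior by changing policy data rather than code – which is exactly the adaptiveness argued for above.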

Understanding consent, context, and consequences can save organizations – software companies, cloud service providers, and any organization developing its own software – a lot of money. And it is not only about cost savings, but about agility: flexible software makes a business more agile and more resilient to change, and it improves time-to-market.



100%, 80% or 0% security? Make the right choice!

May 19, 2015 by Martin Kuppinger

Recently, I have had a number of conversations with end-user organizations covering a variety of Information Security topics but all sharing the same theme: there is a need for certain security approaches, such as strong authentication on mobile devices or secure information sharing, but the project has been stopped due to security concerns – the strong authentication approach is not as secure as the one currently implemented for desktop systems, some information would need to be stored in the cloud, and so on.

That’s right, IT Security people stopped Information Security projects due to security concerns.

The result: There still is 0% security, because nothing has been done yet.

There is the argument that insecure is insecure: either something is perfectly secure or it is insecure. However, following that logic, everything is insecure. There are always ways to break security if you just invest enough criminal energy.

It is time to move away from our traditional black-and-white approach to security. It is not about being secure or insecure but, rather, about risk mitigation. Does a technology help in mitigating risk? Is it the best way to achieve that goal? Is it economically sensible (or even mandatory)?

When thinking in terms of risk, 80% security is obviously better than 0% security. 100% might be even better, but it can also be worse, because it is costly, cumbersome to use, etc.

It is time to stop IT security people from inhibiting improvements in security and risk mitigation by setting unrealistic security baselines. Start thinking in terms of risk. Then 80% security now, at fair cost, is usually better than 0% now or 100% at some point in the future.

Again: there never will be 100% security. We might achieve 99% or 98% (depending on the scale we use), but cost grows exponentially: as security approaches 100%, the cost tends towards infinity.
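
One way to picture this is a simple illustrative cost model – an assumption made purely for the sake of the argument, not an empirical formula. If s denotes the security level achieved (0 ≤ s < 1) and c_0 a base cost, one might write:

C(s) = \frac{c_0 \, s}{1 - s}, \qquad \lim_{s \to 1^{-}} C(s) = \infty

In this model, C(0.80) = 4\,c_0 while C(0.99) = 99\,c_0: pushing from 80% towards 99% multiplies the cost by roughly 25, and the last fraction of a percent is unattainable at any finite cost.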



Managing the relationships for the new ABC: Agile Business, Connected

May 12, 2015 by Martin Kuppinger

Over the past years, we have talked a lot about the Computing Troika of Cloud Computing, Mobile Computing, and Social Computing. We coined the term Identity Explosion to describe the exponential growth in the number of identities organizations have to deal with. We introduced the need for a new ABC: Agile Business, Connected. While agility is a key business requirement, connected organizations are a consequence both of the digital transformation of business and of mobility and the IoT.

In consequence, this rapid evolution means that we also have to transform our understanding of identities and access. We still see a lot of IAM projects focusing on employees. However, when looking at human identities, it is about employees, business partners, customers and consumers, leads, prospects, etc.

 
Fig. 1: People, organizations, devices, and things are becoming connected – organizations will have to deal with more identities and relations than ever before.

Even more, human identities are becoming only a fraction of the identities we have to deal with. People use devices that communicate with backend services. Things are becoming increasingly connected. Everything and everyone – whether a human, a device, a service, or a thing – has its own identity.

Relationships can become quite complex. A device might be used by multiple persons. A vehicle is not only connected to its driver and manufacturer, but also to many other parties such as insurance companies, leasing companies, the police, dealers, garages, occupants, other vehicles, etc. Not to speak of the fact that the vehicle itself consists of many things that frequently interact.
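
A minimal sketch of how such typed relationships between identities might be represented follows; the entities and relationship types are illustrative only, loosely based on the vehicle example above, and do not represent any particular product's data model.

from collections import defaultdict

class IdentityRelationshipGraph:
    """A tiny directed graph of identities (people, organizations, devices, things)
    connected by typed relationships."""

    def __init__(self):
        self.edges = defaultdict(list)  # identity -> list of (relationship, identity)

    def relate(self, subject: str, relationship: str, obj: str) -> None:
        self.edges[subject].append((relationship, obj))

    def related(self, subject: str, relationship: str) -> list:
        return [o for r, o in self.edges[subject] if r == relationship]

graph = IdentityRelationshipGraph()
graph.relate("vehicle:VIN123", "driven_by", "person:alice")
graph.relate("vehicle:VIN123", "manufactured_by", "org:car-maker")
graph.relate("vehicle:VIN123", "insured_by", "org:insurer")
graph.relate("vehicle:VIN123", "contains", "thing:tire-sensor-7")

# Which organization holds the insurance relationship for this vehicle?
print(graph.related("vehicle:VIN123", "insured_by"))

Even this toy example shows how quickly the number of relations grows once devices and things become first-class identities alongside people and organizations.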

Managing access to information requires new thinking around identities and access. Only that will enable us to manage and restrict access to information as needed. Simply said: Identity and Access Management is becoming bigger than ever before, and it is one of the essential foundations for making the IoT and the digital transformation of businesses work.

 
Fig. 2: APIs will increasingly connect everything and everyone – it becomes essential to understand the identity context in which APIs are used.

In this context, APIs (Application Programming Interfaces) play a vital role. While I don't like the term – it is far too technical – it is well established in the IT community. APIs – the communication interfaces of services, apps (on devices), and what we might call thinglets (on things) – play the main role in this new connected world. Humans interact with some services directly, via the browser. They use the UIs of their devices to access apps. And they might even actively interact with things, even though these commonly act autonomously.

But the communication then happens between apps, devices, and services, using these APIs. For managing access to information via services, devices, and things, we particularly need a good understanding of the relationships between them and the people and organizations involved. Without that, we will fail at managing information security in the new ABC.

Understanding and managing the relationships of a massive number of connected people, things, devices, and services is today's challenge for succeeding with the new ABC: Agile Business, Connected.

This article originally appeared in the KuppingerCole Analysts' View newsletter.



Redesigning access controls for IAM deployments?

Apr 20, 2015 by Martin Kuppinger

A few weeks ago, I read an article in Network World entitled “A common theme in identity and access management failure: lack of Active Directory optimization”. Essentially, it is about the fact that Microsoft Active Directory (AD) commonly needs some redesign when an IAM (Identity and Access Management) project is started. Maybe yes, maybe no.

In fact, it is common that immature, chaotic, or even “too mature” (e.g., many years of administrative work leaving their traces, with no one cleaning up) access control approaches in target systems pose a challenge when connecting those systems to an IAM (and Governance) system. However, there are two points to consider:

  1. This is not restricted to AD; it applies to any target system.
  2. It must not be allowed to lead to failures in your IAM deployment.

I have frequently seen this issue with SAP environments, unless they have already undergone restructuring, e.g. when implementing SAP Access Control (formerly SAP GRC Access Control). In fact, the more complex and the older a target system is, the more likely it is that the structure of its access controls – be they roles, groups, or whatever – is anything but perfect.

There is no doubt that a redesign of the security model is a must in such situations. The question is just when this should happen (as Jonathan Sander, the author of the article mentioned above, also states). In fact, if we waited for all these security models to be redesigned, we would probably never see an IAM program succeed. Some of these redesign projects take years – and some (think of mainframe environments) will probably never take place. Redesigning the security model of an AD or SAP environment is a quite complex project in itself, despite all the tools supporting it.

Thus, organizations typically have to decide on the order of projects. Should they push their IAM initiative or do the groundwork first? There is no single correct answer to that question. Frequently, IAM projects are under so much pressure that they have to run first.

However, this must not end in the nightmare of a failing project. The main success factor for dealing with these situations is a well-thought-out interface between the target systems and the IAM infrastructure for exposing entitlements from the target systems to IAM. At the IAM level, there must be a concept of roles (or at least a well-thought-out concept for grouping entitlements). And there must be a clear definition of what is exposed from target systems to the IAM system. That is quite easy for well-structured target systems, where, for instance, only global groups from AD or business roles from SAP might be exposed, becoming the smallest unit of entitlements within IAM. These might appear as “system roles” or “system-level roles” (or whatever term you choose) in IAM.

Without that ideal security model in the target systems, there might not be a single level of entitlements to expose to the IAM environment (and I am talking about requests, not about the detailed analysis as part of Entitlement & Access Governance, which might include lower levels of entitlements in the target systems). There are two ways to solve that issue:

  1. Define these entitlements, i.e. global groups, SAP business roles, etc., first as an additional element in the target system, map them to IAM, and start the redesign of the underlying infrastructure later on.
  2. Or accept the current structure and invest more in mapping system roles (or whatever term you use) to the higher levels of entitlements, such as IT-functional roles and business roles (not to be confused with SAP business roles), in your IAM environment.

Both approaches work, and, from my experience, if you understand the challenge and focus on the interface, you will quickly be able to identify the best way to execute your IAM program while still redesigning the security model of the target systems later on. In both cases, you will need a good understanding of the IAM-level security model (roles etc.), and you need to enforce this model rigidly – no exceptions.
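
As a rough illustration of that mapping layer, the sketch below resolves IAM-level business roles into per-system entitlements (AD groups, SAP roles). All system names, group names, and the mapping structure are made-up examples, not a product schema.

# Illustrative mapping of target-system entitlements ("system roles") to IAM-level roles.
# The system names and role names are hypothetical examples.
SYSTEM_ROLE_MAP = {
    "business_role:accounts_payable_clerk": [
        ("ActiveDirectory", "GG-Finance-AP-Users"),   # AD global group
        ("SAP_ERP", "Z_AP_INVOICE_PROCESSING"),       # SAP role
    ],
    "business_role:ap_team_lead": [
        ("ActiveDirectory", "GG-Finance-AP-Approvers"),
        ("SAP_ERP", "Z_AP_INVOICE_APPROVAL"),
    ],
}

def entitlements_for(business_roles):
    """Resolve assigned business roles into the per-system entitlements to provision."""
    result = {}
    for role in business_roles:
        for system, entitlement in SYSTEM_ROLE_MAP.get(role, []):
            result.setdefault(system, set()).add(entitlement)
    return result

# A user assigned one business role gets the corresponding AD group and SAP role provisioned.
print(entitlements_for(["business_role:accounts_payable_clerk"]))

Whether the target-system side of this table holds well-designed global groups or legacy structures awaiting redesign, the IAM layer above it stays stable – which is the point of investing in the interface.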



Data Security Intelligence – better understanding where your risks are

Apr 08, 2015 by Martin Kuppinger

Informatica, a leader in data management solutions, introduced a new solution to the market today. The product, named Secure@Source, also marks Informatica's move into the Data (and, in consequence, Database) Security market. Informatica already has solutions for data masking in place, which is one element of data security. However, data masking is only a small step, and it requires awareness of which data needs protection.

In contrast to traditional approaches to data security – Informatica talks about “data-centric security” – the company does not focus on technical approaches alone, e.g. encrypting databases or analyzing queries. As the name of the approach implies, the focus is on protecting the data itself.

The new solution builds on two pillars. One is Data Security Intelligence, which covers discovery, classification, proliferation analysis, and risk assessment for data held in a variety of data sources (not only in a particular database). The other is Data Security Controls, which as of now includes persistent and dynamic masking of data plus validation and auditing capabilities.

The target is reducing the risk of leakage, attacks, and other data-related incidents for structured data held in databases and big data stores. The approach is to understand where the data resides and to apply adequate controls, particularly masking of sensitive data. This is all policy-based and includes, for example, alerting capabilities. It can also integrate data security events from other sources and work together with external classification engines.
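
To illustrate the general idea of field-level data masking – not Informatica's implementation; the field names and masking rules below are purely illustrative – a policy can map sensitive attributes to masking functions that are applied before a record is exposed:

import re

def mask_iban(value: str) -> str:
    """Keep the country code and last four characters, hide the rest (illustrative rule)."""
    return value[:2] + "*" * max(len(value) - 6, 0) + value[-4:]

def mask_record(record: dict, policy: dict) -> dict:
    """Apply per-field masking functions defined in the policy; pass other fields through."""
    return {
        key: policy[key](value) if key in policy else value
        for key, value in record.items()
    }

policy = {"iban": mask_iban, "name": lambda v: re.sub(r"\S", "*", v)}
print(mask_record({"name": "Jane Doe", "iban": "DE02120300000000202051", "city": "Berlin"}, policy))

Persistent masking would write such masked values into a copy of the data, for instance for a test system, while dynamic masking applies them on the fly depending on who or what is requesting the record.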

These interfaces will also allow third parties to attach tokenization and encryption capabilities, along with other features. The solution will also support advanced correlation, for instance by integrating with an IAM solution and thus adding user context, or by integrating with DLP solutions to secure the endpoint.

Informatica’s entry into the Information Security market is, in our view, a logical consequence of where the company is already positioned. While the solution provides deep insight into where sensitive data resides – in other words, its source – and already offers a number of integrations, we would like to see a growing number of out-of-the-box integrations, for instance with security analytics, IAM, or encryption solutions. While there are integration points for partners, relying too much on partners for all these areas might make the solution quite complex for customers. Partner integration makes sense for IAM, but security analytics such as SIEM or RTSI (Real-time Security Intelligence) and other areas such as encryption might become integral parts of future releases of Informatica Secure@Source.

Anyway, Informatica is taking the right path for Information Security by focusing on security at the data level instead of focusing on devices, networks, etc. – the best protection is always at the source.



Facebook profile of the German Federal Government thwarts efforts to improve data protection

Mar 05, 2015 by Martin Kuppinger

There is a certain irony in the fact that the Federal Government launched a profile on Facebook almost simultaneously with the change of the social network's terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes to its terms of use, Facebook has massively expanded its ability to analyze user data. Data that users leave behind on pages outside of Facebook is also stored, for use in targeted advertising and possibly other purposes. On the other hand, users now have better options for managing their own privacy settings. The bottom line: Facebook is collecting even more data, in a manner that is hard to control.

As Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook, for example, indirectly receives information on the political interests and preferences of users. Since it is not clear how this information could be used today or in the future, it is a questionable step.

Considering Facebook's business model, the profile can also have a direct negative impact. Facebook's main source of income is targeted advertising based on the information the company has collected about its users. With the additional information that will be available via the Federal Government's Facebook profile, interest groups, for example, can in the future selectively advertise on Facebook to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated – these are not customers, not leads, and NOT voters, but at best people with a more or less vague interest. On the other hand, there is the information that a company, a government, a party, or anyone else with a Facebook profile discloses to Facebook: who is interested in my products, in my political positions (and which ones), or in my other statements on Facebook?

Facebook's business model is exactly that – monetizing this information – today more than ever before under the new terms of use. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company's (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.

Facebook may be "in" – but it is by no means worth it for every company, every government, every party, or any other organization.

End users should look closely at the new privacy settings and restrict them as much as possible if they intend to stay on Facebook. In the meantime, a lot of communication has moved to other services such as WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is added value in itself.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are by no means 50,000 voters – the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. Going to Facebook now is not even fashionable any more – it is simply the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should likewise analyze exactly what price they are paying for the anticipated added value of a Facebook profile – one often pays more while the actual benefit is much smaller. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.



Gemalto feels secure after attack - the rest of the world does not

Feb 25, 2015 by Martin Kuppinger

In today's press conference regarding last week's publications on a possible compromise of Gemalto SIM cards through the theft of keys, the company confirmed security incidents during the time frame mentioned in the original report. It is difficult to say, however, whether their other security products have been affected, since significant parts of the attack, especially in the really sensitive parts of their network, did not leave any substantial traces. Gemalto therefore concludes that there were no such attacks.

According to the information published last week, back in 2010 a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners. During the attack, they obtained the secret keys that are embedded into SIM cards at the hardware level. With these keys, it is possible to decrypt mobile phone calls as well as to create copies of the SIM cards and impersonate their users on mobile provider networks. Since Gemalto, according to its own statements, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and nonintrusive methods.

It is entirely possible that Gemalto is correct in stating that there is no evidence of such a theft. Too much time has passed since the attack, and a significant part of the logs from the affected network components and servers, which would be needed to analyze such a complex attack, has probably already been deleted. Still, this attack, just like the theft of the so-called “seeds” from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: "Digital security is not static. Today's state of the art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis". In other words, one can expect that the attacks were at least partially successful – not necessarily against Gemalto itself, but against its customers and other SIM card manufacturers. There is no reason to believe that new technologies are secure. According to the company's spokesperson, Gemalto is constantly facing attacks, and the outer layers of its protection have been repeatedly breached. Even if Gemalto does maintain a very high standard of security, the constant risk of new attack vectors and stronger attackers should not be underestimated.

Unfortunately, no concrete details were given during the press conference about which changes to their security practices are already in place and which are planned, other than a statement regarding continuous improvement of these practices. However, until the very concept of a “universal key” – in this case, the encryption key on a SIM card – is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk to the secure part of its infrastructure to be low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the fact that there were no traces of attacks does not mean that there were no attacks.

Gemalto has also repeatedly pointed out that the attack affected only 2G SIMs. There is, however, no reason to believe that 3G and 4G networks are necessarily safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Sure, they are talking about “rare exceptions”, but it nevertheless means that unsecured channels still exist.

The incident at Gemalto has once again demonstrated that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as the privacy of correspondence and telecommunications, but to the economy as well. The reputation of companies like Gemalto, and thus their business success and enterprise value, are at risk from such actions.

Even more problematic is that the knowledge of other attackers grows with each newly published attack vector. Stuxnet and Flame have long since been thoroughly analyzed. It can be assumed that the intelligence agencies of North Korea, Iran, and China, as well as criminal groups, studied them long ago. This can be compared to the leaking of atomic bomb designs, with one notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies to procure zero-day exploits in order to carry out investigations of suspects' computers. Zero-day attacks are called that because code exploiting a newly discovered vulnerability is available before the vendor even becomes aware of the problem – the vendor literally has zero days to fix it. In practice, this means that attackers are able to exploit a vulnerability long before anyone else discovers it. Now, if government agencies keep the knowledge of such vulnerabilities to themselves in order to create their own malware, they are putting the public and businesses in great danger, because one can safely assume that they will not be the only ones with that knowledge. After all, why would sellers of such information make their sale only once?

With all due respect for the need of states and their intelligence agencies to respond to the threat of cyber-crime, two potential problems stemming from this approach have to be considered. On the one hand, it requires defined state control over this monitoring, especially in light of the government's new capability of nationwide mobile network monitoring in addition to the already available Internet monitoring. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening a Pandora's box and causing damage of unprecedented scale.



Operational Technology: Safety vs. Security – or Safety and Security?

Feb 24, 2015 by Martin Kuppinger

In recent years, the area of “Operational Technology” – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of businesses, as well as of concepts like the connected (or even hyper-connected) enterprise or “Industry 4.0”, which describes a connected and dynamic production environment. “Industry 4.0” environments must be able to react to customer requirements and other changes by being better connected. More connectivity is also emerging between industrial networks and the Internet of Things (IoT) – just think of smart meters that control local power production feeding into large power networks.

However, when Information Security people start talking about OT security, there might be a gap in common understanding. Different terms and different requirements might collide. While traditional Information Security focuses on security – integrity, confidentiality, and availability – OT's primary focus is on aspects such as safety and reliability.

Let's just pick two terms: safety and security. Safety is not the same as security. Safety in OT is understood in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical: safety is defined there as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. In the full definitions, however, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

It is a good idea to work on a common understanding of terms first when people from OT security and IT security start talking. For decades, they pursued their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – which is best illustrated by comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates, etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. In this regard, many people on both sides consider this a fundamental divide between OT and IT: the “safety vs. security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even one as simple as an antivirus update) can introduce enough risk of disrupting an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. In the long term, however, not making necessary changes may increase the risk of deliberate disruption by a hacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. Another, much more recent, event occurred last year in Germany, where hackers used malware to gain access to the control system of a steel mill, which they then disrupted to such a degree that it could not be shut down properly, causing massive physical damage (but, thankfully, no injuries or deaths).

Looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even the IoT, this conflict between safety and security is not an exception; every enterprise is bound to face it sooner or later. There is no simple answer to this problem, but clearly we have to find solutions, and IT and OT experts must collaborate much more closely than they (reluctantly) do today.

One possible option is limiting access to connected technology, for instance by defining it as a one-way road that lets information flow out of the industrial network but establishes an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.
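
In practice, such one-way flows are typically enforced with dedicated hardware (data diodes) rather than software, but a minimal sketch can illustrate the idea: the OT-side relay below only ever sends telemetry towards the IT network and contains no code path for receiving commands back. The collector address and the sensor reading are illustrative assumptions.

import json
import socket
import time

IT_COLLECTOR = ("10.0.0.10", 5140)  # hypothetical address of an IT-side collector

def read_sensor() -> dict:
    """Placeholder for reading a value from an OT device (no real driver involved)."""
    return {"sensor": "furnace-temp-1", "value": 1472.5, "ts": time.time()}

def relay_forever() -> None:
    # Fire-and-forget datagrams: the process never opens a listening socket,
    # so nothing on the IT side can push changes back into the industrial network.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(json.dumps(read_sensor()).encode(), IT_COLLECTOR)
        time.sleep(5)

if __name__ == "__main__":
    relay_forever()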

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.



