Shodan is a computer search engine. They call themselves the “world’s first search engine for Internet-connected devices”, including buildings, refrigerators, power plants, the Internet of Things (IoT), webcams, and whatever else you can imagine. Shodan isn’t new. The search engine has been online for several years now. The only new thing is the change in the URL from www.shodanhq.com to www.shodan.io.
When talking about the challenges we are facing in the IoT and in Smart Manufacturing, I commonly bring up Shodan as an example of what is visible today in this hyper-connected world. Interestingly, most CIOs and other Information Security Professionals, not to mention the rest of the world, are unaware of the fact that such a website exists.
Just the fact that there is such a search engine around is scary. It allows searching for everything that is connected to the Internet. It even allows downloading results, creating reports, or using that information in other ways. Running automated attacks based on search results is just one example, even though there clearly are “good” use cases as well.
What is even scarier, though, are the results a simple query such as
“default password” country:de
will show. Just run such a query; it proves that reality is worse than your worst nightmares. When I ran it today, it delivered 664 results containing default passwords for a variety of systems. Even though you could argue that some of these are no longer current, quite a number of these passwords will still do their job.
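For illustration, such a search can also be scripted. The sketch below builds the query from the text and, as an assumption, uses the official shodan Python library for the actual call; “SHODAN_API_KEY” is a placeholder you would have to replace with a real key:

```python
# Sketch of querying Shodan programmatically. The query mirrors the one
# discussed in the text; run_search() relies on the official "shodan"
# Python library (pip install shodan) and a valid API key.
def build_query(terms, **filters):
    """Combine free-text terms with Shodan filters such as country: or port:."""
    parts = ['"%s"' % t if " " in t else t for t in terms]
    parts += ["%s:%s" % (k, v) for k, v in sorted(filters.items())]
    return " ".join(parts)

query = build_query(["default password"], country="de")
# -> '"default password" country:de'

def run_search(api_key, query):
    import shodan                         # third-party: pip install shodan
    api = shodan.Shodan(api_key)
    results = api.search(query)           # first page of results
    print("Total results:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match["port"], match["data"][:60])

# run_search("SHODAN_API_KEY", query)     # requires a valid API key
```

Even this handful of lines shows how trivially the search results can be fed into automation – which is exactly why exposed default credentials are so dangerous.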
The important lesson to learn from the fact that there is Shodan (and that there are others around) is to do the best job you can do on security. Understand your potential attackers, know which devices expose themselves on the Internet (and stop the ones that don’t need to from doing so), avoid standard usernames and passwords, change passwords regularly, harden your systems, etc. At least follow the standard best practices for security. And clearly, “security by obscurity” is not the best, not a good, not even an acceptable practice – it never worked and clearly will not in the age of computer search engines.
Furthermore, when providing connected things or moving towards smart manufacturing, first understand that all these connected things will be visible to the Internet. Thus, they can be attacked. Security must not be an afterthought in IoT and Smart Manufacturing, because the attackers already are waiting for you to connect more things or even entire plants.
The recently discovered remote hack vulnerability of Fiat Chrysler Jeep cars, based on their Uconnect functionality, puts a spotlight on the miserable state of connected vehicle security these days. Another recently published article in a German newspaper not only identified a gap in functionality but also illustrated how German automotive vendors and suppliers in particular implement (or plan to implement) security in their connected vehicles.
While the U.S. has introduced the Spy Car Act (Security and Privacy in Your Car Act) which is about defining industrywide benchmarks and standards for security and privacy in connected vehicles and forces the industry to collaborate, similar legislation is still lacking in the EU.
The automotive industry currently is in a rush to roll out new smart and digital features (or whatever they perceive as being smart), emulating many other industries facing the need for joining the Digital Transformation. Unfortunately, security is an afterthought, as recent incidents as well as the current trends within the industry indicate.
Ironically, the lack of well thought-out security and privacy features is already becoming an inhibitor for the industry. While the cost of sending out USB sticks with a patch is still considerably low (and the approach is impressively insecure), the cost of calling back 1.4 million cars to the garages is significant, even without speaking of the indirect cost of reputation loss or, if something really goes wrong, the liability issues.
But that is only one part of the problem. The lack of Security by Design and Privacy by Design is also becoming an inhibitor for the Digital Transformation. An essential element of the Digital Transformation is the change of business models, including rapid innovation and (ever-changing) partnerships.
A simple example that illustrates the limitations caused by the lack of security and privacy by design is the black box EDR (Event Data Recorder), which is becoming increasingly common and increasingly mandated by legislation. Both automotive vendors and insurance companies are interested in “owning” the data held in such devices. While I will come to the complexity of dealing with data access demands and requirements of various parties later in this post, it is obviously impossible to easily solve this conflict with technology that e.g. relies only on a single key for accessing that data. Modern concepts for security and privacy would minimize such conflicts by allowing various parties to have defined and controlled access to the information they are entitled to access.
Cynically said: automotive vendors are rushing to roll out new features to succeed in the Digital Transformation, but by failing to do it right, with Security by Design and Privacy by Design, they are struggling with exactly the same transformation. Neither security nor privacy can be an afterthought for succeeding in the Digital Transformation.
From my perspective, there are five essentials the automotive industry must follow to succeed with both the connected vehicle and, in its concept, the Digital Transformation:
- Security by Design and Privacy by Design must become essential principles that any developer follows. A well-designed system can be opened up, but a weakly designed system never can be shut down. Simply said: security and privacy by design are not inhibitors, but enablers, because these allow flexible configuration of the vehicles for ever-changing business models and regulations.
- Modern hardened implementations of technology are required. Relying on a single key for accessing information of a component in the vehicle or other security concepts dating back decades aren’t adequate anymore for today’s requirements.
- Identities and Access Control must become key elements in these new security concepts. Just look at the many things, organizations, and humans around the connected vehicle. There are entertainment systems, engine control, EDR systems, gear control, and many other components. There is the manufacturer, the leasing company, the police in various countries, the insurance company, the garage, the dealer, and many other organizations. There is the driver, the co-driver, the passengers, the owner, etc. Various parties might access some information in certain systems but might not be entitled to do so in others. Some might only see parts of the EDR data at all times, while others might be entitled to see all of that information after specific incidents. Without a concept for identities, their relations, and managing their access – e.g. for security and privacy by design – there are too many inhibitors to supporting change in business models and regulations. From my perspective, it is worth spending some time looking at the concept of Life Management Platforms in that context. These concepts, and standards such as UMA (User Managed Access), are the foundation for better, future-proof security in connected vehicles.
- Standards are another obvious element. It is ridiculous to assume that such complex ecosystems with manufacturers, suppliers, governmental agencies, customers, consumers, etc. can be supported with proprietary concepts.
- Finally, it is about solving the patch and update issues. Providing updates by USB stick is as inept as calling the cars back to the garages every “Patch Tuesday”. There is a need for a secure approach to regular as well as emergency patches and updates, which must become part of the concept. Again, there is a need for standards, given that every car today consists of (connected) components from a number of suppliers.
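To make the third point more concrete, here is a minimal sketch of such defined, party-specific access to EDR data. The parties, fields, and rules are hypothetical illustrations only; a real implementation would build on standards such as UMA rather than on a hard-coded table:

```python
# Minimal, hypothetical sketch of policy-based access to EDR data for
# multiple parties. Party names and rules are illustrative only.
EDR_POLICIES = {
    # party: (fields visible at all times, fields visible after an incident)
    "driver":  ({"speed", "location"}, {"speed", "location", "impact_forces"}),
    "insurer": ({"mileage"},           {"speed", "impact_forces", "mileage"}),
    "police":  (set(),                 {"speed", "location", "impact_forces"}),
}

def visible_fields(party, incident_reported=False):
    """Return which EDR fields the given party may currently access."""
    always, after_incident = EDR_POLICIES.get(party, (set(), set()))
    return after_incident if incident_reported else always

# The insurer sees only mileage day to day, but more after a crash:
assert visible_fields("insurer") == {"mileage"}
assert "impact_forces" in visible_fields("insurer", incident_reported=True)
```

The point of the sketch is the structure, not the specific rules: once access is expressed as policies per identity, changing business models or regulations means changing policies, not re-engineering the vehicle.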
Notably, all these points apply to virtually all other areas of IoT (Internet of Things) and Smart Manufacturing. Security must not be an afterthought anymore. The risk for all of us is far too high – and, as mentioned above, done right, security and privacy by design enable rapidly switching to new business models and complying with new regulations, while old school “security” approaches don’t.
When it comes to OT (Operational Technology) security in all its facets, security people from the OT world and IT security professionals can quickly end up in strong disagreement. Depending on the language they are speaking, it might even appear that they are divided by a common language. While the discussion in English quickly ends up with a perceived dichotomy between security and safety, in German, for example, it would be “Sicherheit vs. Sicherheit”.
The reason for that is that OT thinking traditionally – and for good reason – is about safety of humans, machines, etc. Other major requirements include availability and reliability. If the assembly line stops, this can quickly become expensive. If reliability issues cause faulty products, it also can cost vast amounts of money.
On the other hand, common IT security thinking revolves around security – protecting systems and information and enforcing CIA: confidentiality, integrity, and availability. Notably, even the perception of the shared requirement of availability differs slightly: IT is primarily interested in not losing data, while OT wants systems to be always up. Yes, IT also frequently has requirements such as 99.9% availability. However, sometimes this is an unfounded requirement. While it really costs money if your assembly line is out of service, the impact of HR not working for a business day is pretty low.
While IT is keen on patching systems to fix known security issues, OT tends to be keen on enforcing reliability and, in consequence, availability and safety. From that perspective, updates, patches, or even new hardware and software versions are a risk. That is the reason OT frequently relies on rather old hardware and software. Furthermore, depending on the type of production, maintenance windows might be rare. In areas with continuous production, there is no way of quickly patching and “rebooting”.
Unfortunately, with smart manufacturing and the increased integration of OT environments with IT, the risk exposure is changing. Furthermore, OT environments have been attack targets for quite a long time. Information about such systems is widely available, for instance via the Shodan search engine. The problem: the longer software remains unpatched, the bigger the risk. Simply said: the former concept of focusing purely on safety (and reliability and availability) no longer works in connected OT. On the other hand, the IT thinking does not work either. Many of us have experienced problems and downtimes due to erroneous patches.
There is no simple answer, aside from the fact that OT and IT must work hand in hand. Cynically said, it is not about “death by patch vs. death by attacker”, but about avoiding death at all. From my perspective, the CISO must be responsible for both OT and IT – split responsibilities, ignorance, and stubbornness do not help us in mitigating risks. Layered security, virtualizing existing OT and exposing it as standardized devices with standardized interfaces, appears to be a valid approach, potentially leading the way towards SDOT (Software-defined OT). Aside from that, providers of OT must rethink their approaches, enabling updates even with small maintenance windows or at runtime, while still ensuring stable and reliable environments. Not easy to do, but a prerequisite when moving towards smart manufacturing or Industry 4.0.
One thing to me is clear: Both parties can learn from each other – to the benefit of all.
At the end of the day, every good idea stands or falls with its business model. If there is no working business model, even the best idea will fail. Some ideas reappear at a later time and are successful then. Take tablets: I used a Windows tablet back in the days of Windows XP, way before the Apple iPad arrived. But it obviously was too early for widespread adoption (and yes, it was a different concept than the iPad, but one that is quite popular again these days).
So, when talking about user empowerment, the first question must be: Is there a business case? I believe there is, more than ever before. When talking about user empowerment, we are talking about enabling users to control their data. Looking at the fundamental concept we initially outlined back in 2012 as Life Management Platforms (an updated version is available, dating from late 2013), this includes the ability to share data with other parties in a controlled way. It furthermore is built on the idea of having a centralized repository for personal information – at least logically centralized; physically, it might be distributed.
Such a centralized data store simplifies the management of personal information, from scanned contracts to health data collected via one of these popular activity-tracking wristbands. Furthermore, users can manage their preferences, policies, etc. in a single location.
Thus, it seems to make a lot of sense for instance for health insurance companies to support the concept of Life Management Platforms.
However, there might be a counterargument: The health insurance company wants to get a full grip on the data. But is this really in conflict with supporting user empowerment and concepts such as Life Management Platforms? Honestly, I believe that there is a better business case for Health Insurance Companies in supporting user empowerment. Why?
- They get rid of the discussion about what should happen, for example, to the data collected with an activity tracker once a customer moves to another health insurance company – the user can simply revoke access to that information (OK, the health insurance company will still have to take care of its copies, but that is easier to solve, particularly with advanced approaches).
- However, the customer still might allow access to a pseudonymized version of that data – he (or she) might even do so without being a customer at all, which would allow health insurance companies to gain access to more statistical information, allowing them to better shape rates and contracts. There might even be a centralized statistical service for the industry, collecting data across all health insurance companies.
- Anyway, the most compelling argument from my perspective is another one: it is quite simple to connect to a Life Management Platform. Supporting a variety of activity trackers and convincing customers that they must rely on a specific one isn’t the best approach. Just connecting to a service that provides health data in a standardized way is simpler and cheaper. And customers can use the activity tracker they want or already own – if they want to share the data and benefit from better rates.
User empowerment does not stand in stark contrast to the business model of most organizations. It is only in conflict with the business models of companies such as Facebook, Google, etc. However, in many cases, the organizations such as retailers, insurance companies, etc. do not really benefit from relying on the data these companies collect – they pay for it and they might even pay twice through unwillingly collecting information that is then sold to the competition.
For most organizations, supporting user empowerment means simplified access to information and less friction from privacy discussions. Yes, the users can revoke access – but companies also might build far better relationships with customers and thus minimize that risk. There are compelling business cases today. And, in contrast to 2012, the world appears to be ready for solutions that foster user empowerment.
This article has originally appeared in the KuppingerCole Analysts' View newsletter.
Consent and Context: They are about to change the way we do IT. This is not only about security, where context is already of growing relevance. It is about the way we have to construct most applications and services, particularly the ones dealing with consumer-related data and PII in the broadest sense. Consent and context have consequences. Applications must be constructed such that these consequences can be handled.
Imagine the EU comes up with tighter privacy regulations in the near future. Imagine you are a service provider or an organization dealing with customers in various locations. Imagine your customers being more willing to share data – to consent to sharing – when they remain in control of that data. Imagine that what Telcos in at least some EU countries must already do – handing over customer data to other Telcos and rapidly “forgetting” large parts of that data – becomes mandatory for other industries and countries.
There are many different scenarios where regulatory changes or changing expectations of customers mandate changes in applications. Consent (and regulations) increasingly control application behavior.
On the other hand, there is context. Mitigating risks is tightly connected to understanding the user context and acting accordingly. The days of black and white security are past. Depending on the context, an authenticated user might be authorized to do more or less.
Simply said: Consent and context have – must mandatorily have – consequences in application behavior. Thus, application (and this includes cloud services) design must take consent and context into account. Consent is about following the principles of Privacy by Design. An application designed for privacy can be opened up if the users or regulations allow. This is quite easy, when done right. Far easier than, for example, adapting an application to tightening privacy regulations. Context is about risk-based authentication and authorization or, in a broader view, APAM (Adaptive, Policy-based Access Management). Again, if an application is designed for adaptiveness, it easily can react to changing requirements. An application with static security is hard to change.
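As a rough illustration of such adaptive behavior, the following sketch derives a risk score from the user’s context and lets that score decide between allowing access, stepping up authentication, and denying. All weights and thresholds are invented for illustration and would be policy-driven in a real APAM implementation:

```python
# Hypothetical sketch of context-based ("adaptive") access decisions:
# the same authenticated user is allowed more or less depending on the
# risk derived from the context. Weights and thresholds are made up.
def risk_score(context):
    score = 0
    if context.get("network") == "untrusted":
        score += 40
    if context.get("device") == "unmanaged":
        score += 30
    if context.get("location_unusual"):
        score += 30
    return score

def decide(requested_sensitivity, context):
    """Return 'allow', 'step_up' (require stronger auth), or 'deny'."""
    total = risk_score(context) + requested_sensitivity
    if total < 50:
        return "allow"
    if total < 90:
        return "step_up"
    return "deny"

# A low-risk office session reads data freely; the same user on an
# unmanaged device over an untrusted network must re-authenticate:
assert decide(10, {"network": "trusted", "device": "managed"}) == "allow"
assert decide(10, {"network": "untrusted", "device": "unmanaged"}) == "step_up"
```

An application designed this way reacts to new requirements by adjusting policies, while an application with static, black-and-white security would need to be rebuilt.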
Understanding Consent, Context, and Consequences can save organizations – software companies, cloud service providers, and any organization developing its own software – a lot of money. And it’s not only about cost savings, but about agility – flexible software makes business more agile and resilient to change, and shortens time-to-market.
Recently, I have had a number of conversations with end user organizations, covering a variety of Information Security topics but all having the same theme: There is a need for certain security approaches such as strong authentication on mobile devices, secure information sharing, etc. But the project has been stopped due to security concerns: The strong authentication approach is not as secure as the one currently implemented for desktop systems; some information needs to be stored in the cloud; etc.
That’s right, IT Security people stopped Information Security projects due to security concerns.
The result: There still is 0% security, because nothing has been done yet.
There is the argument that insecure is insecure: either something is perfectly secure, or it is insecure. However, following that path, everything is insecure. There are always ways to break security if you just invest sufficient criminal energy.
It is time to move away from our traditional black-and-white approach to security. It is not about being secure or insecure, but, rather, about risk mitigation. Does a technology help in mitigating risk? Is it the best way to achieve that target? Is it a good economic (or mandatory) approach?
When thinking in terms of risk, 80% security is obviously better than 0% security. 100% might be even better, but also worse, because it’s costly, cumbersome to use, etc.
It is time to stop IT security people from inhibiting improvements in security and risk mitigation by setting unrealistic security baselines. Start thinking in terms of risk. Then, 80% security now at a fair cost is commonly better than 0% now or 100% sometime in the future.
Again: There never ever will be 100% security. We might achieve 99% or 98% (depending on the scale we use), but cost grows exponentially, approaching infinity as security approaches 100%.
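A back-of-the-envelope sketch of that cost argument, assuming (purely for illustration) that the cost of reaching security level s grows like 1/(1−s):

```python
# Illustration of the diverging-cost argument: under the (invented)
# assumption that reaching security level s costs proportionally to
# 1/(1 - s), cost explodes as s approaches 100%.
def relative_cost(security_level, base=1.0):
    assert 0 <= security_level < 1.0, "100% security is unreachable"
    return base / (1.0 - security_level)

for s in (0.80, 0.98, 0.99, 0.999):
    print("%.1f%% security -> relative cost %.0fx" % (s * 100, relative_cost(s)))
# 80% costs 5x the base effort, 99% already 100x, 99.9% a thousandfold.
```

Whatever the exact curve looks like in practice, the shape is the point: the last few percent of security consume the overwhelming share of the budget, which is exactly why risk-based thinking beats the all-or-nothing view.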
Over the past years, we talked a lot about the Computing Troika of Cloud Computing, Mobile Computing, and Social Computing. We coined the term “Identity Explosion”, depicting the exponential growth of identities organizations have to deal with. We introduced the need for a new ABC: Agile Business, Connected. While agility is a key business requirement, connected organizations are a consequence of both the digital transformation of business and of mobility and IoT.
This rapid evolution in consequence means that we also have to transform our understanding of identities and access. We still see a lot of IAM projects focusing on employees. However, it is about employees, business partners, customers and consumers, leads, prospects etc. when we look at human identities.
Fig. 1: People, organizations, devices, and things are becoming connected – organizations will have to deal with more identities and relations than ever before.
Even more, human identities are becoming only a fraction of the identities we have to deal with. People use devices, which communicate with backend services. Things are becoming increasingly connected. Everything and everyone, whether a human, a device, a service, or a thing, has its own identity.
Relationships can become quite complex. A device might be used by multiple persons. A vehicle is connected not only to the driver or manufacturer but to many other parties such as insurance companies, leasing companies, police, dealers, garages, occupants, other vehicles, etc. Not to speak of the fact that the vehicle itself consists of many things that frequently interact.
Managing access to information requires a new thinking around identities and access. Only that will enable us to manage and restrict access to information as needed. Simply said: Identity and Access Management is becoming bigger than ever before and it is one of the essential foundations to make the IoT and the digital transformation of businesses work.
Fig. 2: APIs will increasingly connect everything and everyone – it becomes essential to understand the identity context in which APIs are used.
In this context, APIs (Application Programming Interfaces) play a vital role. While I don’t like the term, it being far too technical, it is well established in the IT community. APIs – the communication interfaces of services, apps (on devices), and what we might call thinglets (on things) – play the main role in this new connected world. Humans interact with some services directly, via the browser. They use the UI of their devices to access apps. And they might even actively interact with things, even though these commonly act autonomously.
But the communication then happens between apps, devices, and services, using these APIs. For managing access to information via services, devices, and things, we particularly need a good understanding of the relationships between them and the people and organizations involved. Without that, we will fail at managing information security in the new ABC.
Understanding and managing relations of a massive number of connected people, things, devices, and services is today’s challenge for succeeding with the new ABC: Agile Businesses, Connected.
A few weeks ago I read an article in Network World, entitled “A common theme in identity and access management failure: lack of Active Directory optimization”. Essentially, it is about the fact that Microsoft Active Directory (AD) commonly needs some redesign when starting an IAM (Identity and Access Management) project. Maybe yes, and maybe no.
In fact, it is common that immature, chaotic, or even “too mature” (e.g. many years of administrative work leaving their traces with no one cleaning up) access control approaches in target systems impose a challenge when connecting them to an IAM (and Governance) system. However, there are two points to consider:
- This is not restricted to AD, it applies to any target system.
- It must not be allowed to lead to failures in your IAM deployment.
I have frequently seen this issue with SAP environments, unless they already have undergone a restructuring, e.g. when implementing SAP Access Control (formerly SAP GRC Access Control). In fact, the more complex the target system and the older it is, the more likely it is that the structure of access controls, be they roles or groups or whatever, is anything but perfect.
There is no doubt that a redesign of the security model is a must in such situations. The question is just about “when” this should happen (as Jonathan Sander, the author of the article mentioned above, also states). In fact, if we waited for all these security models to be redesigned, we probably never would see an IAM program succeed. Some of these redesign projects take years – and some (think of mainframe environments) probably never will take place. Redesigning the security model of an AD or SAP environment is a quite complex project in itself, despite all the tools supporting it.
Thus, organizations typically will have to decide about the order of projects. Should they push their IAM initiative or do the groundwork first? There is no single correct answer to that question. Frequently, IAM projects are so much under pressure that they have to run first.
However, this must not end in the nightmare of a failing project. The main success factor for dealing with these situations is having a well thought-out interface between the target systems and the IAM infrastructure for exposing entitlements from the target systems to IAM. At the IAM level, there must be a concept of roles (or at least a well thought-out concept for grouping entitlements). And there must be a clear definition of what is exposed from target systems to the IAM system. That is quite easy for well-structured target systems, where, for instance, only global groups from AD or business roles from SAP might become exposed, becoming the smallest unit of entitlements within IAM. These might appear as “system roles” or “system-level roles” (or whatever term you choose) in IAM.
Without that ideal security model in the target systems, there might not be that single level of entitlements that will become exposed to the IAM environment (and I’m talking about requests, not about the detailed analysis as part of Entitlement & Access Governance which might include lower levels of entitlements in the target systems). There are two ways to solve that issue:
- Just define these entitlements, i.e. global groups, SAP business roles, etc. first as an additional element in the target system, map them to IAM, and then start the redesign of the underlying infrastructure later on.
- Or accept the current structure and invest more in mappings of system roles (or whatever term you use) to the higher levels of entitlements such as IT-functional roles and business roles (not to mix up with SAP business roles) in your IAM environment.
Both approaches work and, from my experience, if you understand the challenge and put your focus on the interface, you will quickly be able to identify the best way to execute your IAM program while still redesigning the security model of target systems later on. In both cases, you will need a good understanding of the IAM-level security model (roles, etc.), and you need to enforce this model rigidly – no exceptions here.
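A minimal sketch of what such an interface could look like, with a defined level of entitlements exposed by each target system (“system roles”) mapped to IAM-level business roles. All system names and role names are invented for illustration:

```python
# Hypothetical sketch of the IAM interface described above: each target
# system exposes only a defined level of entitlements ("system roles"),
# which IAM maps to business roles. All names are illustrative.
SYSTEM_ROLES = {
    "AD":  {"GG_Finance_RW", "GG_HR_RO"},   # e.g. global groups exposed from AD
    "SAP": {"Z_ACCOUNTING_CLERK"},          # e.g. SAP business roles exposed
}

BUSINESS_ROLE_MAP = {
    # IAM-level business role -> (system, system role) pairs it bundles
    "Accountant": {("AD", "GG_Finance_RW"), ("SAP", "Z_ACCOUNTING_CLERK")},
}

def provision(business_role):
    """Resolve a business role to the system roles to assign per target."""
    assignments = BUSINESS_ROLE_MAP[business_role]
    for system, role in assignments:
        if role not in SYSTEM_ROLES[system]:   # enforce the model rigidly
            raise ValueError("%s is not exposed by %s" % (role, system))
    return sorted(assignments)

assert provision("Accountant") == [("AD", "GG_Finance_RW"),
                                   ("SAP", "Z_ACCOUNTING_CLERK")]
```

The key design choice is that IAM only ever sees the exposed system roles; when the target system’s underlying security model is redesigned later, only the contents of those roles change, not the interface to IAM.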
Informatica, a leader in data management solutions, introduced a new solution to the market today. The product named Secure@Source also marks a move of Informatica into the Data (and, in consequence, Database) Security market. Informatica already has solutions for data masking in place, which is one of the elements in data security. However, data masking is only a small step and it requires awareness of the data that needs protection.
In contrast to traditional approaches to data security – Informatica talks about “data-centric security” – Informatica does not focus on technical approaches alone e.g. for encrypting databases or analyzing queries. As the name of their approach implies, they are focusing on protection of the data itself.
The new solution builds on two pillars. One is Data Security Intelligence, which is about discovery, classification, proliferation analysis, and risk assessment for data held in a variety of data sources (and not only in a particular database). The other is Data Security Controls, which as of now includes persistent and dynamic masking of data plus validation and auditing capabilities.
The target is reducing the risk of leakage, attacks, and other data-related incidents for structured data held in databases and big data stores. The approach is understanding where the data resides and applying adequate controls, particularly masking of sensitive data. This is all based on policies and includes e.g. alerting capabilities. It also can integrate data security events from other sources and work together with external classification engines.
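As a rough illustration of such policy-based masking of sensitive data (a generic sketch, not Informatica’s actual implementation), the following masks classified columns on read unless the requester is authorized:

```python
# Simplified sketch of dynamic data masking: sensitive columns, identified
# by a classification policy, are masked on read. The classification and
# the masking rule are illustrative only.
import re

SENSITIVE = {"credit_card", "ssn"}          # result of data classification

def mask(value):
    """Keep only the last four characters visible."""
    return re.sub(r".(?=.{4})", "*", value)

def read_row(row, authorized=False):
    if authorized:
        return dict(row)
    return {col: mask(val) if col in SENSITIVE else val
            for col, val in row.items()}

row = {"name": "Alice", "credit_card": "4111111111111111"}
assert read_row(row)["credit_card"] == "************1111"
assert read_row(row, authorized=True)["credit_card"] == "4111111111111111"
```

The value of the data-centric approach is precisely this coupling: discovery and classification decide *what* is sensitive, and the masking controls then apply *wherever* that data resides.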
These interfaces will also allow third parties to attach tokenization and encryption capabilities, along with other features. It will also support advanced correlation, for instance by integrating with an IAM solution, and thus adding user context, or by integrating with DLP solutions to secure the endpoint.
Informatica’s entry into the Information Security market is, in our view, a logical consequence of where the company is already positioned. While the product provides deep insight into where sensitive data resides – in other words, its source – and a number of integrations, we’d like to see a growing number of out-of-the-box integrations, for instance with security analytics, IAM, or encryption solutions. There are integration points for partners, but relying too much on partners in all these areas might make the solution quite complex for customers. While partner integration makes sense for IAM, security analytics such as SIEM or RTSI (Real-time Security Intelligence) and other areas such as encryption might better become integral parts of future releases of Informatica Secure@Source.
Anyway, Informatica is taking the right path for Information Security by focusing on security at the data level instead of focusing on devices, networks, etc. – the best protection is always at the source.
Like Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."
For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.
With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.
If one considers the Facebook business model, it can also have an immediate negative impact. Facebook’s main source of income is targeted advertising based on the information the company has collected on its users. With the additional information that will be available via the Federal Government’s Facebook profile, interest groups can, in the future, selectively advertise on Facebook to pursue their goals.
Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated – these are not customers, not leads, and NOT voters, but at best people with a more or less vague interest. On the other hand, there is the information that a company, a government, a party, or anyone else with a Facebook profile discloses to Facebook: Who is interested in my products, my political opinions (and which ones), or my other statements on Facebook?
The Facebook business model is exactly that: monetizing this information – today more than ever with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best way of informing the competition about a company’s (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether the added value is worth the implicit price paid in the form of data that is interesting to competitors.
Facebook may be "in" - but it is in no way worth it for every company, every government, every party or other organization.
End users have to look closely at the new privacy settings and restrict them as much as possible if they intend to stay on Facebook. In the meantime, a lot of communication has moved to other services such as WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is added value in itself.
The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means - the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more - it is plainly the wrong step at the wrong time.
According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile – one often pays more while the actual benefit is much less. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.