
KuppingerCole Blog

Connected Vehicle: Security First

Jul 27, 2015 by Martin Kuppinger

The recently discovered remote hack vulnerability in Fiat Chrysler Jeep cars, based on their Uconnect functionality, puts a spotlight on the miserable state of connected vehicle security these days. Another recently published article in a German newspaper not only identified a gap in functionality but also illustrated how German automotive vendors and suppliers in particular implement (or plan to implement) security in their connected vehicles.

While the U.S. has introduced the Spy Car Act (Security and Privacy in Your Car Act), which defines industry-wide benchmarks and standards for security and privacy in connected vehicles and forces the industry to collaborate, similar legislation is still lacking in the EU.

The automotive industry is currently in a rush to roll out new smart and digital features (or whatever it perceives as being smart), emulating many other industries that face the need to join the Digital Transformation. Unfortunately, security is an afterthought, as recent incidents as well as current trends within the industry indicate.

Ironically, the lack of well thought-out security and privacy features is already becoming an inhibitor for the industry. While the cost of sending out USB sticks with a patch is still comparatively low (and the approach is impressively insecure), the cost of recalling 1.4 million cars to the garages is significant, not to speak of the indirect cost of reputation loss or, if something really goes wrong, the liability issues.

But that is only one part of the problem. The lack of Security by Design and Privacy by Design is also becoming an inhibitor for the Digital Transformation. An essential element of the Digital Transformation is the change of business models, including rapid innovation and (ever-changing) partnerships.

A simple example that illustrates the limitations caused by the lack of security and privacy by design is the black box EDR (Event Data Recorder), which is becoming increasingly common and increasingly mandated by legislation. Both automotive vendors and insurance companies are interested in “owning” the data held in such devices. While I will come to the complexity of dealing with the data access demands and requirements of various parties later in this post, it is obviously impossible to easily resolve this conflict with technology that, e.g., relies on only a single key for accessing that data. Modern concepts for security and privacy would minimize such conflicts by allowing various parties to have defined and controlled access to the information they are entitled to access.
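To make the idea of defined and controlled access more concrete, here is a minimal, purely hypothetical sketch of an attribute-based access policy for EDR data. The party names, data scopes and rules are illustrative assumptions, not an actual automotive standard or product.

```python
# Hypothetical sketch: attribute-based access control for EDR data.
# Parties, scopes and rules are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    party: str               # e.g. "insurer", "manufacturer", "police", "driver"
    data_scope: str          # e.g. "speed_history", "full_edr_dump"
    incident_reported: bool  # context: has a crash/incident been recorded?

# Each rule grants one party a defined scope under a defined condition,
# instead of a single shared key unlocking everything for everyone.
POLICY = [
    ("driver",       "full_edr_dump", lambda ctx: True),
    ("manufacturer", "diagnostics",   lambda ctx: True),
    ("insurer",      "speed_history", lambda ctx: ctx.incident_reported),
    ("police",       "full_edr_dump", lambda ctx: ctx.incident_reported),
]

def is_allowed(req: AccessRequest) -> bool:
    return any(
        party == req.party and scope == req.data_scope and condition(req)
        for party, scope, condition in POLICY
    )

if __name__ == "__main__":
    print(is_allowed(AccessRequest("insurer", "speed_history", incident_reported=False)))  # False
    print(is_allowed(AccessRequest("police", "full_edr_dump", incident_reported=True)))    # True
```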

Cynically said: automotive vendors are rushing to roll out new features to succeed in the Digital Transformation, but by failing to do it right, with Security by Design and Privacy by Design, they are struggling with exactly the same transformation. Neither security nor privacy can be an afterthought for succeeding in the Digital Transformation.

From my perspective, there are five essentials the automotive industry must follow to succeed with both the connected vehicle and, in consequence, the Digital Transformation:

  1. Security by Design and Privacy by Design must become essential principles that any developer follows. A well-designed system can be opened up, but a weakly designed system can never be locked down afterwards. Simply said: security and privacy by design are not inhibitors, but enablers, because they allow flexible configuration of the vehicles for ever-changing business models and regulations.
  2. Modern hardened implementations of technology are required. Relying on a single key for accessing information of a component in the vehicle or other security concepts dating back decades aren’t adequate anymore for today’s requirements.
  3. Identities and Access Control must become key elements in these new security concepts. Just look at the many things, organizations, and humans around the connected vehicle. There are entertainment systems, engine control, EDR systems, gear control, and many other components. There is the manufacturer, the leasing company, the police in various countries, the insurance company, the garage, the dealer, and many other organizations. There is the driver, the co-driver, the passengers, the owner, etc. Various parties might access some information in certain systems, but might not be entitled to do so in others. Some might only ever see parts of the EDR data, while others might be entitled to see all of that information after specific incidents. Without a concept of identities and their relationships, and without managing their access – i.e. without security and privacy by design – there are too many inhibitors to supporting change in business models and regulations. From my perspective, it is worth spending some time and thought on the concept of Life Management Platforms in this context. These concepts and standards such as UMA (User Managed Access) are the foundation for better, future-proof security in connected vehicles.
  4. Standards are another obvious element. It is ridiculous assuming that such complex ecosystems with manufacturers, suppliers, governmental agencies, customers, consumers, etc. can be supported with proprietary concepts.
  5. Finally, it is about solving the patch and update issues. Providing updates by USB stick is as inept as recalling the cars to the garages every “patch Tuesday”. There is a need for a secure approach for regular as well as emergency patches and updates, which must become part of the concept; a minimal sketch of signed update verification follows this list. Again, there is a need for standards, given the fact that every car today consists of (connected) components from a number of suppliers.
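To make the idea of a secure update channel concrete, the following is a minimal sketch of verifying a vendor-signed update package before installation, using Ed25519 signatures from the Python cryptography library. The flow and the placeholder payload are assumptions for illustration only; real automotive OTA update schemes additionally involve key management, rollback protection and secure boot.

```python
# Minimal sketch: verify a vendor-signed update package before applying it.
# Flow and payload are illustrative assumptions, not an automotive standard.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (one-off): generate a signing key pair and sign the update image.
vendor_key = Ed25519PrivateKey.generate()
update_image = b"firmware-blob-v1.2.3"          # placeholder payload
signature = vendor_key.sign(update_image)

# Vehicle side: only the vendor's public key is provisioned in the car.
provisioned_public_key = vendor_key.public_key()

def verify_and_apply(image: bytes, sig: bytes) -> bool:
    """Apply the update only if the signature checks out."""
    try:
        provisioned_public_key.verify(sig, image)
    except InvalidSignature:
        return False  # reject tampered or unsigned updates
    # ... hand the verified image to the actual update mechanism here ...
    return True

print(verify_and_apply(update_image, signature))                 # True
print(verify_and_apply(update_image + b"tampered", signature))   # False
```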

Notably, all these points apply to virtually all other areas of IoT (Internet of Things) and Smart Manufacturing. Security must not be an afterthought anymore. The risk for all of us is far too high – and, as mentioned above, done right, security and privacy by design enable rapidly switching to new business models and complying with new regulations, while old school “security” approaches don’t.



Amazon enters another market with their API Gateway

Jul 15, 2015 by Alexei Balaganski

What a surprising coincidence: on the same day we were preparing our Leadership Compass on API Security Management for publication, Amazon announced their own managed service for creating, publishing and securing APIs – Amazon API Gateway. Well, it’s already too late to make changes to our Leadership Compass, but the new service is still worth having a look at, hence this blog post.

Typically for Amazon, the solution is fully managed and based on AWS cloud infrastructure, meaning that there is no need to set up any physical or virtual machines or configure resources. The solution is tightly integrated with many other AWS services and is built directly into the central AWS console, so you can start creating and publishing APIs in minutes. If you already have existing backend services running on AWS infrastructure, such as EC2 or RDS, you can expose them to the world as APIs with literally a few mouse clicks. Even more compelling is the possibility of using the AWS Lambda service to create completely managed “serverless” APIs without any need to worry about resource allocation or scaling.
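As a rough illustration of what such a Lambda-backed “serverless” API looks like, here is a minimal Python handler of the kind API Gateway can invoke. The event fields and response shape follow the common proxy-style integration pattern and are an assumption for illustration, not a description of the exact feature set Amazon shipped at launch.

```python
# Minimal sketch of a Lambda function that could back an API Gateway endpoint.
# The event/response shape follows the common proxy-integration pattern and is
# an illustrative assumption, not a description of the 2015 launch feature set.

import json

def lambda_handler(event, context):
    # API Gateway passes request details (path, query string, body) in `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test without any AWS resources:
if __name__ == "__main__":
    print(lambda_handler({"queryStringParameters": {"name": "API Gateway"}}, None))
```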

In fact, this seems to be the primary focus of the solution. Although it is possible to manage external API endpoints, this is only mentioned in passing in the announcement: the main reason for releasing the service seems to be providing a native API management solution for AWS customers, who until now have had to manage their APIs themselves or rely on third-party solutions.

Again typically for Amazon, the solution they delivered is a lean, no-frills service without all the fancy features of an enterprise API gateway, but, since it is based on the existing AWS infrastructure and integrates heavily with other well-known Amazon services, it comes with guaranteed scalability and performance, an extremely low learning curve and, of course, low prices.

For API traffic management, Amazon CloudFront is used, with a special API caching mechanism added for increased performance. This ensures high scalability and availability for the APIs, as well as a reasonable level of network security, such as SSL encryption and DDoS protection. API transformation capabilities, however, are pretty basic: only XML to JSON conversion is supported.
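For readers unfamiliar with what such a payload transformation involves, here is a minimal, self-contained sketch of converting a simple XML document into JSON using only the Python standard library. It merely illustrates the kind of conversion referred to above, not how API Gateway implements it internally.

```python
# Minimal illustration of an XML-to-JSON payload transformation, using only
# the Python standard library. This shows the kind of conversion meant above,
# not API Gateway's internal implementation.

import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively convert an XML element into a nested dict."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

xml_payload = "<order><id>42</id><item>book</item></order>"
root = ET.fromstring(xml_payload)
print(json.dumps({root.tag: xml_to_dict(root)}))
# {"order": {"id": "42", "item": "book"}}
```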

To authorize access to APIs, the service integrates with AWS Identity and Access Management as well as with Amazon Cognito, providing the same IAM capabilities that are available to other AWS services. Again, the gateway provides basic support for OAuth and OpenID Connect, but lacks the broad support for authentication methods typical of enterprise-grade solutions.

Analytics capabilities are provided by Amazon CloudWatch service, meaning that all API statistics are available in the same console as all other AWS services.

There seems to be no developer portal functionality provided with the service at the moment. Although it is possible to create API keys for third-party developers, there is no self-service for that. In this regard, the service does not seem to be very suitable for public APIs.
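In practice, “no self-service” means the API provider has to issue keys on behalf of developers, for example via the AWS SDK. The sketch below uses boto3; the key name, region and the workflow for handing the key to the developer are illustrative assumptions.

```python
# Sketch: creating an API key on behalf of a third-party developer via boto3,
# since there is no developer self-service portal. Key name and region are
# illustrative assumptions.

import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

response = apigateway.create_api_key(
    name="example-partner-key",  # hypothetical developer/key name
    description="Key issued manually to a partner developer",
    enabled=True,
)

# The provider then has to pass the key value to the developer out of band.
print(response["id"], response.get("value"))
```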

To summarize, Amazon API Gateway is definitely not a competitor to existing enterprise API gateways such as the products from CA Technologies, Axway or Forum Systems. However, as a native replacement for third-party managed services (3scale, for example), it has a lot of potential and, with Amazon’s aggressive pricing policies, it may very well threaten their market positions.

Currently, Amazon API Gateway is available in selected AWS regions, so it’s possible to start testing it today. According to the first reports from developers, there are still some kinks to iron out before the service becomes truly usable, but I’m pretty sure that it will quickly become popular among existing AWS customers and may even be a deciding factor for companies to finally move their backend services to the cloud (Amazon cloud, of course).



Safety vs. security – or both?

Jul 07, 2015 by Martin Kuppinger

When it comes to OT (Operational Technology) security in all its facets, security people from the OT world and IT security professionals can quickly end up in strong disagreement. Depending on the language they are speaking, it might even appear that they are divided by a common language. While the discussion in English will quickly end up with a perceived dichotomy between security and safety, in German, for instance, it would be “Sicherheit vs. Sicherheit”.

The reason for that is that OT thinking traditionally – and for good reason – is about the safety of humans, machines, etc. Other major requirements include availability and reliability. If the assembly line stops, this can quickly become expensive. If reliability issues cause faulty products, that can also cost vast amounts of money.

On the other hand, the common IT security thinking is around security – protecting systems and information and enforcing the CIA triad of confidentiality, integrity, and availability. Notably, even the perception of the shared requirement of availability is slightly different, with IT primarily interested in not losing data, while OT wants systems to be always up. Yes, IT also frequently has requirements such as 99.9% availability. However, sometimes this is an unfounded requirement. While it really costs money if your assembly line is out of service, the impact of HR not working for a business day is pretty low.

While IT is keen on patching systems to fix known security issues, OT tends to be keen on enforcing reliability and, in consequence, availability and security. From that perspective, updates, patches, or even new hardware and software versions are a risk. That is the reason why OT frequently relies on rather old hardware and software. Furthermore, depending on the type of production, maintenance windows might be rare. In areas with continuous production, there is no way of quickly patching and “rebooting”.

Unfortunately, with smart manufacturing and the increased integration of OT environments with IT, the risk exposure is changing. Furthermore, OT environments have been attack targets for quite some time. Information about such systems is widely available, for instance via the Shodan search engine. The problem: the longer software remains unpatched, the bigger the risk. Simply said: the former concept of focusing purely on safety (and reliability and availability) no longer works in connected OT. On the other hand, the IT thinking does not work either. Many of us have experienced problems and downtimes due to erroneous patches.

There is no simple answer, aside from the fact that OT and IT must work hand in hand. Cynically said, it is not about “death by patch vs. death by attacker”, but about avoiding death at all. From my perspective, the CISO must be responsible for both OT and IT – split responsibilities, ignorance, and stubbornness do not help us in mitigating risks. Layered security, and virtualizing existing OT and exposing it as standardized devices with standardized interfaces, appears to be a valid approach, potentially leading the way towards SDOT (Software-defined OT). Aside from that, providers of OT must rethink their approaches, enabling updates even within small maintenance windows or at runtime, while maintaining stable and reliable environments. Not easy to do, but a prerequisite when moving towards smart manufacturing or Industry 4.0.

One thing to me is clear: Both parties can learn from each other – to the benefit of all.



Security and Operational Technology / Smart Manufacturing

Jul 07, 2015 by Mike Small

Industry 4.0 is the German government’s strategy to promote the computerization of the manufacturing industry. This strategy foresees that industrial production in the future will be based on highly flexible mass production processes that allow rich customization of products. This future will also include the extensive integration of customers and business partners to provide business and value-added processes. It will link production with high-quality services to create so-called “hybrid products”.

At the same time, in the US, the Smart Manufacturing Leadership Coalition is working on their vision for “Smart Manufacturing”. In 2013, in the UK, the Institute for Advanced Manufacturing, which is part of the University of Nottingham, received a grant of £4.6M for a study on Technologies for Future Smart Factories.

This vision depends upon the manufacturing machinery and tools containing embedded computer systems that will communicate with each other inside the enterprise, and with partners and suppliers across the internet. This computerization and communication will enable optimization within the organizations, as well as improving the complete value adding chain in near real time through the use of intelligent monitoring and autonomous decision making processes. This is expected to lead to the development of completely new business models as well as exploiting the considerable potential for optimization in the fields of production and logistics.

However, there are risks, and organizations adopting this technology need to be aware of and manage them. Compromising the manufacturing processes could have far-reaching consequences. These consequences include the creation of flawed or dangerous end products as well as disruption of the supply chain. Even when manufacturing processes based on computerized machinery are physically isolated, they can still be compromised through maladministration, inappropriate changes and infected media. Connecting these machines to the internet will only increase the potential threats and the risks involved.

Here are some key points for securely exploiting this vision:

  • Take a Holistic Approach: the need for security is no longer confined to the IT systems – the business systems of record – but needs to extend to cover everywhere that data is created, transmitted or exploited. Take a holistic approach and avoid creating another silo.
  • Take a Risk-based Approach: The security technology and controls that need to be built should be determined by balancing risk against reward, based on the business requirements, the assets at risk, the needs for compliance, and the organizational risk appetite. This approach should seek to remove identifiable vulnerabilities and put in place appropriate controls to manage the risks.
  • Trusted Devices: This is the most immediate concern since many devices that are being deployed today are likely to be in use, and hence at risk, for long periods into the future. These devices must be designed and manufactured to be trustworthy. They need an appropriate level of physical protection as well as logical protection against illicit access and administration. It is highly likely that these devices will become a target for cyber criminals who will seek to exploit any weaknesses through malware. Make sure that they contain protection that can be updated to accommodate evolving threats.
  • Trusted Data: The organization needs to be able to trust the data coming from these devices. It must be possible to confirm the device from which the data originated and that the data has not been tampered with or intercepted; a minimal sketch of such an origin check follows this list. Low-power secure technology and standards already exist that were developed for mobile communications and banking, and these should be appropriately adopted or adapted to secure the devices.
  • Identity and Access Management: being able to trust the devices and the data they provide means being able to trust their identities and control access. There are a number of technical challenges in this area; some solutions have been developed for specific kinds of device, but there is no general panacea. Hence it is likely that more device-specific solutions will emerge, and this will add to the general complexity of the management challenges.
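As a minimal illustration of such data-origin verification, the sketch below has a device authenticate each reading with a key it shares with the backend, which checks the authentication tag before trusting the data. The device ID, key handling and message format are simplifying assumptions; production schemes would typically rely on hardware-protected keys or asymmetric signatures.

```python
# Minimal sketch: verifying that a sensor reading really came from a known
# device and was not tampered with in transit. Device ID, shared-key handling
# and message format are illustrative assumptions only.

import hashlib
import hmac
import json

# In practice this key would live in a hardware-protected element on the device.
DEVICE_KEYS = {"press-042": b"per-device-secret-key"}

def sign_reading(device_id: str, reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "reading": reading, "mac": tag}

def verify_reading(message: dict) -> bool:
    key = DEVICE_KEYS.get(message["device_id"])
    if key is None:
        return False  # unknown device
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_reading("press-042", {"temperature_c": 71.5, "cycle": 10412})
print(verify_reading(msg))                   # True
msg["reading"]["temperature_c"] = 20.0       # tamper with the data in transit
print(verify_reading(msg))                   # False
```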

More information on this subject can be found in Advisory Note: Security and the Internet of Everything and Everyone - 71152 - KuppingerCole



OT, ICS, SCADA – What’s the difference?

Jul 07, 2015 by Graham Williamson

Operational Technology (OT) refers to computing systems that are used to manage industrial operations as opposed to administrative operations. Operational systems include production line management, mining operations control, oil & gas monitoring etc.


Industrial control systems (ICS) form a major segment within the operational technology sector. They comprise systems that are used to monitor and control industrial processes – mine-site conveyor belts, oil refinery cracking towers, power consumption on electricity grids or alarms from building information systems. ICSs are typically mission-critical applications with a high-availability requirement.

Most ICSs fall into one of two categories: continuous process control systems, typically managed via programmable logic controllers (PLCs), or discrete process control systems (DPC), which might use a PLC or some other batch process control device.

Industrial control systems are often managed via a Supervisory Control and Data Acquisition (SCADA) system that provides a graphical user interface for operators to easily observe the status of a system, receive alarms indicating out-of-band operation, or enter adjustments to manage the process under control.

SCADA systems display the process under control and provide access to control functions. A typical configuration is shown in Figure 1.

Figure 1 - Typical SCADA Configuration

The main components are:

  • SCADA display unit that shows the process under management in a graphic display with status messages and alarms shown at the appropriate place on the screen. Operators can typically use the SCADA system to enter controls to modify the operation in real-time. For instance, there might be a control to turn a valve off, or turn a thermostat down.
  • Control Unit that attaches the remote terminal units to the SCADA system. The Control unit must pass data to and from the SCADA system in real-time with low latency.
  • Remote terminal units (RTUs) are positioned close to the process being managed or monitored and are used to connect one or more devices (monitors or actuators) to the control unit; a PLC can fulfil this role. RTUs may be in the next room or hundreds of kilometres away. A minimal sketch simulating this polling flow follows this list.
  • Communication links can be Ethernet for a production system, a WAN link over the Internet or private radio for a distributed operation, or a telemetry link for equipment in a remote area without communications facilities.
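The following small, self-contained Python sketch simulates the data flow described above: an RTU collects readings from its attached devices, and the control unit polls it and flags out-of-band values for the SCADA display. All names, thresholds and the polling interval are hypothetical; real deployments would use industrial protocols such as Modbus or DNP3 rather than in-process calls.

```python
# Simulated RTU / control-unit data flow. Names, thresholds and the polling
# interval are hypothetical; real systems use protocols such as Modbus or DNP3.

import random
import time

class RemoteTerminalUnit:
    """Collects readings from the devices attached to it."""
    def __init__(self, rtu_id, sensors):
        self.rtu_id = rtu_id
        self.sensors = sensors  # sensor name -> function returning a reading

    def poll(self):
        return {name: read() for name, read in self.sensors.items()}

class ControlUnit:
    """Polls RTUs and flags out-of-band values for the SCADA display."""
    def __init__(self, limits):
        self.limits = limits  # sensor name -> (low, high)

    def check(self, rtu):
        for name, value in rtu.poll().items():
            low, high = self.limits[name]
            status = "ok" if low <= value <= high else "ALARM"
            print(f"{rtu.rtu_id}/{name}: {value:.1f} [{status}]")

if __name__ == "__main__":
    rtu = RemoteTerminalUnit(
        "rtu-07",
        {"tank_pressure_bar": lambda: random.uniform(0.5, 6.0)},
    )
    control = ControlUnit({"tank_pressure_bar": (1.0, 5.0)})
    for _ in range(3):      # three polling cycles
        control.check(rtu)
        time.sleep(0.1)     # stand-in for the real polling interval
```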

There are some seminal changes happening in the OT world at the moment. Organisations want to leverage their OT assets for business purposes; they want to be agile and have the ability to make modifications to their OT configurations. They want to take advantage of new, cheaper IP sensors and actuators. They want to leverage their corporate identity provider service to authenticate operational personnel. It’s an exciting time for operational technology systems.



The need for an "integrated identity" within hybrid cloud infrastructures

Jul 02, 2015 by Matthias Reinwarth

Yes, you might have heard it in many places: "Cloud is the new normal". And this is surely true for many modern organisations, especially start-ups or companies doing all or parts of their native business within the cloud. But for many other organisations this new normal is only one half of normal.

A lot of enterprises currently going through the process of digital transformation are maintaining their own infrastructure on premises and are looking into extending their business into the cloud. This might be done for various reasons, for example the easier creation of infrastructure that allows rapid scalability, or the ability to replace costly infrastructure that is not mission-critical enough to warrant implementation within the traditional organisational perimeter.

For many organisations, moving completely to the cloud is simply not an option, for various good reasons, including the protection of intellectual property within the traditional infrastructure or the need to maintain legacy infrastructure that is itself business-critical. For this type of enterprise – typically large, with a considerable IT history, and often in highly regulated sectors – the infrastructure paradigm has to be the hybrid cloud, at least for the near to medium term.

Cloud service providers need to offer standardized technological approaches for this type of customer. A seamless, strategic approach to extending the existing on-premises infrastructure into the cloud is an important prerequisite. This is true for the underlying network connectivity, and it is especially true for the administration, operation and security aspects of modern IT infrastructures.

For every company that already has a well-defined IAM/IAG infrastructure and the relevant maintenance and governance processes in place, it is essential that Identity Management for and within the cloud is well integrated into the existing processes. Many successful corporate IAM systems build upon the fact that enterprise-internal data silos have been broken up and integrated into an overall Identity and Access Management system. For the maintenance of a newly designed cloud infrastructure it obviously does not make any sense to create a new silo of identity information for the cloud. Maintaining technical and business accounts for cloud usage is, in the end, a traditional identity management task. Designing the appropriate entitlement structure and assigning the appropriate access rights to the right people within the cloud, while adhering to best practices like the least-privilege principle, is, in the end, a traditional Access Management task. Defining, implementing and enforcing appropriate processes to control and govern the access rights assigned to identities across a hybrid infrastructure are, in the end, traditional access governance and access intelligence tasks.

Providers of traditional, on-premises IAM infrastructures and cloud service providers alike have to support this class of customer organisations in fulfilling their hybrid security and hence their IAM/IAG needs. CSPs like Amazon Web Services embrace this hybrid market by providing overall strategies for hybrid cloud infrastructures, including a suitable identity, access and security approach. The implementation of a concept for an "integrated identity" across all platforms, be they cloud or on-premises, is therefore a fundamental requirement. Leveraging mechanisms like inbound and outbound federation, deploying open standards like SAML 2.0, providing APIs for integrative access to the AWS IAM/IAG functionality, and integrating existing policies into the AWS IAM policies implemented as JSON files are important steps towards this "integrated identity". On the access intelligence and access governance side, the AWS CloudTrail component can provide detailed logs, down to the level of API calls per user, for the existing cloud infrastructure. Such extensive logs can then be evaluated by means of an existing Access Intelligence or Real-Time Security Intelligence (RTSI) solution, or by deploying AWS analytics mechanisms like Lambda.
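As a small illustration of the CloudTrail point, here is a hedged sketch that pulls recent events for a given user via boto3 and counts the API calls per event name. The username, region and time window are placeholder assumptions; a real access-intelligence pipeline would feed these events into a dedicated analytics or RTSI tool rather than printing them.

```python
# Sketch: counting recent API calls per event name for one IAM user via
# CloudTrail. Username, region and time window are placeholder assumptions.

from collections import Counter
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

end = datetime.utcnow()
start = end - timedelta(days=1)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "alice"}],
    StartTime=start,
    EndTime=end,
    MaxResults=50,
)["Events"]

# Aggregate which AWS API calls the user made in the last 24 hours.
calls_per_api = Counter(event["EventName"] for event in events)
for api_call, count in calls_per_api.most_common():
    print(f"{api_call}: {count}")
```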

It is obvious that these are "only" building blocks for solutions, not a fully designed solution architecture. But they bring us one step closer to the design and implementation of an appropriate solution for each individual enterprise. Covering all relevant aspects of security and IT GRC inside and outside the cloud will be one key challenge for the deployment of cloud infrastructures for this type of organisation.

Hybrid architectures might not be the final target architecture for some organisations, but for the coming years they will form an important deployment scenario for many large organisations. Vendors and implementation partners alike have to make sure that easily deployable, standardised mechanisms are in place to extend an existing on-premises IAM seamlessly into the cloud, providing the required levels of security and governance. And since we are talking about standards and integration: this will have to work seamlessly for other, probably upcoming types of architectures as well, e.g. for those where the step towards cloud-based IAM using Azure Active Directory has already been taken.



From Hybrid Cloud to Standard IT?

Jun 18, 2015 by Mike Small

I have recently heard from a number of cloud service providers (CSPs) about their support for a “hybrid” cloud. What is the hybrid cloud and why is it important? What enterprise customers are looking for is a “Standard IT” that would allow them to deploy their applications flexibly wherever it is best. The Hybrid Cloud concept goes some way towards this.

There is still some confusion about the terminology that surrounds cloud computing, so let us go back to basics. The generally accepted definition of cloud terminology is in NIST SP-800-145. According to this, there are three service models – IaaS, PaaS and SaaS – and four deployment models: Public Cloud, Private Cloud, Community Cloud and Hybrid Cloud. So “Hybrid” relates to the way cloud services are deployed. The NIST definition of the Hybrid Cloud is:

“The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).”

However, sometimes Hybrid is used to describe a cloud strategy – meaning that the organization will use cloud services for some kinds of application but not for others. This is a perfectly reasonable strategy, but not quite in line with the above definition, so I refer to it as a Hybrid Cloud Strategy.

In fact, this leads us to the reality that, for most enterprises, the cloud is just another way of obtaining some of their IT services. Cloud services may be the ideal solution for development because of the speed with which they can be obtained. They may be good for customer interaction services because of their scalability. They may be the best way to perform data analytics needing the occasional burst of very high performance computing. Hence, to the enterprise, the cloud becomes another added complexity in their already complex IT environment.

So the CSPs have recognised that, in order to tempt enterprises to use their cloud services, they need to acknowledge this complexity challenge and provide help to solve it. The “Hybrid” cloud that will be attractive to enterprises therefore needs to:

* Enable the customer to easily migrate some parts of their workload and data to a cloud service. This is because there may be some data that is required to remain on premise for compliance or audit reasons.

* Orchestrate the end to end processing which may involve on premise as well as services from other cloud providers.

* Allow the customer to assure the end to end security and compliance for their workload.

When you look at these requirements it becomes clear that standards are going to be a key component to allow this degree of flexibility and interoperability. The standards needed go beyond the support for Hypervisors, Operating Systems, Databases and middleware to include the deployment, management and security of workloads in a common way across on premise and cloud deployments as well as between cloud services from different vendors.

There is no clear winner among the standards yet – although OpenStack has wide support, including from IBM, HP and Rackspace – and one of the challenges is that vendors offer versions of it with their own proprietary additions. Other important vendors, including AWS, Microsoft and VMware, have their own proprietary offerings that they would like customers to adopt. So the game is not over yet, but the industry should recognize that the real requirement is for a “Standard IT” that can easily be deployed in whatever way is most appropriate at any given time.



Why Cybersecurity and Politics Just Don’t Mix Well

Jun 12, 2015 by Alexei Balaganski

With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.

Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of macabre and comedy is so potent that it will make every security expert cringe. Let’s just have a look at a few of the most recent examples.

After the notorious hack of Sony Pictures Entertainment last November, which was supposedly carried out by a group of hackers demanding that Sony not release a comedy movie about a plot to assassinate Kim Jong-Un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was strongly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.

Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known case of a cyber-attack more or less reliably attributed to a state agency until now is Stuxnet, which after several years of research has been found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an insider job with no relation to North Korea at all. Fortunately, cool heads in the US military have prevailed, but the thought that next time such an attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…

Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity ‘solutions’ the US and UK governments have come up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding government capabilities of spying on their own citizens.

Unfortunately, just like any other technology plan devised by politicians, it will not just fail to work, but will have disastrous consequences for society as a whole: ruining people’s privacy, making every company’s IT infrastructure more vulnerable to hacking attacks (which will exploit the same government-mandated backdoors), and blocking a significant part of academic research, not to mention completely destroying businesses like security software vendors and cloud service providers. Sadly, even in Germany, the country where privacy is considered an almost sacred right, the government is engaged in similar activities as well.

Speaking about Germany, the latest, somewhat more lighthearted example of politicians’ inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany’s Federal Office for Information Security has come up with a great idea. Citing concerns about mysterious Russian hackers still lurking in the network, it was announced that the existing infrastructure, including over 20,000 computers, has to be completely replaced. Leaving aside the obvious question – are the same people that designed the old network really able to come up with a more secure one this time? – one still cannot but wonder whether the millions needed for such an upgrade could be better spent somewhere else. In fact, my first thought after reading the news was about President Erdogan’s new palace in Turkey. Apparently, he just had to move to a new 1,150-room presidential palace simply because the old one was infested by cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.

Still, any security expert cannot but continue asking more specific questions. Was there an adequate incident and breach response strategy in place? Has there been a training program for user security awareness? Were the most modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget for hiring security experts with adequate qualifications for running such a critical environment?

Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer for most of these questions would be “no”. German government agencies are also known for being quite frugal with regards to IT salaries, so the best experts are inevitably going elsewhere.

Another question that I cannot but think about is: what if the hackers utilized one of the zero-day vulnerability exploits that the German intelligence agency BND is known to have purchased for their own covert operations? That would be a perfect example of “karmic justice”.

Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings from past events can also provide a good introduction into relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!



Who will become the Google, Facebook or Apple of Life Management Platforms?

Jun 09, 2015 by Dave Kearns

A Life Management Platform (LMP) allows individuals to access all relevant information from their daily life and manage its lifecycle, in particular data that is sensitive and typically paper-bound today, like bank account information, insurance information, health information, or the key number of their car.

Three years ago, at EIC 2012, one of the major topics was Life Management Platforms (LMPs), which was described as “a concept which goes well beyond the limited reach of most of today’s Personal Data Stores and Personal Clouds. It will fundamentally affect the way individuals share personal data and thus will greatly influence social networks, CRM (Customer Relationship Management), eGovernment, and many other areas.”

In talking about LMPs, Martin Kuppinger wrote: “Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. A key concept is “informed pull” which allows consuming information from other parties, neither violating the interest of the individual for protecting his information nor the interest of the related party/parties.” (Advisory Note: Life Management Platforms: Control and Privacy for Personal Data - 70608)

It’s taken longer than we thought, but the fundamental principle that a person should have direct control of the information about themselves is finally taking hold. In particular, the work of the User Managed Access (UMA) group through the Kantara Initiative should be noted.

Fueled by the rise of cloud services (especially personal, public cloud services) and the explosive growth in the Internet of Things (IoT) which all leads to the concept we’ve identified as the Internet of Everything and Everyone (IoEE), Life Management Platforms – although not known by that name – are beginning to take their first, hesitant baby steps with everyone from the tech guru to the person in the street. The “platform” tends to be a smart, mobile device with a myriad of applications and services (known as “apps”) but the bottom line is that the data, the information, is, at least nominally, under control of the person using that platform. And the real platform is the cloud-based services, fed and fueled by public, standard Application Programming Interfaces (APIs) which provide the data for that mobile device everyone is using.

Social media, too, has had an effect. When using Facebook login, for example, to access other services, people are learning to look closely at what those services are requesting (“your timeline, list of friends, birthday”) and, especially, what the service won’t do (“the service will not post on your behalf”). There’s still work to be done there, as the conditions are not negotiable yet but must be either accepted or rejected – but more flexible protocols will emerge to cover that. There’s also, of course, the fact that Facebook itself “spies” on your activity. Slowly, grudgingly, that is changing – but we’re not there yet. The next step is for enterprises to begin to provide the necessary tools that will enable the casual user to more completely control their data – and the release of their data to others – while protecting their privacy. Google (via Android), Apple (via iOS) and even Microsoft (through Windows Mobile) are all in a position to become the first mover in this area – but only if they’re ready to fundamentally change their business model or complement their business models with an alternative approach. Indeed, some have taken tentative steps in that direction, while others seem to be headed in the opposite direction. Google and Facebook (and Microsoft, via Bing) do attempt to monetize your data. Apple tends to tell you what you want, then not allow you to change it.

But there are suggestions that users may be willing to pay for more control over their information – either in cash or by licensing its re-use under strict guidelines. So who will step up? We shouldn’t ignore Facebook, of course, but – without a mobile operating system – they are at a disadvantage compared to the other three. And maybe, lurking in the wings, there’s an as yet undiscovered (or overlooked – yes, there are some interesting approaches) vendor ready to jump in and seize the market. After all, that’s what Google did (surprising Yahoo!) and Facebook did (supplanting MySpace), so there is precedent for a well-designed (i.e., using Privacy by Design principles) start-up to sweep this market. Someone will, we’re convinced of that. And just as soon as we’ve identified the key player, we’ll let you know so you can be prepared.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



Life Management Platforms: Players, Technologies, Standards

Jun 09, 2015 by Alexei Balaganski

When KuppingerCole outlined the concept of Life Management Platforms several years ago, the prospect of numerous completely new business models based on user-centric management of personal data may have seemed a bit too far-fetched to some. Although the very idea of customers being in control of their digital lives has been actively promoted for years through the efforts of ProjectVRM, and although even back then the public demand for privacy was already strong, the interest in the topic was still largely academic.

Quite a lot has changed during these years. Explosive growth of mobile devices and cloud services has significantly altered the way businesses communicate with their partners and customers. Edward Snowden’s revelations have made a profound impression on the perceived importance of privacy. User empowerment is finally no longer an academic concept. The European Identity and Cloud Conference 2015 featured a whole track devoted to user managed identity and access, which provided an overview of recent developments as well as notable players in this field.

The Qiy Foundation, one of the veteran players (in 2012, we recognized them as the first real implementation of the LMP concept), presented their newest developments and business partnerships. They were joined by Meeco, a new project centered around social channels and IoT devices, which won this year’s European Identity and Cloud Award.

Industry giants such as Microsoft and IBM have presented their latest research in the field of user-managed identity as well. Both companies are doing extensive research targeting technologies that implement the minimal disclosure principle fundamental to the Life Management Platform concept. Both Microsoft’s U-Prove and IBM’s Identity Mixer projects are aimed at giving users cryptographically certified, yet open and easy-to-use means of disclosing their personal information to online service providers in a controlled and privacy-enhancing manner. Both implement a superset of traditional Public Key Infrastructure functionality, but instead of having a single cryptographic public key, users can have an independent pseudonymized key for each transaction, which makes tracking impossible, yet still allows verification of any subset of personal information the user chooses to share with a service provider.
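To give a flavour of the minimal disclosure idea, here is a drastically simplified sketch based on salted hash commitments: the user reveals only a chosen subset of attributes, and the relying party verifies them against issuer-certified commitments. This is only an illustration of the selective-disclosure principle; U-Prove and Identity Mixer use far more sophisticated zero-knowledge cryptography and, unlike this sketch, also prevent correlation across transactions.

```python
# Drastically simplified illustration of selective disclosure using salted hash
# commitments. U-Prove and Identity Mixer rely on far more sophisticated
# zero-knowledge cryptography; this only shows the "reveal a subset, verify
# against a certified commitment" idea.

import hashlib
import os

def commit(attributes: dict):
    """Issuer side: produce a salted commitment per attribute."""
    salts = {name: os.urandom(16).hex() for name in attributes}
    commitments = {
        name: hashlib.sha256((salts[name] + str(value)).encode()).hexdigest()
        for name, value in attributes.items()
    }
    return commitments, salts  # the commitments would be signed by the issuer

def reveal(attributes, salts, subset):
    """User side: disclose only the chosen subset, with the matching salts."""
    return {name: (attributes[name], salts[name]) for name in subset}

def verify(commitments, disclosed) -> bool:
    """Service provider side: check disclosed values against the commitments."""
    return all(
        hashlib.sha256((salt + str(value)).encode()).hexdigest() == commitments[name]
        for name, (value, salt) in disclosed.items()
    )

attributes = {"name": "Alice", "birth_year": 1985, "country": "DE"}
commitments, salts = commit(attributes)

# Alice proves only her country, keeping name and birth year undisclosed.
disclosed = reveal(attributes, salts, {"country"})
print(verify(commitments, disclosed))  # True
```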

Qiy Foundation, having the advantage of a very early start, already provides their own design and reference implementation of the whole stack of protocols and legal frameworks for an entire LMP ecosystem. Their biggest problem - and in fact the biggest obstacle for the whole future development in this area - is the lack of interoperability with other projects. However, as the LMP track and workshop at the EIC 2015 have shown, all parties working in this area are clearly aware of this challenge and are closely following each other’s developments.

In this regard, the role of the Kantara Initiative cannot be overestimated. Not only has this organization been developing UMA, an important protocol for user-centric access management, privacy and consent, it is also running the Trust Framework Provider program, which ensures that various trust frameworks around the world are aligned with government regulations and with each other. Still, looking at the success of the FIDO Alliance in the field of strong authentication, we cannot but hope to see in the near future some kind of body uniting the major players in the LMP field, driven by a shared vision and the understanding that interoperability is the most critical factor for future development.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



