
KuppingerCole Blog

Really! Stop Your Employees Using Smart Phones!

Mar 23, 2015 by Amar Singh

Why not just switch off every electrical device and live in a cave?

I am on record on several occasions for coming out in support of the UK government’s cyber initiatives, including the Ten Steps to Cyber Security (Ten Steps) and the more recent Cyber Essentials. So I was a bit surprised when a business owner asked if he should backtrack on his recent “smart phone for all” bonus for his employees. When I asked him why, he mentioned an article he had just read in the Telegraph, titled “Spooks tell business: Consider stripping staff of smart phones to avoid cyber attacks”.

Clichés, oh Clichés.

The same article then trots out the typical “your staff are the weakest link” cliché. Oh, and let’s not forget the bit about being blackmailed by spies! What better way to draw attention to an article than an attention-grabbing headline, even when it’s not quite accurate and somewhat misleading. What’s even more displeasing is the way the article tries to impress the reader by implying that this information has been “seen”, instead of mentioning that the Ten Steps is publicly available and accessible to every business. In fact, what the article is referring to is merely an updated and revised version of the UK Government’s advice that was first issued in 2012. So, will ditching the phone stop cyber attacks? Put simply: No.

Why? You may ask.

  • Most people are not going to ditch their smart phones. I know I will not.
  • In fact, most people now carry multiple smart devices, including a tablet, a phone and, more recently, smart wearables like watches.
  • Any organisation with a forward-thinking, revenue-generating strategy will already have adopted a mobile-first approach.
  • Just a few days ago, the much loved and sometimes loathed Uber was named the most valuable transport company in the world, even though it does not own any vehicles. Could it be because it has a mobile-first strategy?
  • Cyber attackers will simply find some other way to attack a business. They could even revert to the good old ways of targeting your laptops and desktop computers!

To be fair to the government, they appear to have taken a sensible and, I would argue, risk-based approach. Below is an excerpt of what they say: “Consider the balance between system usability and security.” Yes, there is the bit about external drives like USB sticks, which have been the cause of many a hack and many sleepless nights for security teams. I discuss the approach to this headache further down.

Next, Humans, you guessed it, will be Humans!

It’s getting very tiring, borderline exhausting, having to hear that staff, who happen to be mostly humans for now, are to blame for all cyber security woes. This needs to stop. Stop declaring the human as the primary problem. Yes, you and I, us humans that is, are part of the problem, but being flippant about it is not the way to solve it.

Again, the government has taken a balanced approach and does not bang on with “it’s your staff’s fault” pronouncements. At least that’s how I have read it. Here is what one paragraph from the Top Ten document set says: “Without exception, all users should be trained on the secure use of their mobile device for the locations they will be working in.” To me that sounds more like: “You businesses out there, spend some money and educate and train all your users.” I concur.

Yes, Mobile is Insecure, but…

Mobile working is insecure. But any device, including your new TV and your old laptop, is insecure as long as it is switched on! Mobile working has several benefits that both employees and organisations recognise. So accept the facts and have a plan to prevent, detect and respond.

The Top Ten document contains some good advice that I would encourage all to read and understand. In the meantime I strongly recommend that every business owner:

  • Stop blaming the employee for all your cyber security problems
  • Support the employee with the necessary technology to ensure that ‘mistakes’ cannot happen easily.
  • Yes there is sufficient technology available today that can help prevent and detect cyber attacks.
  • Some technologies to consider are automatic VPN connectors, micro-virtualisation technologies and encryption technologies. Please engage KC for more information on how we can help you.
  • What the government is actually saying is be pragmatic, understand the risks, and educate the users.
  • Last, but not least, accept the facts, review the threats specific to your company and understand the risk and have a plan to prevent, detect and respond.

Finally! Cut the Government a Break! Seriously.

To be fair to the government, it is quite hard producing a document that fits every organisation’s risk profile. The analogy of one size fits all comes to mind. In my own customer dealings I have had more senior board members and business owners ask me about cyber security as a result of the UK government’s efforts to make cyber security a board issue. Finally, please take a risk-based approach and spend some time understanding the threats and those attackers that would want to target your company. Cyber or not, this is common-sense threat and risk management. There is no point spending on technology and preparing for spies monitoring your employees if, for example, you are producing regular cleaning products. In such a company it would make more sense if effort and time were spent on preventing insiders leaking financial or human resources data. That’s what I recommend and that’s actually what the government is trying to say.

You can read about the UK Government’s Top Ten Steps to Cyber Security here.



Just say it! User Experience Trumps Security!

Mar 23, 2015 by Amar Singh

I was about to file The Register’s mobile security article into my “just another article on mobiles and security” when I noticed what I believe to be a half-witted quote.

So, in context. The Register published an article titled “Banks defend integrity of passcode-less TouchID login”. The banks and the quote in question are from the Royal Bank of Scotland (RBS) and NatWest.

What’s the half-witted quote then?

I will address the first two statements for this blog piece.

“We do everything we can to make banking secure for our customers and we've tested this to make sure it was safe before launch. Other banking institutions across the world are also using this technology with their customers.”

Where is the proof that the above statement is true? The banks could have chosen to obtain the BSI Kitemark for Secure Digital Transactions. Barclays appears to be the only bank that has some of its products approved by the BSI (you can check this on the BSI site).

“API spoofing and access to data held in the secure keychain is only possible on a jail-broken iPhone. We strongly advise customers against tampering with the security of their phone.”

Really? Blaming it on jail-broken iPhones and users? Most non-technical customers would not, in my opinion, know whether their iPhone is jail-broken or not. In addition, the banks appear to acknowledge that there is a problem by admitting that jail-broken phones are susceptible. So why not configure their app to check for and block installations on jail-broken iPhones?
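Such a check is neither exotic nor expensive to build. The sketch below is purely illustrative and not taken from any bank’s app: it shows the kind of heuristic jailbreak detection an app could run before allowing Touch ID-only login. The well-known jailbreak artifact paths are real examples; the fileExists callback is a stand-in for whatever native file-system bridge the real app would use.

```typescript
// Hypothetical sketch of jailbreak heuristics a banking app could apply
// before enabling Touch ID login. The fileExists callback stands in for
// the native iOS check, so this is illustrative, not a drop-in implementation.

type FileExists = (path: string) => boolean;

// Well-known artifacts left behind by common jailbreaks.
const JAILBREAK_ARTIFACTS = [
  "/Applications/Cydia.app",
  "/Library/MobileSubstrate/MobileSubstrate.dylib",
  "/bin/bash",
  "/usr/sbin/sshd",
  "/etc/apt",
];

// Returns true if any known jailbreak artifact is present on the device.
function looksJailbroken(fileExists: FileExists): boolean {
  return JAILBREAK_ARTIFACTS.some((path) => fileExists(path));
}

// Example policy: refuse to enable Touch ID-only login on a suspicious device.
function canEnableTouchIdLogin(fileExists: FileExists): boolean {
  return !looksJailbroken(fileExists);
}
```

Heuristics like these can be bypassed by a determined attacker, but they would at least catch the casual “I didn’t know my phone was jail-broken” scenario the banks themselves point to.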

Also, maybe the banks and their outsourcer should have read the recent Mobile Threat Assessment report from FireEye that discusses the increasing ease with which hackers can bypass Apple’s strict review process and invoke risky private APIs. This is on non-jail-broken iPhones! (The report is titled OUT OF POCKET: A Comprehensive Mobile Threat Assessment of 7 Million iOS and Android Apps.)

Be Nice to the Banks. Come on. Surely they know what they are doing, right?

Let’s give the banks the benefit of the doubt for a minute. They value their customers, right? During their countless requirements workshops, user experience would have been at the forefront of all their requirements. Right?

“What would our users want?” may have been one of their primary questions during their multiple brainstorming sessions. Surely security would have come up during these discussions, right?

So, what about security?

Now I know banks, like most organisations, have to balance security versus cost. Banks have a risk appetite and tolerance and must make trade-offs when it comes to security versus usability. The four-digit PIN is a great example. I get that view and in many cases agree with that approach.

I am guessing there must have been some trade-off with this Touch ID based app too. They must have assumed that there will be those who will hack and abuse the system for monetary gain. However, I am guessing that, with their compute and brain power, they calculated the likelihood and the financial impact to be negligible: the risk acceptable and within their appetite.

So why not come out with one of the first Touch ID only banking apps!

On the other hand it could just be that no one actually thought about security! Maybe because they wrongly assumed that Apple products are super secure or they simply forgot about it altogether.

What’s truly disappointing is that the bank had an opportunity to get both user experience and security right without necessarily sacrificing either. Sadly, it seems, security was again a second thought.



De-Mail: Now with End-to-end Encryption?

Mar 10, 2015 by Alexei Balaganski

In case you don’t know (and unless you live in Germany, you most probably don’t), De-Mail is an electronic communications service maintained by several German providers in accordance with the German e-government initiative and the De-Mail law, which declares it a secure form of communication. The purpose of the service is to complement traditional postal mail for the exchange of legal documents between citizens, businesses and government organizations.

Ever since its original introduction in 2012, De-Mail has been struggling to gain acceptance among the German public. According to the latest report, only around 1 million private citizens have registered for the service, which is way below the original plans and not nearly enough to reach the “critical mass”. That is actually quite understandable, since for a private person the service doesn’t bring much in comparison with postal mail (in fact, it even makes certain things, such as legally declining to receive a letter, no longer possible). Major points of criticism of the service include incompatibility with regular e-mail and other legal electronic communications services, privacy concerns regarding the personal information collected during the identification process, as well as an insufficient level of security.

Now the German government is attempting once more to address the latter problem by introducing end-to-end encryption. The plan is to rely on the OpenPGP standard, which will be introduced by all cooperating providers (Deutsche Telekom, Mentana-Claimsoft and United Internet, known for its consumer brands GMX and Web.de) in May. According to Thomas de Maizière, Germany’s Federal Minister of the Interior, adding PGP support will provide an easy and user-friendly way of increasing the security of the De-Mail service. The reaction from security experts and the public, however, wasn’t particularly enthusiastic.

Apparently, to enable this new functionality, users will have to install a browser plugin. The solution is based on an open source JavaScript OpenPGP implementation and is currently available for Chrome and Firefox browsers only. According to publicly available statistics, this leaves over 60% of all German internet users out of luck, since their browsers are not supported. An even bigger problem is the lack of support for mobile apps or desktop mail clients.
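To illustrate what such a plugin is presumably doing under the hood, here is a minimal sketch using the open source openpgp npm package (from the same OpenPGP.js family of JavaScript implementations). The function names follow recent releases of that library and may differ from the plugin’s internal code; key distribution is deliberately left out.

```typescript
// Minimal OpenPGP encryption sketch using the "openpgp" npm package.
// Assumes the recipient's ASCII-armored public key has already been obtained
// and verified somehow, which is the hard part in practice.
import * as openpgp from "openpgp";

async function encryptForRecipient(
  plaintext: string,
  recipientArmoredPublicKey: string
): Promise<string> {
  // Parse the recipient's ASCII-armored public key.
  const publicKey = await openpgp.readKey({ armoredKey: recipientArmoredPublicKey });
  // Wrap the plaintext in an OpenPGP message object.
  const message = await openpgp.createMessage({ text: plaintext });
  // Encrypt; the default output is an ASCII-armored PGP MESSAGE block
  // that only the matching private key can decrypt.
  const encrypted = await openpgp.encrypt({ message, encryptionKeys: publicKey });
  return encrypted as string;
}
```

Note that the hard part is not the call to encrypt, but obtaining and verifying recipientArmoredPublicKey in the first place, which is exactly the gap discussed next.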

Unfortunately, no integration of the plugin into the De-Mail user directory is offered, which means that users are supposed to tackle the biggest challenge of any end-to-end encryption solution – secure and convenient key exchange – completely on their own. In this regard, De-Mail looks no better than any other conventional email service, since PGP encryption is already supported by many mail applications in a completely provider-agnostic manner.

Another issue is the supposed ease of use of the new encryption solution. In fact, De-Mail has already been offering encryption based on S/MIME, but it couldn’t get enough traction because “it was too complicated”. However, if you consider the effort necessary for a secure PGP key exchange, PGP can hardly be considered an easier alternative.

Finally, there is a fundamental question with many possible legal consequences: how does one combine end-to-end encryption with the requirement for the third party (the state) to be able to verify its legitimacy? In fact, the very same de Maizière is known for opposing encryption and advocating the necessity for intelligence agencies to monitor all communications.

In any case, De-Mail is here to stay, at least as long as it is actively supported by the government. However, I have serious doubts that attempts like this will have any noticeable impact on its popularity. Legal issues aside, the only proper way of implementing end-to-end communications security is not to try to slap another layer on top of the aging e-mail infrastructure, but to implement new protocols designed with security in mind from the very beginning. And the most reasonable way to do that is not to try to reinvent the wheel on your own, but to look for existing developments like, for example, the Dark Mail Technical Alliance. What the industry needs is a cooperatively developed standard for encrypted communications, similar to what the FIDO Alliance has managed to achieve for strong authentication.

Reconciling conflicting views on encryption within the government would also help a lot. Pushing for NSA-like mass surveillance of all internet communications and advocating the use of backdoors and exploits by the same people that now promise increased security and privacy of government services isn’t going to convince either security experts or the general public.



Migrating IT Infrastructure to the Cloud

Mar 10, 2015 by Mike Small

Much has been written about “DevOps” but there are other ways for organizations to benefit from the cloud. Moving all or part of their existing IT infrastructure and applications could provide savings in capital and, in many cases, increase security.

The cloud has provided an enormous opportunity for organizations to create new markets, to experiment and develop new applications without the need for upfront investment in hardware and to create disposable applications for marketing campaigns. This approach is generally known as DevOps; where the application is developed and deployed into operation in an iterative manner which is made possible by an easily expansible cloud infrastructure.

While DevOps has produced some remarkable results, it doesn’t help with the organization’s existing IT infrastructure. There are many reasons why an organization could benefit from moving some of their existing IT systems to the cloud. Cost is one, but there are others, including the need to constantly update hardware and to maintain a data centre. Many small organizations are limited to operating in premises that are not suitable as a datacentre; for example, in offices over a shopping mall. Although the organization may be wholly dependent upon their IT systems, they may have no control over sprinkler systems, power, telecommunications, and even guaranteed 24x7 access to the building. They may be at risk of theft as well as fire, and incidents outside of their control. These are all factors which are well taken care of by cloud service providers (CSPs) hosted in Tier III data centres.

However, moving existing IT systems and applications to the cloud is not so simple. These legacy applications may be dependent upon very specific characteristics of the existing infrastructure, such as IP address ranges or a particular technology stack, which may be difficult to reproduce in standard cloud environments. It is also important for customers to understand the sensitivity of the systems and data that they are moving to the cloud and the risks that these may be exposed to. Performing a cloud readiness risk assessment is an essential prerequisite for an organization planning to use cloud services. Many of the issues around this relate to regulation and compliance and are described in KuppingerCole Analysts' View on Compliance Risks for Multinationals.

However, it was interesting to hear of a US-based CSP, dinCloud, that is focusing on this market. dinCloud first brought a hosted virtual desktop to the market. They have now expanded their offering to include servers, applications and IT infrastructure. dinCloud claims that its “Business Provisioning” service can help organizations to quickly and easily migrate all or part of their existing infrastructure to the cloud.

This is a laudable aim; dinCloud claims some successes in the US and intends to expand worldwide. However, some of the challenges that they will face in Europe are the same as those currently faced by all US-based CSPs – a lack of trust. Some of this has arisen through the Snowden revelations, and the ongoing court case, in which Microsoft in Ireland is being required to hand over emails to the US authorities, is fanning these flames. On top of this, the EU privacy regulations, which are already strict, face being strengthened; and in some countries certain kinds of data must remain within the country. These challenges are discussed in Martin Kuppinger’s blog Can EU customers rely on US Cloud Providers?

This is an interesting initiative but to succeed in Europe dinCloud will need to win the trust of their potential customers. This will mean expanding their datacentre footprint into the EU/EEA and providing independent evidence of their security and compliance. When using a cloud service a cloud customer has to trust the CSP; independent certification, balanced contracts taking specifics of local regulations and requirements into account, and independent risk assessments are the best way of allowing the customer to verify that trust.



Facebook profile of the German Federal Government thwarts efforts to improve data protection

Mar 05, 2015 by Martin Kuppinger

There is a certain irony in the fact that the Federal Government has launched a profile on Facebook almost simultaneously with the change of the social network’s terms of use. While the Federal Minister of Justice, Heiko Maas, is backing consumer organizations in their warnings about Facebook, the Federal Government has taken the first step in setting up its own Facebook profile.

With the changes in the terms of use, Facebook has massively expanded its ability to analyze the data of users. Data that users leave behind on pages outside of Facebook is also stored, for use in targeted advertising and possibly other purposes. On the other hand, the user has the possibility of better managing personal settings for his/her own privacy. The bottom line: it remains clear that Facebook is collecting even more data in a hard-to-control manner.

Like Federal Minister of Justice Maas says, "Users do not know which data is being collected or how it is being used."

For this reason alone, it is difficult to understand why the Federal Government is taking this step right at this moment. After all, it has been able to do its work so far without Facebook.

With its Facebook profile, the Federal Government is ensuring that Facebook is, for example, indirectly receiving information on the political interests and preferences of the user. Since it is not clear just how this information could be used today or in the future, it is a questionable step.

If one considers the Facebook business model, it can also have an immediate negative impact. Facebook's main source of income is targeted advertising based on the information that the company has collected on its users. With the additional information that will be available via the Federal Government's Facebook profile, interest groups can, for example, in the future selectively advertise on Facebook to pursue their goals.

Here it is apparent, as with many businesses, that the implications of commercial Facebook profiles are frequently not understood. On the one hand, there is the networking with interested Facebook users. Their value is often overrated - these are not customers, not leads and NOT voters, but at best people with a more or less vague interest. On the other hand, there is information that a company, a government, a party or anyone else with a Facebook profile discloses to Facebook: Who is interested in my products, my political opinions (and which ones) or for my other statements on Facebook?

The Facebook business model is exactly that - to monetize this information - today more than ever before with the new business terms. For a company, this means that the information is also available to the competition. You could also say that Facebook is the best possibility of informing the competition about a company's (more or less interested) followers. In marketing, but also in politics, one should understand this correlation and weigh whether it is worth paying the implicit price for the added value in the form of data that is interesting to competitors.

Facebook may be "in" - but it is in no way worth it for every company, every government, every party or other organization.

End users have to look closely at the new privacy settings and limit them as much as possible if they intend to stay on Facebook. In the meantime, a lot of the communication has moved to other services like WhatsApp, so now is definitely the time to reconsider the added value of Facebook. And sometimes, reducing the amount of communication and information that reaches one is also added value.

The Federal Government should in any case be advised to consider the actual benefits of its Facebook presence. 50,000 followers are not 50,000 voters by any means - the importance of this number is often massively overrated. The Federal Government has to be clear about the contradiction between its claim to strong data protection rules and its actions. To go to Facebook now is not even fashionable any more - it is plainly the wrong step at the wrong time.

According to KuppingerCole, marketing managers in companies should also analyze exactly what price they are paying for the anticipated added value of a Facebook profile - one often pays more while the actual benefits are much less. Or has the number of customers increased accordingly in the last fiscal year because of 100,000 followers? A Facebook profile can definitely have its uses. But you should always check carefully whether there is truly added value.



Organization, Security and Compliance for the IoT

Mar 03, 2015 by Mike Small

The Internet of Things (IoT) provides opportunities for organizations to get closer to their customers and to provide products and services that are more closely aligned to their needs. It provides the potential to enhance the quality of life for individuals through better access to information and more control over their environment. It makes possible more efficient use of infrastructure through more precise control based on detailed and up-to-date information. It will change the way goods are manufactured by integrating manufacturing machinery, customers and partners, allowing greater product customization as well as optimizing costs, processes and logistics.

However, the IoT comes with risks. The US Federal Trade Commission recently published a report on a workshop it held on this subject. This report, which is limited in its scope to IoT devices sold or used by consumers, identifies three major risks: enabling unauthorised access and misuse of personal information, facilitating attacks on other systems and creating risks to personal safety. In KuppingerCole’s view, the wider risks are summarized in the following figure:

Organizations adopting this technology need to be aware of and manage these risks. As with most new technologies there is often a belief that there is a need to create a new organizational structure. In fact it is more important to ensure that the existing organization understands and addresses the potential risks as well as the potential rewards.

Organizations should take a well-governed approach to the IoT by clearly defining the business objectives for its use and by setting constraints. The IoT technology used should be built to be trustworthy and should be used in a way that is compliant with privacy laws and regulations. Finally, the organization should be able to audit and assure its use of the IoT.

The benefits of the IoT come from the vast amount of data that can be collected, analysed and exploited. Hence the challenges of Big Data governance, security and management are inextricably linked with the IoT. The data needs to be trustworthy and it should be possible to confirm both its source and its integrity. The infrastructure used for the acquisition, storage and analysis of this data needs to be secured; yet the IoT is being built using many existing protocols and technologies that are weak and vulnerable.

The devices which form part of the IoT must be designed, manufactured, installed and configured to be trustworthy. The security built into these devices for the risks identified today needs to be extensible to be proof against future threats, since many of these devices will have lives measured in decades. There are existing low-power secure technologies and standards that have been developed for mobile communications and banking, and these should be appropriately adopted, adapted and improved to secure the devices.

Trust in the devices is based on trust in their identities and so these identities need to be properly managed. There are a number of challenges relating to this area but there is no general solution.

Organizations exploiting data from the IoT should do so in a way that complies with laws and regulations. For personal information, particular care should be given to aspects such as ensuring informed consent, data minimisation and information stewardship. There is a specific challenge to ensure that users understand and accept that ownership of the device does not imply complete “ownership” of the data. It is important that the lifecycle of data from the IoT is properly managed, from creation or acquisition to disposal. An organization should have a clear policy which identifies which data needs to be kept, why it needs to be kept and for how long. There should also be a clear policy for the deletion of data that is not retained for compliance or regulatory reasons.
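One way to make such a policy auditable is to express it as data that the systems handling IoT records can evaluate automatically. The sketch below is a hypothetical illustration only (the categories, reasons and retention periods are invented), not a prescription from this article:

```typescript
// Hypothetical retention policy expressed as data, plus a check that decides
// whether a record of a given category and age may be deleted.
interface RetentionRule {
  category: string;   // e.g. "sensor-telemetry", "personal-profile"
  reason: string;     // documented reason why the data is kept
  retainDays: number; // how long it is kept
}

const policy: RetentionRule[] = [
  { category: "sensor-telemetry", reason: "process optimization", retainDays: 90 },
  { category: "personal-profile", reason: "contractual obligation", retainDays: 365 },
];

function shouldDelete(category: string, ageDays: number): boolean {
  const rule = policy.find((r) => r.category === category);
  // Data with no documented reason to keep it is deleted by default.
  return !rule || ageDays > rule.retainDays;
}
```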

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



So what do we mean by “Internet of Things” and what do we need to get right?

Mar 03, 2015 by Graham Williamson

The phrase “Internet of Things” (IoT) was coined to describe the wide range of devices coming onto the market with an interface that allows them to be connected to another device or network. There is no question that the explosion in the number of such devices is soon going to change our lives forever. We are going to be monitoring more, controlling more and communicating more. The recent FTC Staff report indicates there will be 25 billion devices attached to networks this year and 50 billion in 5 years’ time.

It’s generally agreed that there are several categories in the IoT space:

  • Smart appliances:
these are devices that monitor things, actuate things or communicate data. Included in this category are remote weather stations, remote lighting controllers or cars that communicate status to receivers at service centres.
  • Wearables:
these devices typically monitor something e.g. pedometers or heart monitors and transmit the data to a close-by device such as a smartphone on which there is an app that either passively reports the data or actively transmits it to a repository for data analysis purposes.
  • Media devices:
these are typically smartphones or tablets that need one or more connections to external devices such as a Bluetooth speaker or a network connected media repository.

By far the largest category is the smart appliance. For instance, in the building industry it is now normal to have hundreds of IP devices in a building feeding information back to the building information system for HVAC control, security monitoring and physical access devices. This has significantly reduced building maintenance costs for security and access control, and has significantly reduced energy costs by automating thermostat control and even anticipating weather forecast impacts.

In his book “Abundance: The Future is Better than You Think”, Peter Diamandis paints a picture of an interconnected world with unprecedented benefits for society. He is convinced that within a few years we will have devices that, with a small blood sample or a saliva swab, will provide a better medical diagnosis than many doctors.

So what’s the problem?
For most connected devices there are no concerns. Connecting a smartphone to a Bluetooth speaker is simplicity itself and, other than annoying neighbours within earshot, there is simply no danger or security consideration. But for other devices there are definite concerns and significant danger in poorly developed and badly managed interfaces. If a device has an application interface that can modify a remote device the interface must be properly designed with appropriate protection built in. There is now a body of knowledge on how such application programmable interfaces (APIs) should be constructed and constrained and initiatives are being commenced to provide direction on security issues.

For instance, if a building information system can open a security door based on an input from a card swipe reader, the API had better require digital signing and possibly encryption to ensure the control can’t be spoofed. If a health monitor can make an entry in the user’s electronic health record database, the API needs to ensure that only the appropriate record can be changed.
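As a concrete illustration of the first example, the sketch below shows one common way to protect such a door-control API with a shared-secret HMAC signature, so that a request forged without the key is rejected. It uses Node’s built-in crypto module; the command format is an assumption made up for the example, not a reference to any particular building-management product.

```typescript
// Illustrative HMAC signing and verification for a door-control command.
import { createHmac, timingSafeEqual } from "crypto";

interface DoorCommand {
  doorId: string;
  action: "open" | "lock";
  issuedAt: number; // Unix timestamp so stale or replayed requests can be rejected
}

// The sender computes the signature over the serialized command with a shared secret.
function sign(command: DoorCommand, secret: string): string {
  return createHmac("sha256", secret).update(JSON.stringify(command)).digest("hex");
}

// The building information system recomputes and compares before acting on the command.
function verify(command: DoorCommand, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(command, secret), "hex");
  const received = Buffer.from(signature, "hex");
  // Constant-time comparison avoids leaking the correct signature via timing.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

In a real deployment the secret would live in the devices’ secure storage and the issuedAt timestamp would be checked against a short validity window, but the principle is the same: an unsigned or tampered command is simply not executed.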

Another issue is privacy. What if my car communicates its health to my local garage? That’s of great benefit, because I should get better service. But what if the driver’s name and address are also communicated, let alone their credit card details? Social media has already proven that the public at large is notoriously bad at protecting their privacy; it’s up to the industry to avoid innovation that on the surface looks beneficial and benign, but in reality leads us down a dangerous slippery slope to a situation in which hackers can exploit vulnerabilities.

What can we do?
The onus is on suppliers of IoT to ensure the design of their systems is both secure and reliable. This means they must mandate standards for developers to adhere to in using the APIs of their devices or systems. It is important that developers know the protocols to be used and the methods that can be employed to send data or retrieve results.

For example:

  • Smart appliances should use protocols such as OAuth (preferably three-legged for a closed user group) to ensure proper authentication of the user or device to the application being accessed (see the token-check sketch after this list).
  • Building information systems should be adequately protected with an appropriate access control mechanism; two-factor authentication should be the norm and no generic accounts should be allowed.
  • Systems provided to the general public should install with a basic configuration that does not collect or transmit personally identifiable information.
  • APIs must be fully documented with a description, data schemas, authentication scopes and methods supported; clearly indicating safe and idempotent methods in web services environments.
  • Organisations installing systems with APIs must provide a proper software development environment with full development, test, pre-production and production environments. Testing should include both functional and volume testing with a defined set of regression tests.
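As a sketch of the first bullet above, the following Express middleware validates an OAuth bearer access token (here a signed JWT) on every call to an appliance API. Express and jsonwebtoken are used purely as example libraries, and the issuer, audience and key are placeholders; a full three-legged flow would additionally involve the user authorising the client at the authorization server.

```typescript
// Resource-server side of an OAuth-protected appliance API: every request
// must carry a bearer token signed by the trusted authorization server.
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
// PEM-encoded public key of the authorization server (placeholder).
const ISSUER_PUBLIC_KEY = process.env.ISSUER_PUBLIC_KEY ?? "";

app.use((req, res, next) => {
  const auth = req.headers.authorization ?? "";
  const token = auth.startsWith("Bearer ") ? auth.slice("Bearer ".length) : null;
  if (!token) {
    return res.status(401).json({ error: "missing access token" });
  }
  try {
    // Verifies the signature and rejects tokens that are expired,
    // issued by the wrong server or not intended for this API.
    jwt.verify(token, ISSUER_PUBLIC_KEY, {
      issuer: "https://auth.example.com",
      audience: "appliance-api",
    });
    return next();
  } catch {
    return res.status(401).json({ error: "invalid access token" });
  }
});

// An example protected endpoint on the appliance.
app.get("/status", (_req, res) => res.json({ ok: true }));

app.listen(3000);
```

The point is not the specific libraries but that every request to the device API carries a verifiable, scoped token instead of relying on a generic account.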

Conclusion
The promise of the IoT is immense. We can now attach a sensor or actuator to just about anything. We can communicate with it via NFC, Bluetooth, Wi-Fi or 3G technology. We can watch, measure and control our world. This will save money because we can shut things off remotely to save energy, improve safety because we will be notified more quickly when an event occurs, and save time because we can communicate service detail accurately and fully.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



Internet of Opportunities

Mar 03, 2015 by Alexei Balaganski

For a topic so ubiquitous, so potentially disruptive and so overhyped in the media in the last couple of years, the concept of the Internet of Things (IoT) is surprisingly difficult to describe. Although the term itself appeared in the media nearly a decade ago, there is still no universally agreed definition of what IoT actually is. This, by the way, is a trait it shares with its older cousin, the Cloud.

On the very basic level, however, it should be possible to define IoT as a network of physical objects (“things”) capable of interacting and exchanging information with each other as well as with their owners, operators or other people. The specifics of these communications vary between definitions, but it’s commonly agreed that any embedded smart devices that communicate over the existing Internet infrastructure can be considered “things”. This includes both consumer products, such as smart medical devices, home automation systems, or wearables, and enterprise devices ranging from simple RFID tags to complex industrial process monitoring systems. However, general-purpose computers, mobile phones and tablets are traditionally excluded, although they, of course, are used to monitor or control other “things”.

Looking at this definition, one may ask what exactly is new and revolutionary about IoT? After all, industrial control systems have existed for decades, healthcare institutions have been using smart implanted devices like pacemakers and insulin pumps for years, and even smart household appliances are nothing new. This is true: individual technologies that make IoT possible have existed for several decades, and even the concept of “ubiquitous internet” dates back to 1999. However, it’s the relatively recent combination of technology, business and media influences that has finally made IoT one of the hottest conversation topics.

First, continuously decreasing technology costs and growing Internet penetration have made connected devices very popular. Adding an embedded networking module to any device is cheap, yet it can potentially unlock completely new ways of interaction with other devices, creating new business value for manufacturers. Second, the massive proliferation of mobile devices encourages people to look for new ways of using them to monitor and control various aspects of their life and work. As for enterprises, the proverbial Computing Troika is forcing them to evolve beyond their perimeter, to become more agile and connected, and IT is responding by creating new technologies and standards (such as big data analytics, identity federation or even cloud computing) to support these new interactions.

It is its scale and interoperability that fundamentally differentiate the Internet of Things from existing isolated networks of various embedded devices. And this scale is truly massive. Extrapolating the new fashion of making each and every device connected, it is estimated that by 2020, the number of “things” in the world will surpass 200 billion and the IoT market will be worth nearly $9 trillion. Although the industry is facing a lot of potential obstacles on their way to that market, including lack of standards, massive security and privacy-related implications, as well as the need to develop a mature application ecosystem, the business opportunities are simply too lucrative to pass.

Practically every industry is potentially impacted by the IoT revolution, including automotive, healthcare, manufacturing, energy and utilities, transportation, financial, retail and others. Numerous use cases demonstrate that adoption of IoT as a part of business processes can generate immediate business value by improving process optimization, providing better intelligence and more efficient planning, enabling real-time reaction to various needs and opportunities, and improving customer service.

In addition to various improvements of business processes, IoT enables a huge number of completely new consumer services, from life changing to trivial but “nice to have” ones. One doesn’t need to explain how a doctor’s ability to monitor patient’s vital signs can reduce mortality and improve quality of life or how a connected vehicle improves road safety. IoT benefits don’t end there, and it’s up to manufacturers to introduce completely new kinds of smart devices and persuade consumers that these devices will make their life fundamentally better (this has already worked well for wearable devices, for example).

Of course, the IoT market doesn’t just include manufacturers of “things” themselves. Supporting and orchestrating such a huge global infrastructure introduces quite a lot of technological challenges. Obviously, manufacturers of networking hardware will play a major role, and it’s no wonder that companies like Intel or Cisco are among the major IoT proponents. However, being able to address other challenges, like providing global-scale identity services for billions of transactions per minute, can open up huge business opportunities, and vendors are already moving in to grab an attractive position in this market. Another example of a technology that’s expected to get a substantial boost from IoT is Big Data Analytics, because IoT is all about collecting large amounts of information from sensors, which then needs to be organized and used to make decisions.

Interestingly enough, most of current large-scale IoT deployments seem to be driven not by enterprises, but by government-backed projects. The concept of “smart city”, where networks of sensors are continuously monitoring environmental conditions, managing public transportation and so on, has attracted interest in many countries around the world. Such systems naturally integrate with existing eGovernment solutions; they also enable new business opportunities for various merchants and service companies that can plug directly into the global city network.

In any case, whether you represent a hardware vendor, a manufacturing, service or IT company, there is one thing about the Internet of Things you cannot afford to do: ignore it. The revolution is coming, and although we still have to solve many challenges and address many new risks, the future is full of opportunities.

This article has originally appeared in the KuppingerCole Analysts' View newsletter.



Gemalto feels secure after attack - the rest of the world does not

Feb 25, 2015 by Martin Kuppinger

In today’s press conference regarding last week’s publications on a possible compromise of SIM cards from Gemalto through the theft of keys, the company confirmed security incidents during the time frame mentioned in the original report. It’s difficult to say, however, whether their other security products have been affected, since significant parts of the attack, especially in the really sensitive part of their network, did not leave any substantial traces. Gemalto therefore concludes that there were no such attacks.

According to the information published last week, back in 2010 a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners. During the attack, they obtained secret keys that are integrated into SIM cards at the hardware level. With the keys, it’s possible to decrypt mobile phone calls as well as to create copies of these SIM cards and impersonate their users on mobile provider networks. Since Gemalto, according to its own statements, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and nonintrusive methods.

It’s entirely possible that Gemalto is correct with their statement that there is no evidence for such a theft. Too much time has passed since the attack and a significant part of the logs from the affected network components and servers, which are needed for the analysis of such a complex attack, are probably already deleted. Still, this attack, just like the theft of so called “seeds” from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: "Digital security is not static. Today's state of the art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis". In other words, one can expect that the attacks were at least partially successful - not necessarily against Gemalto itself, but against their customers and other SIM card manufacturers. There is no reason to believe that new technologies are secure. According to the spokesperson for the company, Gemalto is constantly facing attacks and outer layers of their protection have been repeatedly breached. Even if Gemalto does maintain a very high standard in security, the constant risks of new attack vectors and stronger attackers should not be underestimated.

Unfortunately, no concrete details were given during the press conference about which changes to their security practices are already in place and which are planned, other than a statement regarding continuous improvement of these practices. However, until the very concept of a “universal key”, in this case the encryption key on a SIM card, is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk for the secure part of their infrastructure low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the fact that there were no traces of attacks does not mean that there were no attacks.

Gemalto has also repeatedly pointed out that the attack has only affected 2G network SIMs. There is, however, no reason to believe that 3G and 4G networks are safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Sure, they are talking about “rare exceptions”, but it nevertheless means that unsecured channels still exist.

The incident at Gemalto has once again demonstrated that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as the privacy of correspondence and telecommunications, but to the economy as well. The image of companies like Gemalto, and thus their business success and enterprise value, is at risk from such actions.

Even more problematic is that the knowledge of other attackers is growing with each newly published attack vector. Stuxnet and Flame have long been well analyzed. It can be assumed that the intelligence agencies of North Korea, Iran and China, as well as criminal groups, studied them long ago. The act can be compared to the leaking of atomic bomb designs, with a notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies to procure zero-day exploits in order to carry out investigations on suspects’ computers. Zero-day attacks are called that because code to exploit a newly discovered vulnerability is available before the vendor even becomes aware of the problem, leaving them literally zero days to fix it. In reality, this means that attackers are able to exploit a vulnerability long before anyone else discovers it. Now, if government agencies keep the knowledge about such vulnerabilities to create their own malware, they are putting the public and businesses in great danger, because one can safely assume that they won’t be the only ones having that knowledge. After all, why would sellers of such information make their sale only once?

With all due respect for the need for states and their intelligence agencies to respond to the threat of cyber-crime, it is necessary to consider two potential problems stemming from this approach. On one hand, it requires a defined state control over this monitoring, especially in light of the government’s new capability of nationwide mobile network monitoring in addition to already available Internet monitoring. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening a Pandora's Box and causing damage of unprecedented scale.



Operational Technology: Safety vs. Security – or Safety and Security?

Feb 24, 2015 by Martin Kuppinger

In recent years, the area of “Operational Technology” – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of businesses as well as concepts like the connected (or even hyper-connected) enterprise or “Industry 4.0”, which describes a connected and dynamic production environment. “Industry 4.0” environments must be able to react to customer requirements and other changes by better connecting them. More connectivity is also seen between industrial networks and the Internet of Things (IoT). Just think about smart meters that control local power production that is fed into large power networks.

However, when Information Security people start talking about OT Security there might be a gap in common understanding. Different terms and different requirements might collide. While traditional Information Security focuses on security, integrity, confidentiality, and availability, OT has a primary focus on aspects such as safety and reliability.

Let’s just pick two terms: safety and security. Safety is not equal to security. Safety in OT is considered in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical. Safety there is defined as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. However, in the full definition, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

It is a good idea to work on a common understanding of terms first, when people from OT security and IT security start talking. For decades, they were pursuing their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – which can be best illustrated when comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. In this regard, many people from both sides consider this a fundamental divide between OT and IT: the “Safety vs. Security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even as simple as an antivirus update) can introduce enough risk of disruption of an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. However, in the long term, not making necessary changes may lead to an increased risk of a deliberate disruption by a hacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. Another much more recent event occurred last year in Germany, where hackers used malware to get access to a control system of a steel mill, which they then disrupted to such a degree that it could not be shut down and caused massive physical damage (but, thankfully, no injuries or death of people).

When looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even IoT, this conflict between safety and security isn’t an exception; every enterprise is doomed to face it sooner or later. There is no simple answer to this problem, but clearly, we have to find solutions and IT and OT experts must collaborate much more closely than they are (reluctantly) nowadays.

One possible option is limiting access to connected technology, for instance, by defining it as a one-way road, which enables information flow from the industrial network, but establishes an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.

