Offering Windows 10 as a free upgrade was definitely a smart marketing decision for Microsoft. Everyone is talking about the new Windows and everyone is eager to try it. Many of my friends and colleagues have already installed it, so I didn’t hesitate long myself and upgraded my desktop and laptop at the first opportunity.
Overall, the upgrade experience has been quite smooth. I’m still not sure whether I find all the visual changes in Windows 10 positive, but hey, nothing beats free beer! I also realize that much more has changed “under the hood”, including the numerous security features Microsoft has promised to deliver in its new operating system. Some of those features (like built-in Information Rights Management functions or support for the FIDO Alliance specifications for strong authentication) many consumers will probably not notice for a long time, if ever, so that’s a topic for another blog post. There are several things, however, that everyone will face immediately after upgrading, and not everyone will be happy with the way they are.
The most prominent consumer-facing security change in Windows 10 is probably Microsoft’s new browser – Microsoft Edge. Developed as a replacement for the aging Internet Explorer, it contains several new productivity features, but also eliminates quite a few legacy technologies (like ActiveX, browser toolbars and VBScript) that were a constant source of vulnerabilities. Just by switching from Internet Explorer to Edge, users are automatically protected from several major malware vectors. It does, however, include built-in PDF and Flash plugins, so it’s potentially still vulnerable to the two biggest known web security risks. It is possible to disable Flash Player under “Advanced settings” in the Edge app, which I would definitely recommend. Unfortunately, after upgrading, Windows changes your default browser to Edge, so make sure you change it back to your favorite one, like Chrome or Firefox.
Another major change that in theory should greatly improve Windows security is the new Update service. In Windows 10, users can no longer choose which updates to download – everything is installed automatically. Although this will greatly reduce the window of opportunity for an attacker to exploit a known vulnerability, an unfortunate side effect is that your computer will sometimes be rebooted automatically while you’re away from it. To prevent this, you must choose “Notify to schedule restart” under the advanced update options – this way you’ll at least be able to choose a more appropriate time for a reboot. Another potential problem is traffic charges: if you’re connecting to the Internet over a mobile hotspot, updates can quickly eat away your monthly traffic limit. To prevent this, you should mark that connection as “metered” under “Advanced options” in the network settings.
Windows Defender, the built-in antivirus program already included in earlier Windows versions, has been updated in a similar way: in Windows 10, users can no longer disable it with standard controls – after 15 minutes of inactivity, antivirus protection will be re-enabled automatically. Naturally, this greatly improves anti-malware protection for users who don’t have a third-party antivirus program installed, but quite a few users are unhappy with this kind of “totalitarianism”, so the Internet is full of recipes for blocking the program completely. Needless to say, this is not recommended for most users, and the only proper way of disabling Windows Defender is installing a third-party product that provides better anti-malware protection. The popular site AV-Comparatives maintains a list of security products compatible with Windows 10.
Since most anti-malware products utilize various low-level OS interfaces to operate securely, they are known to be the products most affected by the Windows upgrade procedure. Some will be silently uninstalled during the upgrade; others will simply stop working. Sometimes an active antivirus may even block the upgrade process or cause cryptic error messages. It is therefore important to uninstall anti-malware products before the upgrade and reinstall them afterwards (provided, of course, that they are known to be compatible with the new Windows – otherwise now would be a great time to update or switch your antivirus). This will ensure that the upgrade is smooth and won’t leave your computer unprotected.
What a surprising coincidence: on the same day we were preparing our Leadership Compass on API Security Management for publication, Amazon announced its own managed service for creating, publishing and securing APIs – Amazon API Gateway. Well, it’s already too late to make changes to our Leadership Compass, but the new service is still worth a look, hence this blog post.
Typically for Amazon, the solution is fully managed and based on AWS cloud infrastructure, meaning that there is no need to set up any physical or virtual machines or configure resources. The solution is tightly integrated with many other AWS services and is built directly into the central AWS console, so you can start creating or publishing APIs in minutes. If you already have existing backend services running on AWS infrastructure, such as EC2 or RDS, you can expose them to the world as APIs literally with a few mouse clicks. Even more compelling is the possibility to use AWS Lambda service to create completely managed “serverless” APIs without any need to worry about resource allocation or scaling.
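To make the “serverless” idea concrete, here is a minimal Python sketch of what such a Lambda-backed API function might look like. The exact event shape and the `name` parameter are illustrative assumptions, not taken from the announcement: the gateway hands the incoming HTTP request to the function as an event dictionary and maps the returned status code and body back to an HTTP response.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway endpoint.

    Assumed (illustrative) event shape: query string parameters arrive
    in event["queryStringParameters"]; the response carries an HTTP
    status code, headers and a JSON-encoded body back to the gateway.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Once such a function is deployed and wired to an API Gateway resource, Amazon handles routing, scaling and availability; there is no server to provision or patch.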
In fact, this seems to be the primary focus of the solution. Although it is possible to manage external API endpoints, this is only mentioned in passing in the announcement: the main reason for releasing the service seems to be providing a native API management solution for AWS customers, who until now have had to manage their APIs themselves or rely on third-party solutions.
Again typically for Amazon, the solution they delivered is a lean, no-frills service without all the fancy features of an enterprise API gateway. But since it is based on the existing AWS infrastructure and integrates heavily with other well-known Amazon services, it comes with guaranteed scalability and performance, an extremely low learning curve and, of course, low prices.
For API traffic management, Amazon CloudFront is used, with a special API caching mechanism added for increased performance. This ensures high scalability and availability for the APIs, as well as a reasonable level of network security, such as SSL encryption and DDoS protection. API transformation capabilities, however, are pretty basic: only XML-to-JSON conversion is supported.
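To illustrate what an XML-to-JSON payload transformation involves, here is a naive, hypothetical Python sketch. It is not how the gateway implements the feature; it handles attributes, text and repeated child elements, but deliberately ignores namespaces and mixed content.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_string):
    """Convert a simple XML document into a JSON string.

    A toy sketch of the kind of transformation an API gateway performs
    between an XML backend and JSON-speaking clients.
    """
    def node_to_dict(node):
        result = dict(node.attrib)
        for child in node:
            value = node_to_dict(child)
            if child.tag in result:          # repeated element -> list
                if not isinstance(result[child.tag], list):
                    result[child.tag] = [result[child.tag]]
                result[child.tag].append(value)
            else:
                result[child.tag] = value
        text = (node.text or "").strip()
        if text and not result:              # leaf element with text only
            return text
        if text:
            result["#text"] = text
        return result

    root = ET.fromstring(xml_string)
    return json.dumps({root.tag: node_to_dict(root)})
```

For example, `xml_to_json("<order id='1'><item>book</item><item>pen</item></order>")` yields a JSON object with the `id` attribute and the repeated `item` elements collected into a list.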
To authorize access to APIs, the service integrates with AWS Identity and Access Management, as well as with Amazon Cognito, providing the same IAM capabilities that are available to other AWS services. Again, the gateway provides basic support for OAuth and OpenID Connect, but lacks the broad support for authentication methods typical of enterprise-grade solutions.
Analytics capabilities are provided by Amazon CloudWatch service, meaning that all API statistics are available in the same console as all other AWS services.
There seems to be no developer portal functionality provided with the service at the moment. Although it is possible to create API keys for third-party developers, there is no self-service for that. In this regard, the service does not seem to be very suitable for public APIs.
To summarize: Amazon API Gateway is definitely not a competitor to existing enterprise API gateways like the products from CA Technologies, Axway or Forum Systems. However, as a native replacement for third-party managed services (3scale, for example), it has a lot of potential and, with Amazon’s aggressive pricing policies, it may very well threaten their market positions.
Currently, Amazon API Gateway is available in selected AWS regions, so it’s possible to start testing it today. According to the first reports from developers, there are still some kinks to iron out before the service becomes truly usable, but I’m pretty sure that it will quickly become popular among existing AWS customers and may even be a deciding factor for companies to finally move their backend services to the cloud (Amazon cloud, of course).
With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.
Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of macabre and comedy is so potent that it will make every security expert cringe. Let’s have a look at a few of the most recent examples.
After the notorious hack of Sony Pictures Entertainment last November, supposedly carried out by a group of hackers demanding that Sony not release a comedy movie about a plot to assassinate Kim Jong-un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was strongly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.
Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known case of a cyber-attack more or less reliably attributed to a state agency so far is Stuxnet, which after several years of research was found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an insider job with no relation to North Korea at all. Fortunately, cool heads in the US military have prevailed, but the thought that next time such an attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…
Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity ‘solutions’ the US and UK governments have come up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding government capabilities of spying on their own citizens.
Unfortunately, just like any other technology plan devised by politicians, it won’t just fail to work – it will have disastrous consequences for the whole of society: ruining people’s privacy, making every company’s IT infrastructure more vulnerable to hacking attacks (exploiting the same government-mandated backdoors), blocking a significant part of academic research, not to mention completely destroying businesses like security software vendors and cloud service providers. Sadly, even in Germany, the country where privacy is considered an almost sacred right, the government is engaged in similar activities as well.
Speaking of Germany, the latest, somewhat more lighthearted example of politicians’ inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany’s Federal Office for Information Security came up with a great idea. Citing concerns about mysterious Russian hackers still lurking in the network, it was announced that the existing infrastructure, including over 20,000 computers, has to be completely replaced. Leaving aside the obvious question – are the same people who designed the old network really able to come up with a more secure one this time? – one cannot but wonder whether the millions needed for such an upgrade could be better spent elsewhere. In fact, my first thought after reading the news was of President Erdogan’s new palace in Turkey. Apparently, he just had to move into a new 1,150-room presidential palace simply because the old one was infested with cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.
Still, any security expert cannot but continue asking more specific questions. Was there an adequate incident and breach response strategy in place? Has there been a training program for user security awareness? Were the most modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget for hiring security experts with adequate qualifications to run such a critical environment?
Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer to most of these questions would be “no”. German government agencies are also known for being quite frugal with regard to IT salaries, so the best experts inevitably go elsewhere.
Another question I cannot help thinking about: what if the hackers utilized one of the zero-day exploits that the German intelligence agency BND is known to have purchased for its own covert operations? That would be a perfect example of “karmic justice”.
Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings from past events can also provide a good introduction into relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!
When KuppingerCole outlined the concept of Life Management Platforms several years ago, the prospect of numerous completely new business models based on user-centric management of personal data may have seemed a bit too far-fetched to some. Although the very idea of customers being in control of their digital lives had been actively promoted for years through the efforts of ProjectVRM, and although even back then the public demand for privacy was already strong, interest in the topic was still largely academic.
Quite a lot has changed during these years. Explosive growth of mobile devices and cloud services has significantly altered the way businesses communicate with their partners and customers. Edward Snowden’s revelations have made a profound impression on the perceived importance of privacy. User empowerment is finally no longer an academic concept. The European Identity and Cloud Conference 2015 featured a whole track devoted to user managed identity and access, which provided an overview of recent developments as well as notable players in this field.
Qiy Foundation, one of the veteran players (in 2012 we recognized them as the first real implementation of the LMP concept), presented their newest developments and business partnerships. They were joined by Meeco, a new project centered around social channels and IoT devices, which won this year’s European Identity and Cloud Award.
Industry giants such as Microsoft and IBM have presented their latest research in the field of user-managed identity as well. Both companies are doing extensive research on technologies implementing the minimal disclosure principle fundamental to the Life Management Platform concept. Both Microsoft’s U-Prove and IBM’s Identity Mixer projects aim to give users cryptographically certified, yet open and easy-to-use means of disclosing their personal information to online service providers in a controlled and privacy-enhancing manner. Both implement a superset of traditional Public Key Infrastructure functionality, but instead of having a single cryptographic public key, users can have an independent pseudonymized key for each transaction, which makes tracking impossible, yet still allows a service provider to verify any subset of personal information a user may choose to share.
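The minimal disclosure principle can be illustrated with a deliberately simplified sketch: commit to each attribute separately, so that any subset can later be revealed and verified independently while the rest stays hidden. Note that the real U-Prove and Identity Mixer protocols achieve this with blind signatures and zero-knowledge proofs, not the bare salted hashes used here.

```python
import hashlib
import secrets

def commit_attributes(attributes):
    """Create one salted commitment per attribute.

    Toy illustration of minimal disclosure: because each attribute is
    committed to independently, the user can reveal any subset without
    exposing the others. Real schemes use zero-knowledge proofs instead.
    """
    salts = {name: secrets.token_hex(16) for name in attributes}
    commitments = {
        name: hashlib.sha256((salts[name] + str(value)).encode()).hexdigest()
        for name, value in attributes.items()
    }
    # Commitments go to the verifier; the salts stay with the user.
    return commitments, salts

def verify_disclosure(commitments, name, value, salt):
    """Check a single revealed attribute against its published commitment."""
    digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
    return digest == commitments.get(name)
```

A user could, for instance, publish commitments over name, address and date of birth, but reveal only an “over 18” attribute to a web shop, which can verify that one claim without learning anything else.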
Qiy Foundation, having the advantage of a very early start, already provides their own design and reference implementation of the whole stack of protocols and legal frameworks for an entire LMP ecosystem. Their biggest problem - and in fact the biggest obstacle for the whole future development in this area - is the lack of interoperability with other projects. However, as the LMP track and workshop at the EIC 2015 have shown, all parties working in this area are clearly aware of this challenge and are closely following each other’s developments.
In this regard, the role of the Kantara Initiative cannot be overestimated. Not only has this organization been developing UMA, an important protocol for user-centric access management, privacy and consent, it is also running the Trust Framework Provider program, which ensures that various trust frameworks around the world are aligned with government regulations and with each other. Still, looking at the success of the FIDO Alliance in the field of strong authentication, we cannot but hope to see, in the near future, some kind of body uniting the major players in the LMP field, driven by a shared vision and the understanding that interoperability is the most critical factor for future development.
This article originally appeared in the KuppingerCole Analysts' View newsletter.
After a long list of high-profile security breaches that culminated in the widely publicized Sony Pictures Entertainment hack last November, everyone has gradually become used to this type of news. If anything, it only confirms what security experts have known for years: the struggle between hackers and corporate security teams is fundamentally asymmetrical. Regardless of its size and budget, no company is safe from such attacks, simply because a security team has to cover all possible attack vectors, while a hacker needs just a single overlooked one.
Another important factor is the IT industry’s ongoing trend of rapidly growing interconnectivity and the gradual erosion of network perimeters caused by the adoption of cloud and mobile services, with trends such as “Industry 4.0” (i.e. connected manufacturing) and the IoT, with its billions of connected devices, adding to this erosion. All this makes protecting sensitive corporate data increasingly difficult, which is why the focus of information security is now shifting from protecting the perimeter towards real-time security intelligence and early detection of insider threats within corporate networks. Firewalls still play a useful role in enterprise security infrastructures, but, to put it bluntly, the perimeter is dead.
With that in mind, the latest news regarding the hack of the French television network TV5Monde last Wednesday looks even more remarkable. Not only were the network’s website and social media accounts taken over by hackers calling themselves “Cybercaliphate” and claiming allegiance to the Islamic State – they also managed to disrupt its TV broadcasting equipment for several hours. Political implications of the hack aside, the first thing in the article linked above that attracted my attention was the statement of the network’s director Yves Bigot: “At the moment, we’re trying to analyse what happened: how this very powerful cyber-attack could happen when we have extremely powerful and certified firewalls.”
Now, we all know that analyzing and attributing a cyber-attack is a very difficult and time-consuming process, so it’s still too early to judge whether the attack was indeed carried out by a group of uneducated jihadists from a war-torn Middle Eastern region or was the job of a hired professional team. But one thing that’s immediately clear is that it has nothing to do with firewalls. The technical details of the attack are still quite sparse, but according to this French-language publication, the hackers utilized a piece of malware written in Visual Basic to carry out their attack. In fact, it’s a variation of a known malware family that is detected by many antivirus products, and its most probable delivery vector was an unpatched Java vulnerability or even an infected email message. The hackers probably needed quite a long time to prepare their attack, but they are obviously not highly skilled technical specialists and were not even good enough at hiding their tracks.
In fact, it would be completely safe to say that the only people to blame for the catastrophic results of the hack are TV5Monde’s own employees. After deploying their “extremely powerful firewalls” they seemingly didn’t pay much attention to protecting their networks from insider threats. According to this report, they went so far as to put sticky notes with passwords on walls and expose them on live TV!
We can also assume with some confidence that their other security practices were equally lax. For example, the fact that all their social media accounts were compromised simultaneously probably indicates that the same credentials were used for all of them (or at least that the segregation-of-duties principle isn’t part of their security strategy). And, of course, the complete disruption of their TV service is a clear indication that their broadcasting infrastructure simply wasn’t properly isolated from their corporate network.
We will, of course, be waiting for additional details and new developments to be published, but it is already clear that the Sony hack wasn’t as educational for TV5Monde as security experts had probably hoped. Well, some people just need to learn from their own mistakes. You, however, don't have to.
The first thing every organization’s security team has to realize is that the days of perimeter security are over. The number of possible attack vectors on corporate infrastructure and data has increased dramatically, and the most critical ones (like compromised privileged accounts) actually operate from within the network. Combined with much stricter compliance regulations, this means that not having a solid information security strategy can have dramatic financial and legal consequences.
For a quick overview of the top 10 security mistakes with potentially grave consequences, I recommend having a look at the appropriately titled KuppingerCole Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid, published just a few days ago. And of course, you’ll find much more information on our website in the form of research documents, blog posts and webinar recordings.
In case you don’t know (and unless you live in Germany, you most probably don’t), De-Mail is an electronic communications service maintained by several German providers in accordance with the German e-government initiative and the De-Mail law, which declares it a secure form of communication. The purpose of the service is to complement traditional postal mail for the exchange of legal documents between citizens, businesses and government organizations.
Ever since its introduction in 2012, De-Mail has been struggling to gain acceptance among the German public. According to the latest report, only around 1 million private citizens have registered for the service, which is way below the original plans and not nearly enough to reach “critical mass”. That is actually quite understandable, since for a private person the service doesn’t offer much in comparison with postal mail (in fact, it even makes certain things, such as legally declining to receive a letter, no longer possible). Major points of criticism include incompatibility with regular e-mail and other legal electronic communications services, privacy concerns regarding the personal information collected during the identification process, as well as an insufficient level of security.
Now the German government is attempting once more to address the latter problem by introducing end-to-end encryption. The plan is to rely on the OpenPGP standard, which will be introduced by all cooperating providers (Deutsche Telekom, Mentana-Claimsoft and United Internet, known for its consumer brands GMX and Web.de) in May. According to Thomas de Maizière, Germany’s Federal Minister of the Interior, adding PGP support will provide an easy and user-friendly way of increasing the security of the De-Mail service. The reaction from security experts and the public, however, wasn’t particularly enthusiastic.
Unfortunately, no integration of the plugin into De-Mail user directory is offered, which means that users are supposed to tackle the biggest challenge of any end-to-end encryption solution – secure and convenient key exchange – completely on their own. In this regard, De-Mail looks no better than any other conventional email service, since PGP encryption is already supported by many mail applications in a completely provider-agnostic manner.
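The hard part of that key exchange is verifying, out of band, that a received key really belongs to the intended correspondent, typically by comparing a short fingerprint over the phone or in person. Here is a toy Python sketch of that comparison; note that real OpenPGP fingerprints are computed over the key packet structure as defined in RFC 4880, not over raw key bytes as done here.

```python
import hashlib

def key_fingerprint(public_key_bytes):
    """Compute a short, human-comparable fingerprint of a public key.

    Illustrative only: we hash the raw bytes and keep the first 40 hex
    digits, grouped in blocks of four so two people can read them to
    each other over the phone.
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

def keys_match(received_key, fingerprint_obtained_out_of_band):
    """True if the received key matches the independently obtained fingerprint."""
    return key_fingerprint(received_key) == fingerprint_obtained_out_of_band
```

Any tampering with the key in transit changes the fingerprint, which is exactly why the comparison must happen over a channel the mail provider does not control.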
Another issue is the supposed ease of use of the new encryption solution. In fact, De-Mail has already been offering encryption based on S/MIME, but it couldn’t get enough traction because “it was too complicated”. However, considering the effort necessary for a secure PGP key exchange, PGP can hardly be considered an easier alternative.
Finally, there is a fundamental question with many possible legal consequences: how does one combine end-to-end encryption with the requirement for a third party (the state) to be able to verify the legitimacy of the communications? In fact, the very same de Maizière is known for opposing encryption and advocating the necessity for intelligence agencies to monitor all communications.
In any case, De-Mail is here to stay, at least as long as it is actively supported by the government. However, I have serious doubts that attempts like this will have any noticeable impact on its popularity. Legal issues aside, the only proper way of implementing end-to-end communications security is not to slap another layer on top of the aging e-mail infrastructure, but to implement new protocols designed with security in mind from the very beginning. And the most reasonable way to do that is not to reinvent the wheel on your own, but to look at existing developments such as the Dark Mail Technical Alliance. What the industry needs is a cooperatively developed standard for encrypted communications, similar to what the FIDO Alliance has managed to achieve for strong authentication.
Reconciling conflicting views on encryption within the government would also help a lot. Pushing for NSA-like mass surveillance of all internet communications and advocating the use of backdoors and exploits by the same people that now promise increased security and privacy of government services isn’t going to convince either security experts or the general public.
For a topic so ubiquitous, so potentially disruptive and so overhyped in the media over the last couple of years, the concept of the Internet of Things (IoT) is surprisingly difficult to describe. Although the term itself appeared in the media nearly a decade ago, there is still no universally agreed definition of what the IoT actually is. This, by the way, is a trait it shares with its older cousin, the Cloud.
On the very basic level, however, it should be possible to define IoT as a network of physical objects (“things”) capable of interacting and exchanging information with each other as well as with their owners, operators or other people. The specifics of these communications vary between definitions, but it’s commonly agreed that any embedded smart devices that communicate over the existing Internet infrastructure can be considered “things”. This includes both consumer products, such as smart medical devices, home automation systems, or wearables, and enterprise devices ranging from simple RFID tags to complex industrial process monitoring systems. However, general-purpose computers, mobile phones and tablets are traditionally excluded, although they, of course, are used to monitor or control other “things”.
Looking at this definition, one may ask what exactly is new and revolutionary about the IoT. After all, industrial control systems have existed for decades, healthcare institutions have been using smart implanted devices like pacemakers and insulin pumps for years, and even smart household appliances are nothing new. This is true: the individual technologies that make the IoT possible have existed for several decades, and even the concept of “ubiquitous internet” dates back to 1999. However, it’s the relatively recent combination of technology, business and media influences that has finally made the IoT one of the hottest conversation topics.
First, continuously decreasing technology costs and growing Internet penetration have made connected devices very popular. Adding an embedded networking module to any device is cheap, yet it can potentially unlock completely new ways of interacting with other devices, creating new business value for manufacturers. Second, the massive proliferation of mobile devices encourages people to look for new ways of using them to monitor and control various aspects of their lives and work. As for enterprises, the proverbial Computing Troika is forcing them to evolve beyond their perimeter, to become more agile and connected, and IT is responding by creating new technologies and standards (such as big data analytics, identity federation or even cloud computing) to support these new interactions.
It is its scale and interoperability that fundamentally differentiate the Internet of Things from existing isolated networks of various embedded devices. And this scale is truly massive. Extrapolating the new fashion of making each and every device connected, it is estimated that by 2020 the number of “things” in the world will surpass 200 billion and the IoT market will be worth nearly $9 trillion. Although the industry is facing a lot of potential obstacles on its way to that market, including a lack of standards, massive security- and privacy-related implications, as well as the need to develop a mature application ecosystem, the business opportunities are simply too lucrative to pass up.
Practically every industry is potentially affected by the IoT revolution, including automotive, healthcare, manufacturing, energy and utilities, transportation, finance, retail and others. Numerous use cases demonstrate that adopting the IoT as part of business processes can generate immediate business value by improving process optimization, providing better intelligence and more efficient planning, enabling real-time reaction to various needs and opportunities, and improving customer service.
In addition to various improvements to business processes, the IoT enables a huge number of completely new consumer services, from life-changing ones to trivial but “nice to have” ones. One doesn’t need to explain how a doctor’s ability to monitor a patient’s vital signs can reduce mortality and improve quality of life, or how a connected vehicle improves road safety. But IoT benefits don’t end there, and it’s up to manufacturers to introduce completely new kinds of smart devices and persuade consumers that these devices will make their lives fundamentally better (this has already worked well for wearable devices, for example).
Of course, the IoT market doesn’t just include the manufacturers of “things” themselves. Supporting and orchestrating such a huge global infrastructure introduces quite a lot of technological challenges. Obviously, manufacturers of networking hardware will play a major role, and it’s no wonder that companies like Intel and Cisco are among the major IoT proponents. However, being able to address other challenges, like providing global-scale identity services for billions of transactions per minute, can open up huge business opportunities, and vendors are already moving in to grab attractive positions in this market. Another example of a technology that’s expected to get a substantial boost from the IoT is big data analytics, because the IoT is all about collecting large amounts of information from sensors, which then needs to be organized and used to make decisions.
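As a toy illustration of that analytics angle, the following Python sketch (all field names are assumptions) organizes raw sensor readings into per-device aggregates and flags statistical outliers. A real IoT pipeline would consume a stream from a message broker and run at a vastly larger scale, but the decision-making step looks much the same.

```python
from collections import defaultdict
from statistics import mean, stdev

def aggregate_readings(readings):
    """Group raw (device_id, value) sensor readings and compute basic stats.

    In a real deployment the readings would stream in continuously;
    here a plain list stands in for the ingestion layer.
    """
    by_device = defaultdict(list)
    for device_id, value in readings:
        by_device[device_id].append(value)
    return {
        device: {"count": len(vals), "mean": mean(vals),
                 "min": min(vals), "max": max(vals)}
        for device, vals in by_device.items()
    }

def flag_outliers(readings, threshold=3.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    values = [v for _, v in readings]
    mu, sigma = mean(values), stdev(values)
    return [(d, v) for d, v in readings if sigma and abs(v - mu) > threshold * sigma]
```

A monitoring service could run `flag_outliers` over each batch and raise an alert when, say, a temperature sensor suddenly reports a value far outside its normal range.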
Interestingly enough, most current large-scale IoT deployments seem to be driven not by enterprises, but by government-backed projects. The concept of the “smart city”, where networks of sensors continuously monitor environmental conditions, manage public transportation and so on, has attracted interest in many countries around the world. Such systems integrate naturally with existing eGovernment solutions; they also enable new business opportunities for various merchants and service companies that can plug directly into the global city network.
In any case, whether you represent a hardware vendor, a manufacturing company, a service provider or an IT company, there is one thing you cannot afford to do about the Internet of Things: ignore it. The revolution is coming, and although we still have to solve many challenges and address many new risks, the future is full of opportunities.
This article originally appeared in the KuppingerCole Analysts' View newsletter.
Almost two years after Edward Snowden made off with a cache of secret NSA documents, the gradual ongoing publication of these materials, complemented by independent research from information security experts, has provided unique insight into the extent of the global surveillance programs run by the US intelligence agencies and their partners in various European countries. Carefully timed, these publications have provided exciting and at the same time deeply disturbing reading for both IT experts and the general public.
Recently, it looked as if the trickle of news regarding our friends at the NSA had almost dried up, but apparently this was just the calm before the storm. First, just a few days ago, Kaspersky Lab published an extensive report on the “Equation Group”, a seemingly omnipotent international group of hackers active for over a decade and known to utilize extremely sophisticated hacking tools, including the ability to infect hard drive firmware. Technical details of these tools reveal many similarities with Stuxnet and Flame, both now known to have been developed in collaboration with the NSA. It was later confirmed by a former NSA employee that the agency indeed possesses and widely uses this technology for intelligence collection.
And before the IT security community could even catch its collective breath, The Intercept, the publication run by Edward Snowden’s closest collaborators, unveiled an even bigger surprise. Apparently, back in 2010, American and British intelligence agencies carried out a massive breach of mobile phone encryption in a joint operation targeting telecommunication companies and SIM card manufacturers.
If the report is to be believed, they managed to penetrate the network of Gemalto, the world’s largest SIM card manufacturer, which ships over 2 billion SIM cards yearly. Apparently, they not only resorted to hacking, but also ran a global surveillance operation on Gemalto employees and partners. In the end, they managed to obtain copies of the secret keys embedded into SIM cards, which enable both mobile phone identification in providers’ networks and encryption of phone calls. With these keys, the NSA and GCHQ are, in theory, able to easily intercept and decrypt any call made from a mobile phone, as well as impersonate any mobile device with a copy of its SIM card. As opposed to previously known surveillance methods (like setting up a fake cell tower), this method is completely passive and undetectable. By exploiting deficiencies in GSM encryption protocols, they are also able to decrypt any previously recorded call, even one from years ago.
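The trust model that makes this attack so devastating can be sketched in a few lines. In real GSM, the A8 algorithm derives a per-call session key Kc from the SIM key Ki and a random challenge, and the A5 cipher encrypts the voice frames with it; the toy below substitutes HMAC-SHA256 and a hash-derived XOR keystream for those algorithms (it is emphatically not real GSM crypto, just an illustration of why stealing Ki defeats the whole scheme, including recordings made in the past).

```python
import hashlib
import hmac
import secrets

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: anyone holding Ki can re-derive Kc from the public RAND."""
    return hmac.new(ki, rand, hashlib.sha256).digest()

def keystream_xor(kc: bytes, data: bytes) -> bytes:
    """Stand-in for A5: XOR with a hash-derived keystream (encrypt == decrypt)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(kc + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The network authenticates the phone and encrypts a call.
ki = secrets.token_bytes(16)      # secret burned into the SIM at manufacture
rand = secrets.token_bytes(16)    # per-call challenge, sent over the air in the clear
kc = derive_session_key(ki, rand)
ciphertext = keystream_xor(kc, b"over-the-air voice frames")

# A passive eavesdropper records RAND and the ciphertext. Without Ki the
# recording is opaque; with a stolen copy of Ki, decryption is trivial --
# even for calls recorded years before the key was stolen.
stolen_ki = ki                    # what the alleged breach yielded
recovered = keystream_xor(derive_session_key(stolen_ki, rand), ciphertext)
print(recovered)                  # b'over-the-air voice frames'
```

The key point the sketch makes: nothing about the interception is active or detectable, because everything the attacker needs besides Ki is already broadcast in the clear.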
Since Gemalto doesn’t just produce SIM cards, but various other kinds of security chips as well, there is a substantial chance that these could also have been compromised. Both Gemalto and its competitors, as well as other companies in the industry, are now fervently conducting internal investigations to determine the extent of the breach. It’s worth noting that, according to Gemalto officials, they noticed no indications of the breach back then.
A side note: this is just more proof that even security professionals need better security tools to stay ahead of intruders.
Now, what lesson should security experts, as well as ordinary people, learn from this? First and foremost, everyone should understand that in the ongoing fight against information security threats, we are all basically on our own. Western governments, which supposedly should be protecting their citizens against international crime, have been revealed to be conducting the same activities on a larger and more sophisticated scale (after all, intelligence agencies possess much bigger budgets and legal protection). Until now, all attempts to limit the intelligence agencies’ powers have been largely unsuccessful. Governments even go as far as to lie outright about the extent of their surveillance operations in order to protect them.
Another, more practical consideration is that the only solutions we can still more or less count on are complete end-to-end encryption systems in which the whole information chain is controlled by the users themselves, including secure management of encryption keys. Until practical quantum computers become available, breaking a reasonably strong encryption key is still much more difficult than stealing it. For any other communication channel, you should significantly reconsider your risk policies.
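What "end-to-end" buys you can be illustrated with a minimal sketch: the key lives only on the two endpoints, and the relay in the middle stores and forwards envelopes it cannot read or tamper with undetected. Real systems (Signal, PGP and the like) negotiate keys with Diffie-Hellman exchanges and use vetted ciphers; this stdlib-only toy assumes a pre-shared key and a hash-based keystream purely to show the trust boundary.

```python
import hashlib
import hmac
import secrets

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA256-derived keystream (not for real use)."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Endpoint:
    """One end of the conversation; the only place the key ever exists."""
    def __init__(self, shared_key: bytes):
        self.key = shared_key

    def seal(self, msg: bytes) -> bytes:
        nonce = secrets.token_bytes(12)
        ct = xor_stream(self.key, nonce, msg)
        tag = hmac.new(self.key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag          # the only thing the relay ever sees

    def open(self, envelope: bytes) -> bytes:
        nonce, ct, tag = envelope[:12], envelope[12:-32], envelope[-32:]
        expected = hmac.new(self.key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("envelope was tampered with in transit")
        return xor_stream(self.key, nonce, ct)

key = secrets.token_bytes(32)            # lives only on the two devices
alice, bob = Endpoint(key), Endpoint(key)
envelope = alice.seal(b"meet at noon")   # opaque blob from the relay's viewpoint
print(bob.open(envelope))                # b'meet at noon'
```

A relay, a provider or an eavesdropper who never holds the key gets nothing but the envelope; as the article notes, stealing that key then becomes the attacker's cheapest option, which is exactly why key management must stay in the users' hands.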
At KuppingerCole, we have been following the progress of the FIDO Alliance for quite some time. Since its specifications for scalable and interoperable strong authentication were published last year, FIDO has already seen several successful deployments in collaboration with such industry giants as Samsung, Google and Alibaba. However, probably its biggest breakthrough was announced just a few days ago by none other than Microsoft. According to the announcement, Microsoft’s upcoming Windows 10 will include support for FIDO standards to enable strong, password-free authentication for a number of consumer and enterprise applications.
We knew, of course, that Microsoft had been working on a new approach to identity protection and access control in its next operating system. Moving away from passwords towards stronger and more secure forms of authentication has been declared one of the top priorities for Windows 10. Of course, solutions like smartcards and OTP tokens have existed for decades; however, in the modern heterogeneous and interconnected world, relying on traditional enterprise PKI infrastructures or limiting ourselves to a single-vendor solution is obviously impractical. Therefore, a new kind of identity is needed, one which works equally well for traditional enterprises and in consumer and web scenarios.
Now, unless you’ve been entirely avoiding all news from Microsoft in recent years, you have probably already guessed their next move. Embracing an open standard that allows third-party manufacturers to develop compatible biometric devices, and providing a common framework for hardware and software developers to build additional security into their products instead of building another “walled garden”, isn’t just a good business decision; it’s the only sensible strategy.
Microsoft joined the FIDO Alliance as a board member back in December 2013. Since then, the company has been actively contributing to the development of FIDO specifications. Apparently, a significant part of its designs will be included in the FIDO 2.0 specification, which will then be incorporated into the Windows 10 release. Unfortunately, it’s a bit too early to talk about specific details of that contribution, since the FIDO 2.0 specifications are not yet public.
However, it is already possible to get a peek at some of the new functionality in action. The current Windows 10 Technical Preview already provides several integration scenarios for Windows Sign-in, Azure Active Directory and a handful of major SaaS services, including Microsoft’s own Office 365 and partners like Salesforce, Citrix and Box. Using Azure Active Directory, it’s already possible to achieve end-to-end strong two-factor authentication completely without passwords. The Windows 10 release will add support for on-premises Active Directory integration as well as integration with consumer cloud services.
And, of course, since this authentication framework will be built upon an open standard, third-party developers will be able to quickly integrate it with their products and services, security device manufacturers will be able to bring a wide array of interoperable strong authentication solutions to the market, and enterprise users will finally be able to forget the words “vendor lock-in”. If this isn’t a win-win situation, I don’t know what is.
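The core idea behind FIDO's password-free authentication is a public-key challenge-response: at registration the device generates a key pair and hands the server only the public key; at login the device signs a fresh server-issued challenge, so no password or reusable secret ever crosses the wire or sits in a server database. The sketch below illustrates that flow with Schnorr signatures over a deliberately tiny group (real FIDO deployments use ECDSA-style signatures over standard curves; this stdlib-only toy is insecure by design and purely illustrative).

```python
import hashlib
import secrets

# Tiny Schnorr group: P = 2*Q + 1 with P, Q prime; G generates the order-Q subgroup.
# Far too small for real security -- chosen so the arithmetic is easy to follow.
P, Q, G = 2039, 1019, 4

def h(r: int, challenge: bytes) -> int:
    """Fiat-Shamir hash binding the commitment r to the server's challenge."""
    data = str(r).encode() + b"|" + challenge
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def register():
    """Device-side: create a key pair; only the public key goes to the server."""
    x = secrets.randbelow(Q - 1) + 1       # private key, never leaves the device
    return x, pow(G, x, P)                 # (private, public)

def sign(x: int, challenge: bytes):
    """Device-side: sign the server's challenge with the private key."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = h(r, challenge)
    return e, (k + x * e) % Q

def verify(y: int, challenge: bytes, sig) -> bool:
    """Server-side: check the signature using only the registered public key."""
    e, s = sig
    r = (pow(G, s, P) * pow(y, (-e) % Q, P)) % P   # g^s * y^-e reconstructs g^k
    return h(r, challenge) == e

# Registration: the server stores pub. Authentication: the server sends a
# fresh random challenge and checks the device's signature on it.
priv, pub = register()
challenge = secrets.token_bytes(16)
assert verify(pub, challenge, sign(priv, challenge))
# A replayed signature fails against any other challenge:
assert not verify(pub, b"some other challenge", sign(priv, challenge))
```

Because the server stores only public keys, a server-side breach leaks nothing an attacker can log in with, which is exactly the property that makes an open, multi-vendor ecosystem of authenticators viable.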
It is estimated by the International Telecommunication Union that the total number of mobile devices in the world has already exceeded the number of people. Mobile devices are becoming increasingly advanced as well. In fact, modern smartphones are as powerful as desktop computers, but “know” much more about their owners: current and past location, contents of their private text messages, photos and other sensitive information, as well as their online banking credentials and other financial data. They are also always connected to the Internet and thus are especially vulnerable to hacking and malware exploits.
Growing adoption of cloud services has brought its own share of privacy concerns: more and more sensitive data is now managed by third parties, so users are losing visibility and control over their information. However, it is social computing that has had the most profound impact on our society. Ultimately, it has led to a significant erosion of the public expectation of privacy and made it nearly impossible to undo accidental sharing of private information. Some people have gone as far as to claim that privacy is no longer relevant. This, of course, could not be further from reality: various studies clearly indicate that users value their privacy and strongly object to their personal data being shared with third parties without consent. However, many users still do not clearly understand to what extent mobile devices can affect their privacy.
As mobile technologies become more sophisticated, general public awareness of the associated risks simply cannot keep up. Every day, mobile users can easily fall victim to yet another new method of tracking, stalking or privacy abuse. Stolen personal information has become a valuable product on the black market. It includes not just financial or medical information, but any kind of PII that can be used as a key to your other assets. It’s not just hackers who are after this kind of loot: telecommunications providers, search engines and social network operators collect as much of this information about their users as possible, to use it for targeted advertising or simply to resell to third parties. And, after Snowden, do we even need to mention government agencies?
For enterprise IT departments, the growing adoption of mobile devices has brought its own share of headaches. One of the biggest current challenges for the IT industry is undoubtedly the Bring Your Own Device (BYOD) problem. While the technological challenges are massive, a proper BYOD strategy must address privacy issues as well. Many organizations can easily overlook them, because issues like liability for private data leaked or lost from company-managed devices still vary from country to country; they are often considered to be in a grey area of current laws and regulations. These regulations are changing, however, and to stay on the safe side, companies should always carefully study and address the legal aspects of their mobile device policies: a mistake can cost you a fortune. KuppingerCole provides this kind of expertise as well.
However, regulations alone cannot solve the fundamental cause of so many privacy-related problems of current mobile platforms. As mentioned earlier, modern smartphones and tablets have the same computing power as desktop computers. Yet, both consumers and device manufacturers still fail to realize that mobile devices need at least the same level of protection against malware and hackers as traditional computers.
Modern mobile platforms are based on Unix-like operating systems and incorporate various low-level security features like hardware isolation and code signing. Yet they are still far behind desktop and server systems when it comes to more sophisticated security tools like firewalls or application control. Even worse, no modern mobile platform includes built-in vendor-neutral security APIs that would allow third-party developers to create such tools. Although there are several solutions available on the market now (like Samsung KNOX), they are all limited to a small number of supported devices and have security issues of their own.
Modern mobile platforms are much more closed than desktop operating systems, and this is a source of privacy-related concerns as well. Consider a typical situation for iOS: we learn about data leaks or other violations in a standard app, and it takes months for Apple to even acknowledge the problem, let alone release a patch for it. The open nature of Android’s ecosystem, on the other hand, leads to platform fragmentation, and vendors often simply stop supporting old devices completely. Despite their differences, the result is the same: because of fundamental deficiencies in their platforms, both vendors fail to provide adequate means of protecting users’ privacy.
Thus, it is clear that long-term solutions to these problems require a major paradigm shift. Privacy cannot be protected by government regulations or “bolt on” security products – it has to become an integral part of any mobile platform and application. Unfortunately, this stands in stark contrast to the goals of many hardware and software vendors, with only a few already realizing the business value behind “privacy by design”. To break the current trend of hoarding as much personal information as possible, consumers, enterprises and government regulators have to join their efforts and bring everyone to a clear realization that long-term losses from violating customers’ trust will always be greater than short-term gains.
For more information and concrete recommendations to enterprises, mobile device manufacturers and application developers please refer to KuppingerCole’s Advisory Note “Dealing with privacy risks in mobile environments”.
This article originally appeared in the KuppingerCole Analysts' View newsletter.