
Blog posts by Alexei Balaganski

Building APIs without programming? New tools from CA make it possible

Nov 25, 2015 by Alexei Balaganski

Last week, CA Technologies announced several new products in its API Management portfolio. The announcement was made during the company's annual CA World event, which took place November 16-20 in Las Vegas. This year, the key topic of the event was the Application Economy, so it is completely unsurprising that API management was a big part of the program. After all, APIs are one of the key technologies driving the "digital transformation", helping companies stay agile and competitive, enable new business models and open up new communication channels with partners and customers.

Whether companies are leveraging APIs to accelerate their internal application development, expose their business competence to new markets or adopt new technologies like software-defined computing infrastructures, they face a lot of complex challenges and have to rely on third-party solutions to manage their APIs. The API Management market, despite its relatively young age, has matured quickly, and CA Technologies has become one of the leading players in it. In fact, just a few months ago KuppingerCole recognized CA as the overall leader in its Leadership Compass on API Security Management.

However, even a broad range of available solutions for publishing, securing, monitoring or monetizing APIs does not change the fact that before a backend service can be exposed as an API, it has to be implemented – that is, a team of skilled software developers is still required to bring your corporate data or intelligence into the API economy. Although quite a number of approaches exist to make the developer's job as easy and efficient as possible (sometimes even eliminating the need for a standalone backend, like the AWS Lambda service), businesspeople are still unable to participate in this process on their own.

Well, apparently, CA is going to change that. The new CA Live API Creator is a solution that aims to eliminate programming from the process of creating data-driven APIs. For a lot of companies, joining the API economy means unlocking their existing data stores and making their enterprise data available for consumption through standard APIs. For these use cases, CA offers a complete solution to create REST endpoints that expose data from multiple SQL and NoSQL data sources using a declarative data model and a graphical point-and-click interface. By eliminating the need to write code or SQL statements manually, the company claims a tenfold time-to-market improvement and 40 times more concise logic rules. Most importantly, however, businesspeople no longer need to involve software developers – the process seems to be easy and straightforward enough for them to manage on their own.

For more advanced scenarios, these APIs can be extended with declarative business rules, JavaScript event processing and fine-grained access control. Naturally, the product can also integrate with other products from CA’s API Management portfolio. The whole solution can be flexibly deployed as a virtual appliance, software in various app servers or directly in the cloud.

CA Live API Creator consists of three components:

  1. Database Explorer, which provides interactive access to enterprise data across SQL and NoSQL data sources directly from a browser. With this tool, users can not only browse and search this information, but also manage it and even create "back office apps" with graphical forms for editing data across multiple tables.
  2. API Creator, the actual tool for creating data-driven APIs using a point-and-click GUI. It provides the means for designing data models, defining logical rules, managing access control and so on, all without the need to write application code or SQL statements. It’s worth stressing that it’s not a GUI-based code generator – the solution is based on an object model, which is directly deployed to the API server.
  3. The aforementioned API Server is responsible for execution of APIs, event processing and other runtime logic. It connects to the existing data sources and serves client requests to REST-based API endpoints.
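
To give an idea of what consuming such an auto-generated endpoint might look like, here is a minimal sketch in TypeScript. The host, resource name and authorization header are illustrative assumptions, not the product's documented interface:

```typescript
// Hypothetical example: querying an auto-generated REST endpoint
// produced by a tool like Live API Creator. The URL, resource name
// ("customers") and auth header value are placeholders for illustration.
interface Customer {
  id: number;
  name: string;
  balance: number;
}

async function fetchCustomers(): Promise<Customer[]> {
  const response = await fetch(
    "https://api.example.com/rest/default/demo/v1/customers",
    { headers: { Authorization: "CALiveAPICreator demo_api_key:1" } }
  );
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  // Each row of the underlying table comes back as a JSON object.
  return (await response.json()) as Customer[];
}

fetchCustomers().then((rows) => console.log(`Fetched ${rows.length} customers`));
```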

Although the product hasn't been released yet (it will become available in December), and although it should be clearly understood that it's by nature not a universal solution for all possible API use cases, we can already see a lot of potential. The very idea of eliminating software developers from the API publishing process is pretty groundbreaking, and if CA delivers on its promises to make the tool easy enough for business people, it will become a valuable addition to the company's already first-class API management portfolio.


Real-Time Security Intelligence Market Overview

Nov 03, 2015 by Alexei Balaganski

With the ever-growing number of new security threats and continued deterioration of traditional security perimeters, demand for new security analytics tools that can detect those threats in real time is growing rapidly. Real-Time Security Intelligence solutions are going to redefine the way existing SIEM tools are working and finally provide organizations with clearly ranked actionable items and highly automated remediation workflows.

Various market analysts predict that security analytics solutions will grow into a multibillion-dollar market within the next five years. Many vendors, big and small, are now rushing to bring their products to this market in anticipation of its potential. However, the market is still far from reaching maturity. First, the underlying technologies themselves have not reached full maturity yet, with areas like machine learning and threat intelligence still under constant development. Second, very few vendors possess enough intellectual property or resources to integrate all these technologies into a single universal solution.

In a sense, the RTSI segment is the frontier of the overall market for information security solutions. When selecting the tools most appropriate for their requirements, customers thus have to be especially careful and should not take vendors' claims for granted. Support for different data sources, the scope of anomaly detection and overall usability may vary significantly.

Although we should expect that in a few years the market will settle and the broad range of products with various scopes of functionality available today will eventually converge to a reasonable number, we are still far from that point. While some vendors opt for evolutionary development of their existing products, others pursue strategic acquisitions. At the same time, smaller companies and even startups are bringing their niche products to the market, aiming at customers looking for point solutions to their most critical problems. The resulting multitude of solutions makes them quite difficult to compare, and the market's future direction even harder to predict. We can, however, name a few notable vendors from different strata of the RTSI market, to at least give you an idea where to start looking.

First, large vendors currently offering "traditional" SIEM solutions are obviously interested in bringing their products up to date with the latest technological developments. These include IBM Security with their QRadar SIEM and Guardium products with significantly improved analytics capabilities, the RSA Security Analytics platform, NetIQ Sentinel, as well as smaller vendors like Securonix or LogRhythm.

Another class of vendors are companies coming from the field of cybersecurity. Their products focus more on detecting and preventing external and internal threats; by integrating big data analytics and their own or third-party sources of threat intelligence, they naturally evolve into RTSI solutions that are leaner and easier to deploy than traditional SIEMs and are targeted at smaller organizations. Notable examples here include CyberArk with Privileged Threat Analytics as part of its Privileged Account Security solution, Hexis Cyber Solutions with its HawkEye G and AP analytics platforms, and AlienVault with its Unified Security Management offering. Another important, yet much less represented aspect of security intelligence is user behavior analytics, with vendors like BalaBit, which recently added the Blindspotter tool to its portfolio, or Gurucul, which provides a number of specialized analytics solutions in that area.

Besides the bigger vendors, there are numerous startups whose products usually concentrate on a single source of analytics information, such as network traffic analysis, endpoint security or mobile security analytics. Their solutions are usually targeted at small and medium businesses and, although limited in functional scope, rely more on ease of deployment, simplicity of the user interface and quality of support service to win potential customers. For small companies without sufficient security budgets or expert teams, these products can be a blessing, because they quickly address the most critical security problems. To name just a few vendors: Seculert with their cloud-based analytics platform, Cybereason with an unorthodox approach to endpoint security analytics, Cynet with their rapidly deployed integrated solution, Logtrust with a focus on log analysis, and Fortscale with a cloud-based solution for detecting malicious users.

Surely, such a large number of different solutions makes the RTSI market quite difficult to analyze and predict. On the other hand, almost any company will probably be able to find a product tailored specifically to its requirements. It's vital, however, to look for complete solutions with managed services and quality support, not just another set of tools.


The Glorious Return of the Albanian Virus

Sep 23, 2015 by Alexei Balaganski

When I first read about the newly discovered kind of OS X and iOS malware called XcodeGhost, quite frankly, the first thing that came to my mind was: "That's the Albanian virus!" In case you don't remember the original reference, the "Albanian virus" is an old joke: a chain letter politely asking the recipient to delete their own files and forward the message to everyone they know, since its author lacked the technology to do any of that automatically.

I can vividly imagine a conversation among hackers, which would go like this:

- Why do we have to spend so much effort on planting our malware on user devices? Wouldn’t it be great if someone would do it for us?

- Ha-ha, do you mean the Albanian virus? Wait a second, I’ve got an idea!

Unfortunately, it turns out that the situation isn’t quite that funny and in fact poses a few far-reaching questions regarding the current state of iOS security.

What is XcodeGhost anyway? In short, it's Apple's official developer platform Xcode for creating OS X and iOS software, repackaged by as yet unknown hackers to include malicious code. Any developer who downloads this installer and uses it to compile an iOS app automatically includes the malicious code in their app, which is then submitted to the App Store and distributed to all users as a regular update. According to Palo Alto Networks, which published a series of reports on XcodeGhost, the malware is able to collect information from mobile devices and send it to a command-and-control server. It also tries to phish for users' credentials and steal passwords from the clipboard.

Still, the most remarkable part is that quite a few legitimate and popular iOS apps from well-known developers (mostly based in China) became infected and were successfully published in the App Store. Although it baffles me why a seasoned developer would download Xcode from a file-sharing site instead of getting it for free directly from Apple, the list of victims includes Tencent, creator of the hugely popular app WeChat with over 600 million users. In total, around 40 apps in the App Store have been found to contain the malicious code. Update: another report by FireEye identifies over 4,000 affected apps.
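
Incidentally, the simplest defense against a tampered installer is verifying its cryptographic checksum against the value published by the vendor before running it. Here is a minimal sketch in TypeScript (Node.js); the file name and expected hash below are placeholders for illustration, not real Apple-published values:

```typescript
import { createHash } from "crypto";
import { createReadStream } from "fs";

// Compute the SHA-256 hash of a downloaded installer and compare it
// to the value published by the vendor out of band.
const EXPECTED_SHA256 = "0123abcd..."; // hypothetical vendor-published value

function sha256OfFile(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha256");
    createReadStream(path)
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")))
      .on("error", reject);
  });
}

sha256OfFile("Xcode_installer.dmg").then((actual) => {
  if (actual !== EXPECTED_SHA256) {
    console.error("Checksum mismatch: do NOT install this file!");
  } else {
    console.log("Checksum verified.");
  }
});
```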

Unfortunately, there is practically nothing that iOS users can do at the moment to prevent this kind of attack. Surely, they should uninstall any of the apps that are known to contain this malicious code, but how many have not yet been discovered? We can also safely assume that other hackers will follow with their own implementations of this new concept or concentrate on attacking other components of the development chain.

Apple's position on antivirus apps for iOS has been consistent for years: they are unnecessary and create a false impression. In fact, none of the apps remaining in the App Store under the name "Antivirus" is actually capable of detecting malware: iOS simply provides no interfaces that would allow them to function. In this regard, users' safety is entirely in Apple's hands. Even if Apple upgrades the App Store to include better malware detection for submitted apps and incorporates stronger integrity checks into Xcode, can we be sure that there will be no new outbreaks of this kind of malware? After several major security bugs like Heartbleed or POODLE were recently discovered in core infrastructures (and yes, I do consider the App Store a critical infrastructure, too), how many more times does the industry have to fall on its face before it finally starts thinking "security first"?


Windows 10: new anti-malware features and challenges

Aug 19, 2015 by Alexei Balaganski

Offering Windows 10 as a free upgrade was definitely a smart marketing decision for Microsoft. Everyone is talking about the new Windows and everyone is eager to try it. Many of my friends and colleagues have already installed it, so I didn’t hesitate long myself and upgraded my desktop and laptop at the first opportunity.

Overall, the upgrade experience has been quite smooth. I'm still not sure whether I find all the visual changes in Windows 10 positive, but hey, nothing beats free beer! I also realize that much more has changed "under the hood", including the numerous security features Microsoft has promised to deliver in its new operating system. Some of those features (like built-in Information Rights Management functions or support for FIDO Alliance specifications for strong authentication) will probably go unnoticed by most consumers for a long time, if ever, so they are a topic for another blog post. There are several things, however, that everyone will face immediately after upgrading, and not everyone will be happy with the way they work.

The most prominent consumer-facing security change in Windows 10 is probably Microsoft's new browser, Microsoft Edge. Developed as a replacement for the aging Internet Explorer, it contains several new productivity features, but also eliminates quite a few legacy technologies (like ActiveX, browser toolbars and VBScript) that were a constant source of vulnerabilities. Just by switching from Internet Explorer to Edge, users are automatically protected from several major malware vectors. Edge does, however, include built-in PDF and Flash plugins, so it's potentially still vulnerable to the two biggest known web security risks. It is possible to disable Flash Player under "Advanced settings" in the Edge app, which I would definitely recommend. Unfortunately, the upgrade changes your default browser to Edge, so make sure you change it back to your favorite one, like Chrome or Firefox.

Another major change that should, in theory, greatly improve Windows security is the new Update service. In Windows 10, users can no longer choose which updates to download: everything is installed automatically. Although this greatly reduces the window of opportunity for an attacker to exploit a known vulnerability, an unfortunate side effect is that your computer will sometimes be rebooted automatically while you're away from it. To prevent this, you must choose "Notify to schedule restart" under the advanced update options; this way you'll at least be able to pick a more appropriate time for a reboot. Traffic charges are another potential problem: if you connect to the Internet over a mobile hotspot, updates can quickly eat away your monthly traffic allowance. To prevent this, you should mark that connection as "metered" under "Advanced options" in the network settings.

Windows Defender, the built-in antivirus program already included in earlier Windows versions, has been updated in a similar way: in Windows 10, users can no longer disable it with standard controls. After 15 minutes of inactivity, antivirus protection is re-enabled automatically. Naturally, this greatly improves anti-malware protection for users without a third-party antivirus program, but quite a few users are unhappy with this kind of "totalitarianism", so the Internet is full of recipes for blocking the program completely. Needless to say, this is not recommended for most users; the only proper way of disabling Windows Defender is installing a third-party product that provides better anti-malware protection. The popular site AV-Comparatives maintains a list of security products compatible with Windows 10.

Since most anti-malware products utilize various low-level OS interfaces to operate securely, they are known to be affected the most by the Windows upgrade procedure. Some will be silently uninstalled during the upgrade, others will simply stop working, and sometimes an active antivirus may even block the upgrade process or cause cryptic error messages. It is therefore important to uninstall anti-malware products before the upgrade and reinstall them afterwards (provided, of course, that they are known to be compatible with the new Windows; otherwise, now would be a great time to update or switch your antivirus). This will ensure that the upgrade goes smoothly and doesn't leave your computer unprotected.


Amazon enters another market with their API Gateway

Jul 15, 2015 by Alexei Balaganski

What a surprising coincidence: on the same day we were preparing our Leadership Compass on API Security Management for publication, Amazon announced its own managed service for creating, publishing and securing APIs: Amazon API Gateway. Well, it's already too late to make changes to our Leadership Compass, but the new service is still worth a look, hence this blog post.

Typically for Amazon, the solution is fully managed and based on the AWS cloud infrastructure, meaning there is no need to set up any physical or virtual machines or configure resources. The solution is tightly integrated with many other AWS services and is built directly into the central AWS console, so you can start creating and publishing APIs in minutes. If you already have backend services running on AWS infrastructure, such as EC2 or RDS, you can expose them to the world as APIs with literally a few mouse clicks. Even more compelling is the possibility of using the AWS Lambda service to create completely managed "serverless" APIs without any need to worry about resource allocation or scaling.
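
To illustrate, a "serverless" API of this kind boils down to a single function deployed to Lambda and wired to a gateway endpoint. Below is a minimal sketch of such a handler in TypeScript; the event and response shapes follow the common "proxy integration" convention and are assumptions here, since the exact mapping depends on how the gateway is configured:

```typescript
// Minimal sketch of a Lambda function backing an API Gateway endpoint.
// Field names below assume the usual proxy-style integration; actual
// shapes depend on the configured request/response mappings.
interface ApiGatewayEvent {
  pathParameters?: { [name: string]: string };
}

interface ApiGatewayResponse {
  statusCode: number;
  headers: { [name: string]: string };
  body: string;
}

export async function handler(event: ApiGatewayEvent): Promise<ApiGatewayResponse> {
  const name = event.pathParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```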

In fact, this seems to be the primary focus of the solution. Although it is possible to manage external API endpoints, this is only mentioned in passing in the announcement: the main reason for releasing the service seems to be providing a native API management solution for AWS customers, who until now had to manage their APIs themselves or rely on third-party solutions.

Again typically for Amazon, the delivered solution is a lean, no-frills service without all the fancy features of an enterprise API gateway. On the other hand, since it is based on the existing AWS infrastructure and integrates heavily with other well-known Amazon services, it comes with guaranteed scalability and performance, an extremely low learning curve and, of course, low prices.

For API traffic management, Amazon CloudFront is used, with a special API caching mechanism added for increased performance. This ensures high scalability and availability for the APIs, as well as a reasonable level of network security, such as SSL encryption and DDoS protection. API transformation capabilities, however, are pretty basic: only XML to JSON conversion is supported.

To authorize access to APIs, the service integrates with AWS Identity and Access Management, as well as with Amazon Cognito, providing the same IAM capabilities that are available to other AWS services. Again, the gateway provides basic support for OAuth and OpenID Connect, but lacks the broad support for authentication methods typical of enterprise-grade solutions.

Analytics capabilities are provided by Amazon CloudWatch service, meaning that all API statistics are available in the same console as all other AWS services.

There seems to be no developer portal functionality provided with the service at the moment. Although it is possible to create API keys for third-party developers, there is no self-service for that. In this regard, the service does not seem to be very suitable for public APIs.

To summarize: Amazon API Gateway is definitely not a competitor to existing enterprise API gateways like the products from CA Technologies, Axway or Forum Systems. However, as a native replacement for third-party managed services (3scale, for example), it has a lot of potential, and with Amazon's aggressive pricing policies it may very well threaten their market positions.

Currently, Amazon API Gateway is available in selected AWS regions, so it's possible to start testing it today. According to the first reports from developers, there are still some kinks to iron out before the service becomes truly usable, but I'm pretty sure it will quickly become popular among existing AWS customers and may even be a deciding factor for companies to finally move their backend services to the cloud (the Amazon cloud, of course).


Why Cybersecurity and Politics Just Don’t Mix Well

Jun 12, 2015 by Alexei Balaganski

With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.

Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of macabre and comedy is so potent that it will make every security expert cringe. Let's just have a look at a few of the most recent examples.

After the notorious hack of Sony Pictures Entertainment last November, supposedly carried out by a group of hackers demanding that the studio not release a comedy movie about a plot to assassinate Kim Jong-un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was strongly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.

Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known case of a cyber-attack more or less reliably attributed to a state agency so far is Stuxnet, which after several years of research was found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an insider job with no relation to North Korea at all. Fortunately, cooler heads in the US military have prevailed, but the thought that next time such an attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…

Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity "solutions" the US and UK governments came up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding governments' capabilities for spying on their own citizens.

Unfortunately, just like any other technology plan devised by politicians, it will not only fail to work, but will have disastrous consequences for society as a whole: ruining people's privacy, making every company's IT infrastructure more vulnerable to hacking attacks (which can exploit the same government-mandated backdoors), blocking a significant part of academic research, not to mention completely destroying businesses like security software vendors and cloud service providers. Sadly, even in Germany, a country where privacy is considered an almost sacred right, the government is engaged in similar activities.

Speaking of Germany, the latest, somewhat more lighthearted example of politicians' inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany's Federal Office for Information Security came up with a great idea. Citing concerns about mysterious Russian hackers still lurking in the network, it was announced that the existing infrastructure, including over 20,000 computers, has to be completely replaced. Leaving aside the obvious question (are the same people that designed the old network really able to come up with a more secure one this time?), one still cannot help but wonder whether the millions needed for such an upgrade could be better spent elsewhere. In fact, my first thought after reading the news was of President Erdogan's new palace in Turkey: apparently, he just had to move into a new 1,150-room presidential palace simply because the old one was infested with cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.

Still, any security expert cannot help but ask more specific questions. Was there an adequate incident and breach response strategy in place? Was there a training program for user security awareness? Were the most modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget to hire security experts with adequate qualifications for running such a critical environment?

Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer to most of these questions would be "no". German government agencies are also known for being quite frugal with regard to IT salaries, so the best experts inevitably go elsewhere.

Another question I cannot help thinking about: what if the hackers utilized one of the zero-day vulnerability exploits that the German intelligence agency BND is known to have purchased for its own covert operations? That would be a perfect example of "karmic justice".

Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings of past events, can also provide a good introduction to relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!


Life Management Platforms: Players, Technologies, Standards

Jun 09, 2015 by Alexei Balaganski

When KuppingerCole outlined the concept of Life Management Platforms several years ago, the prospect of numerous completely new business models based on user-centric management of personal data may have seemed a bit too far-fetched to some. Although the very idea of customers being in control of their digital lives had been actively promoted for years through the efforts of ProjectVRM, and although even back then the public demand for privacy was already strong, interest in the topic was still largely academic.

Quite a lot has changed since then. The explosive growth of mobile devices and cloud services has significantly altered the way businesses communicate with their partners and customers. Edward Snowden's revelations have made a profound impression on the perceived importance of privacy. User empowerment is finally no longer an academic concept. The European Identity and Cloud Conference 2015 featured a whole track devoted to user-managed identity and access, which provided an overview of recent developments as well as of the notable players in this field.

The Qiy Foundation, one of the veteran players (in 2012, we recognized them as the first real implementation of the LMP concept), presented their newest developments and business partnerships. They were joined by Meeco, a new project centered around social channels and IoT devices, which won this year's European Identity and Cloud Award.

Industry giants such as Microsoft and IBM have presented their latest research in the field of user-managed identity as well. Both companies are doing extensive research on technologies implementing the minimal disclosure principle fundamental to the Life Management Platform concept. Both Microsoft's U-Prove and IBM's Identity Mixer projects aim to give users cryptographically certified, yet open and easy-to-use means of disclosing their personal information to online service providers in a controlled and privacy-enhancing manner. Both implement a superset of traditional Public Key Infrastructure functionality, but instead of having a single cryptographic public key, users can have an independent pseudonymized key for each transaction, which makes tracking impossible, yet still allows verification of any subset of personal information the user may choose to share with a service provider.
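
To make the minimal disclosure principle more concrete, here is a purely conceptual sketch in TypeScript (emphatically not the actual U-Prove or Identity Mixer API) of a credential presentation that reveals only selected attributes while everything else stays hidden:

```typescript
// Conceptual illustration of minimal disclosure, NOT a real U-Prove
// or Identity Mixer interface: the holder of a certified credential
// presents only the attributes a service actually needs.
interface Credential {
  attributes: { [name: string]: string };
  issuerSignature: string; // covers all attributes
}

interface Presentation {
  disclosed: { [name: string]: string };
  proof: string; // a zero-knowledge proof in a real system
}

function present(cred: Credential, reveal: string[]): Presentation {
  const disclosed: { [name: string]: string } = {};
  for (const name of reveal) {
    disclosed[name] = cred.attributes[name];
  }
  // A real implementation derives a per-transaction pseudonymous proof
  // that the hidden attributes are still covered by the issuer's signature.
  return { disclosed, proof: "<zero-knowledge-proof-placeholder>" };
}

// Example: prove being of legal age without revealing name or birth date.
const idCard: Credential = {
  attributes: { name: "Alice", birthDate: "1990-01-01", ageOver18: "true" },
  issuerSignature: "<issuer-signature>",
};
console.log(present(idCard, ["ageOver18"]));
```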

The Qiy Foundation, having the advantage of a very early start, already provides its own design and reference implementation of the whole stack of protocols and legal frameworks for an entire LMP ecosystem. Its biggest problem, and in fact the biggest obstacle to all future development in this area, is the lack of interoperability with other projects. However, as the LMP track and workshop at EIC 2015 have shown, all parties working in this area are clearly aware of this challenge and are closely following each other's developments.

In this regard, the role of the Kantara Initiative cannot be overestimated. Not only has this organization been developing UMA, an important protocol for user-centric access management, privacy and consent, it also runs the Trust Framework Provider program, which ensures that various trust frameworks around the world are aligned with government regulations and with each other. Still, looking at the success of the FIDO Alliance in the field of strong authentication, we cannot but hope to see in the near future some kind of body uniting the major players in the LMP field, driven by a shared vision and the understanding that interoperability is the most critical factor for future development.

This article originally appeared in the KuppingerCole Analysts' View newsletter.


The New Meaning of “Hacking your TV”

Apr 13, 2015 by Alexei Balaganski

After a long list of high-profile security breaches that culminated in the widely publicized Sony Pictures Entertainment hack last November, everyone has gradually become used to this type of news. If anything, it only confirms what security experts have known for years: the struggle between hackers and corporate security teams is fundamentally asymmetrical. Regardless of its size and budget, no company is safe from such attacks, simply because a security team has to cover all possible attack vectors, while a hacker needs just a single overlooked one.

Another important factor is the IT industry's ongoing trend of rapidly growing interconnectivity and the gradual erosion of network perimeters caused by the adoption of cloud and mobile services, with trends such as "Industry 4.0" (connected manufacturing) and the IoT with its billions of connected devices adding to this erosion. All this makes protecting sensitive corporate data increasingly difficult, which is why the focus of information security is now shifting from protecting the perimeter towards real-time security intelligence and early detection of insider threats within corporate networks. Firewalls still play a useful role in enterprise security infrastructures, but, to put it bluntly, the perimeter is dead.

With that in mind, the news of last Wednesday's hack of the French television network TV5Monde looks even more remarkable. Apparently, not only were the network's website and social media accounts taken over by hackers calling themselves "Cybercaliphate" and claiming allegiance to the Islamic State, they also managed to disrupt the TV broadcasting equipment for several hours. Political implications of the hack aside, the first thing in the article linked above that attracted my attention was the statement of the network's director Yves Bigot: "At the moment, we're trying to analyse what happened: how this very powerful cyber-attack could happen when we have extremely powerful and certified firewalls."

Now, we all know that analyzing and attributing a cyber-attack is a very difficult and time-consuming process, so it's still too early to judge whether the attack was indeed carried out by a group of uneducated jihadists from a war-torn Middle Eastern region or was the job of a hired professional team. One thing that's immediately clear, though, is that it has nothing to do with firewalls. The technical details of the attack are still quite sparse, but according to this French-language publication, the hackers utilized a piece of malware written in Visual Basic to carry out their attack. In fact, it's a variation of known malware that is detected by many antivirus products, and its most probable delivery vectors were an unpatched Java vulnerability or even an infected email message. Surely, the hackers probably needed quite a long time to prepare their attack, but they are obviously not highly skilled technical specialists and were not even good enough at hiding their tracks.

In fact, it would be completely safe to say that the only people to blame for the catastrophic results of the hack are TV5Monde's own employees. After deploying their "extremely powerful firewalls", they seemingly didn't pay much attention to protecting their networks from insider threats. According to this report, they went so far as to put sticky notes with passwords on walls and even expose them on live TV!

We can also assume with some confidence that their other security practices were equally lax. For example, the fact that all their social media accounts were compromised simultaneously probably indicates that the same credentials were used for all of them (or at least that the segregation of duties principle isn't part of their security strategy). And, of course, the complete disruption of their TV service is a clear indication that the broadcasting infrastructure simply wasn't properly isolated from the corporate network.

We will, of course, be waiting for additional details and new developments to be published, but it is already clear that the Sony hack wasn't as educational for TV5Monde as security experts had probably hoped. Well, some people just need to learn from their own mistakes. You, however, don't have to.

The first thing every organization's security team has to realize is that the days of perimeter security are over. The number of possible attack vectors on corporate infrastructure and data has increased dramatically, and the most critical ones (like compromised privileged accounts) actually operate from within the network. Combined with much stricter compliance regulations, this means that not having a solid information security strategy can have dramatic financial and legal consequences.

For a quick overview of the top 10 security mistakes with potentially grave consequences, I recommend having a look at the appropriately titled KuppingerCole Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid, published just a few days ago. And of course, you'll find much more information on our website in the form of research documents, blog posts and webinar recordings.




De-Mail: Now with End-to-end Encryption?

Mar 10, 2015 by Alexei Balaganski

In case you don't know (and unless you live in Germany, you most probably don't), De-Mail is an electronic communications service maintained by several German providers in accordance with the German e-government initiative and the De-Mail law, which declares it a secure form of communication. The purpose of the service is to complement traditional postal mail for the exchange of legal documents between citizens, businesses and government organizations.

Ever since its original introduction in 2012, De-Mail has been struggling to gain acceptance among the German public. According to the latest report, only around 1 million private citizens have registered for the service, which is far below the original plans and not nearly enough to reach "critical mass". That is actually quite understandable, since for a private person the service doesn't offer much over postal mail (in fact, it even makes certain things, such as legally declining to receive a letter, no longer possible). Major points of criticism include incompatibility with regular e-mail and other legal electronic communications services, privacy concerns regarding the personal information collected during the identification process, and an insufficient level of security.

Now the German government is attempting once more to address the latter problem by introducing end-to-end encryption. The plan is to rely on the OpenPGP standard, which all cooperating providers (Deutsche Telekom, Mentana-Claimsoft and United Internet, known for its consumer brands GMX and Web.de) will introduce in May. According to Thomas de Maizière, Germany's Federal Minister of the Interior, adding PGP support will provide an easy and user-friendly way of increasing the security of the De-Mail service. The reaction from security experts and the public, however, wasn't particularly enthusiastic.

Apparently, to enable this new functionality, users will have to install a browser plugin. The solution is based on an open-source JavaScript OpenPGP implementation and is currently available only for the Chrome and Firefox browsers. According to publicly available statistics, this leaves over 60% of all German internet users out of luck, since their browsers are not supported. An even bigger problem is the lack of support for mobile apps and desktop mail clients.

Unfortunately, no integration of the plugin with the De-Mail user directory is offered, which means that users are left to tackle the biggest challenge of any end-to-end encryption solution, secure and convenient key exchange, completely on their own. In this regard, De-Mail looks no better than any other conventional email service, since PGP encryption is already supported by many mail applications in a completely provider-agnostic manner.
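
For comparison, this is roughly what provider-agnostic PGP encryption looks like with OpenPGP.js, an open-source JavaScript implementation of the kind mentioned above. A minimal sketch in TypeScript; the exact function names vary between library versions, and this follows a recent interface:

```typescript
import * as openpgp from "openpgp";

// Minimal sketch of provider-agnostic PGP encryption with OpenPGP.js.
// The recipient's armored public key must be obtained and verified
// out of band: exactly the key-exchange problem discussed above.
async function encryptNote(recipientArmoredKey: string, text: string): Promise<string> {
  const publicKey = await openpgp.readKey({ armoredKey: recipientArmoredKey });
  const encrypted = await openpgp.encrypt({
    message: await openpgp.createMessage({ text }),
    encryptionKeys: publicKey,
  });
  // ASCII-armored ciphertext, sendable through any mail provider.
  return encrypted as string;
}
```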

Another issue is the supposed ease of use of the new encryption solution. In fact, De-Mail has already been offering encryption based on S/MIME, but it couldn't get enough traction because "it was too complicated". Considering the effort required for a secure PGP key exchange, however, the new solution can hardly be considered an easier alternative.

Finally, there is a fundamental question with many possible legal consequences: how does one combine end-to-end encryption with the requirement for a third party (the state) to be able to verify a message's legitimacy? In fact, the very same de Maizière is known for opposing encryption and advocating the necessity for intelligence agencies to monitor all communications.

In any case, De-Mail is here to stay, at least as long as it is actively supported by the government. However, I have serious doubts that attempts like this will have any noticeable impact on its popularity. Legal issues aside, the only proper way of implementing end-to-end communications security is not to slap another layer on top of the aging e-mail infrastructure, but to implement new protocols designed with security in mind from the very beginning. And the most reasonable way to do that is not to reinvent the wheel on your own, but to look at existing developments such as the Dark Mail Technical Alliance. What the industry needs is a cooperatively developed standard for encrypted communications, similar to what the FIDO Alliance has managed to achieve for strong authentication.

Reconciling conflicting views on encryption within the government would also help a lot. Pushing for NSA-like mass surveillance of all internet communications and advocating the use of backdoors and exploits, by the same people who now promise increased security and privacy of government services, isn't going to convince either security experts or the general public.


Internet of Opportunities

Mar 03, 2015 by Alexei Balaganski

For a topic so ubiquitous, so potentially disruptive and so overhyped in the media over the last couple of years, the concept of the Internet of Things (IoT) is surprisingly difficult to describe. Although the term itself appeared in the media nearly a decade ago, there is still no universally agreed definition of what the IoT actually is. This, by the way, is a trait it shares with its older cousin, the Cloud.

At the most basic level, however, it should be possible to define the IoT as a network of physical objects ("things") capable of interacting and exchanging information with each other as well as with their owners, operators or other people. The specifics of these communications vary between definitions, but it's commonly agreed that any embedded smart device that communicates over the existing Internet infrastructure can be considered a "thing". This includes both consumer products, such as smart medical devices, home automation systems or wearables, and enterprise devices ranging from simple RFID tags to complex industrial process monitoring systems. However, general-purpose computers, mobile phones and tablets are traditionally excluded, although they, of course, are used to monitor or control other "things".

Looking at this definition, one may ask what exactly is new and revolutionary about the IoT. After all, industrial control systems have existed for decades, healthcare institutions have been using smart implanted devices like pacemakers and insulin pumps for years, and even smart household appliances are nothing new. This is true: the individual technologies that make the IoT possible have existed for several decades, and even the concept of "ubiquitous internet" dates back to 1999. However, it's the relatively recent combination of technology, business and media influences that has finally made the IoT one of the hottest conversation topics.

First, continuously decreasing technology costs and growing Internet penetration have made connected devices very popular. Adding an embedded networking module to any device is cheap, yet it can potentially unlock completely new ways of interacting with other devices, creating new business value for manufacturers. Second, the massive proliferation of mobile devices encourages people to look for new ways of using them to monitor and control various aspects of their life and work. As for enterprises, the proverbial Computing Troika is forcing them to evolve beyond their perimeter, to become more agile and connected, and IT is responding by creating new technologies and standards (such as big data analytics, identity federation or even cloud computing) to support these new interactions.

It is its scale and interoperability that fundamentally differentiate the Internet of Things from existing isolated networks of various embedded devices. And this scale is truly massive: extrapolating the new fashion of making each and every device connected, it is estimated that by 2020 the number of "things" in the world will surpass 200 billion and the IoT market will be worth nearly $9 trillion. Although the industry faces a lot of potential obstacles on the way to that market, including a lack of standards, massive security and privacy implications, and the need to develop a mature application ecosystem, the business opportunities are simply too lucrative to pass up.

Practically every industry is potentially impacted by the IoT revolution, including automotive, healthcare, manufacturing, energy and utilities, transportation, finance, retail and others. Numerous use cases demonstrate that adopting IoT as a part of business processes can generate immediate business value by improving process optimization, providing better intelligence and more efficient planning, enabling real-time reaction to various needs and opportunities, and improving customer service.

In addition to various improvements to business processes, the IoT enables a huge number of completely new consumer services, from life-changing ones to trivial but "nice to have" ones. One doesn't need to explain how a doctor's ability to monitor a patient's vital signs can reduce mortality and improve quality of life, or how a connected vehicle improves road safety. IoT benefits don't end there, and it's up to manufacturers to introduce completely new kinds of smart devices and persuade consumers that these devices will make their lives fundamentally better (this has already worked well for wearable devices, for example).

Of course, the IoT market doesn't just include manufacturers of "things" themselves. Supporting and orchestrating such a huge global infrastructure introduces quite a lot of technological challenges. Obviously, manufacturers of networking hardware will play a major role, and it's no wonder that companies like Intel and Cisco are among the major IoT proponents. However, being able to address other challenges, like providing global-scale identity services for billions of transactions per minute, can open up huge business opportunities, and vendors are already moving in to grab attractive positions in this market. Another example of a technology expected to get a substantial boost from the IoT is big data analytics, because the IoT is all about collecting large amounts of sensor information, which then needs to be organized and used to make decisions.

Interestingly enough, most current large-scale IoT deployments seem to be driven not by enterprises, but by government-backed projects. The concept of the "smart city", where networks of sensors continuously monitor environmental conditions, manage public transportation and so on, has attracted interest in many countries around the world. Such systems naturally integrate with existing eGovernment solutions; they also enable new business opportunities for various merchants and service companies that can plug directly into the global city network.

In any case, whether you represent a hardware vendor, a manufacturing, service or IT company, there is one thing about the Internet of Things you cannot afford to do: ignore it. The revolution is coming, and although we still have to solve many challenges and address many new risks, the future is full of opportunities.

This article originally appeared in the KuppingerCole Analysts' View newsletter.

