Two weeks have passed since the Heartbleed bug was revealed to the world, and people are still assessing the true scale of the disaster. We’ve learned quite a lot during these two weeks:

  • After Cloudflare initially expressed doubt that the bug could really leak SSL private keys, they were quickly proven wrong by security researchers. Unfortunately, this means there is no way to avoid revoking and reissuing all potentially exposed SSL certificates;
  • A week ago, Bloomberg reported that the NSA may have known about the vulnerability for years and used it to gather critical intelligence. The agency, of course, quickly denied any prior knowledge of the bug. Yet many researchers have pointed out that the NSA’s long-standing practice of storing traffic it cannot decrypt for later analysis would theoretically allow it to steal a web site’s private key after April 7th and use that key to crack the old data in its archives. Unfortunately, only a small fraction of web servers currently use the “forward secrecy” technique that would render this type of attack impossible: with forward secrecy, each session key is derived from an ephemeral key exchange, so even a stolen long-term private key cannot decrypt previously recorded traffic (a minimal configuration sketch follows this list);
  • Thanks to a well-organized awareness campaign (giving the bug a catchy name and a logo was a brilliant idea that drew the attention of people outside of IT to the problem), most major web services acted quickly and patched the vulnerability. Not all of them were agile enough, however: for example, the web site of Russian Railways remained unpatched for a week after the bug’s disclosure, which reportedly allowed hackers to steal over 200,000 credit card numbers.
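For those wondering what enabling forward secrecy actually involves, here is a minimal sketch in C against the OpenSSL 1.0.x API. It is an illustration under stated assumptions, not a production setup: the cipher string is one reasonable choice among many, and the certificate and key file names are placeholders. The essential step is restricting the server to ephemeral (ECDHE) key exchanges.

```c
#include <openssl/ssl.h>
#include <openssl/ec.h>
#include <openssl/obj_mac.h>

/* Minimal sketch (OpenSSL 1.0.x API): build a server context that only
 * negotiates ephemeral ECDHE key exchanges, so session keys can never be
 * recovered from the server's long-term private key.  The cipher string
 * and the "server.crt"/"server.key" file names are illustrative choices. */
SSL_CTX *create_forward_secret_ctx(void)
{
    SSL_CTX *ctx;
    EC_KEY *ecdh;

    SSL_library_init();

    ctx = SSL_CTX_new(SSLv23_server_method());
    if (ctx == NULL)
        return NULL;

    /* Allow only ephemeral-ECDH suites; reject unauthenticated (aNULL)
     * and unencrypted (eNULL) ones. */
    if (SSL_CTX_set_cipher_list(ctx, "EECDH:!aNULL:!eNULL") != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }

    /* Supply ephemeral ECDH parameters (P-256) and prefer the server's
     * cipher order over the client's. */
    ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    if (ecdh != NULL) {
        SSL_CTX_set_tmp_ecdh(ctx, ecdh);
        EC_KEY_free(ecdh);
    }
    SSL_CTX_set_options(ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);

    if (SSL_CTX_use_certificate_file(ctx, "server.crt", SSL_FILETYPE_PEM) != 1 ||
        SSL_CTX_use_PrivateKey_file(ctx, "server.key", SSL_FILETYPE_PEM) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}
```

The same effect is usually achieved with a line or two in the web server configuration (e.g. nginx’s ssl_ciphers directive), which makes the low adoption rate all the more disappointing.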
It will probably take more time for all the ramifications of the Heartbleed incident to become clear, but I believe I can already state the most important lesson I have learned from it:

The Internet is unfortunately not as safe and reliable as many people, even among IT experts, tend to believe, and only a joint effort can fix it.

Sure, we’ve known about data leaks, malware attacks, phishing, etc. for years. However, there is a fundamental difference between being hacked because we ignored security best practices and being hacked because our security tools themselves are flawed. It is one thing to forget to lock your door before leaving; it is quite another to lock it every time and one day discover that the lock can be opened with a fingernail. An added insult in this particular case is that people still running outdated OpenSSL versions (older than 1.0.1, which introduced the vulnerable heartbeat extension) were not affected by the bug at all.

As I wrote in my previous post, the Heartbleed bug has exposed a major flaw in the claim that Open Source software is inherently more secure because anyone can inspect its source code and find vulnerabilities. This claim does not come just from hardcore OSS evangelists; for example, BSI, Germany’s Federal Office for Information Security, is known for promoting Open Source software as a solution to security problems.

Although I believe this claim is still valid in theory, in practice hardly anyone performs a security audit out of sheer curiosity. Even the project developers themselves, often understaffed and underfunded, cannot be blindly expected to maintain a sufficiently high security standard. It is obvious that a major intervention is needed to improve the situation: both financial support from the corporations that use Open Source software in their products, and stricter government regulation.

In fact, the Heartbleed incident may have acted as a catalyst: Germany’s SPD party is currently pushing for government support for Open Source security, above all in the form of formal security audits carried out by the BSI and funded by the government. TrueCrypt, another widely used piece of Open Source encryption software, is currently undergoing a public audit financed by crowdfunding. I can only hope that corporations will follow suit.

For software developers (both commercial and OSS), an important lesson should be not to rely blindly on third-party libraries, but to treat them as part of their critical infrastructure, just like network links and the electrical grid. Security analysis and patch management must become a critical part of the development strategy of every software, and especially hardware, vendor; knowing exactly which library versions you ship, and whether they are affected by a published vulnerability, is the minimal first step (see the sketch below). And of course, no single security approach is ever going to be reliable enough – you should always aim for layered solutions combining different approaches.
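As a small illustration of that first step, here is a hypothetical check in C (my own sketch, not a standard API for vulnerability detection): it compares the OpenSSL version the program was compiled against with the one actually loaded at run time and flags the known Heartbleed-vulnerable range, 1.0.1 through 1.0.1f. Note that distributions sometimes backport fixes without changing the version number, so a check like this is only a heuristic.

```c
#include <stdio.h>
#include <openssl/opensslv.h>  /* OPENSSL_VERSION_NUMBER: compile-time version */
#include <openssl/crypto.h>    /* SSLeay(), SSLeay_version(): run-time version */

/* Heartbleed affected OpenSSL 1.0.1 up to and including 1.0.1f.
 * OpenSSL encodes versions as 0xMNNFFPPS (major, minor, fix, patch, status). */
#define HB_FIRST_VULNERABLE 0x1000100fL /* 1.0.1  release */
#define HB_LAST_VULNERABLE  0x1000106fL /* 1.0.1f release */

int main(void)
{
    long compiled = OPENSSL_VERSION_NUMBER; /* headers we were built against */
    long runtime  = SSLeay();               /* library actually loaded */

    printf("compiled against: %s\n", OPENSSL_VERSION_TEXT);
    printf("running with:     %s\n", SSLeay_version(SSLEAY_VERSION));

    if (compiled != runtime)
        printf("note: compile-time and run-time versions differ\n");

    if (runtime >= HB_FIRST_VULNERABLE && runtime <= HB_LAST_VULNERABLE)
        printf("WARNING: run-time OpenSSL is in the Heartbleed-vulnerable range\n");
    else
        printf("run-time OpenSSL is outside the known vulnerable range\n");

    return 0;
}
```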

One of those approaches, unfortunately neglected by many software vendors, is hardening applications using static code analysis. Antimalware tools, firewalls, and other network security tools are certainly an important part of any security strategy, but one has to understand that all of them are inherently reactive. The only truly proactive approach to application security is making the applications themselves more reliable – and Heartbleed belongs to exactly the class of defects such analysis is designed to catch, as the snippet below illustrates.
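The snippet below is my own simplified reconstruction of the bug pattern, not the actual OpenSSL code: a memory copy whose length is taken directly from an attacker-controlled message and never validated against the data that actually arrived. Taint-tracking static analyzers look for precisely this shape – untrusted input flowing into the length argument of memcpy without a dominating bounds check.

```c
#include <string.h>
#include <stddef.h>

/* Simplified reconstruction of the Heartbleed pattern (NOT the actual
 * OpenSSL code). 'payload_len' is the length the peer *claims* its
 * heartbeat payload has; 'record_len' is how much data really arrived. */

/* Vulnerable shape: the claimed length is trusted blindly, so memcpy can
 * read up to ~64 KB beyond the real payload, leaking whatever happens to
 * sit in adjacent heap memory (keys, passwords, session cookies...). */
size_t build_heartbeat_response_vulnerable(unsigned char *out,
                                           const unsigned char *payload,
                                           size_t payload_len)
{
    memcpy(out, payload, payload_len); /* length never validated */
    return payload_len;
}

/* Fixed shape: silently discard the message if the claimed payload length
 * does not fit inside the record that was actually received – essentially
 * what the official patch does. */
size_t build_heartbeat_response_fixed(unsigned char *out,
                                      const unsigned char *payload,
                                      size_t payload_len,
                                      size_t record_len)
{
    if (payload_len > record_len)
        return 0;
    memcpy(out, payload, payload_len);
    return payload_len;
}
```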

Perhaps unsurprisingly, code analysis tools, such as the solution from Checkmarx or the more specialized tools my colleague reviewed earlier, don’t get much hype in the media, but there have been amazing advances in this area since the years when I was personally involved in large software development projects. Perhaps they deserve a separate blog post.