The attacks on SSL certificate authorities such as DigiNotar or GlobalSign threaten significant aspects of SSL-based security on the Internet. They also demonstrate yet again that security concepts should be multi-layered and never have a “single point of failure”.

In late August it emerged that Dutch SSL certificate authority DigiNotar, a subsidiary of the VASCO Group, had been the subject of a successful attack in which an attacker, presumably from Iran, hacked into DigiNotar’s certificate authority (CA). Claims have meanwhile surfaced that the CA was insufficiently secured.

Admittedly, once an attack has succeeded, the existence of a vulnerability is obvious in hindsight. The real question, however, is whether it is even possible to secure a CA sufficiently. The immediate problem is that certificates issued by a compromised CA can no longer be considered trustworthy, as forged certificates may be among them.

The usual verification process, which uses either CRLs (Certificate Revocation Lists), i.e. lists of invalid certificates, or sometimes OCSP (Online Certificate Status Protocol), is fairly weak: CRLs are not always up to date, and OCSP is often not used.

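The weakness of stale CRLs can be made concrete with a small sketch. The following toy model (illustrative only, not a real X.509 or PKI implementation; all names are assumptions) shows how a certificate revoked by the CA still passes a check against a revocation list cached before the revocation happened:

```python
from dataclasses import dataclass, field

# Toy model: a relying party checks a certificate's serial number
# against a locally cached CRL that may lag behind the CA's real state.

@dataclass
class ToyCRL:
    issued_at: int                          # time the CRL snapshot was published
    revoked_serials: set = field(default_factory=set)

def is_trusted(serial: str, crl: ToyCRL) -> bool:
    """Accept the certificate unless the cached CRL lists it as revoked."""
    return serial not in crl.revoked_serials

# The CA revokes serial "4711" at t=100 ...
ca_revocations = {"4711"}

# ... but the client still holds a CRL snapshot from t=50,
# taken before the revocation took place.
stale_crl = ToyCRL(issued_at=50, revoked_serials=set())
fresh_crl = ToyCRL(issued_at=100, revoked_serials=set(ca_revocations))

print(is_trusted("4711", stale_crl))   # True: the revoked cert is still accepted
print(is_trusted("4711", fresh_crl))   # False: only the fresh list catches it
```

The gap between `stale_crl` and `fresh_crl` is exactly the window in which a forged DigiNotar-style certificate would have been accepted by clients that rely on cached revocation lists.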
The first consequence of this attack is that Microsoft, Apple and various Linux distributions have removed DigiNotar from their lists of trustworthy root certificate authorities. This means that certificates issued by DigiNotar are no longer accepted, and will – assuming appropriate system and browser settings – trigger warning messages.
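Conceptually, removing a CA from the trust list means that any chain ending in that root fails validation. A minimal sketch (illustrative only; certificate names and the list-of-names chain representation are assumptions, not real X.509 handling):

```python
# Toy illustration: a client walks a certificate chain from the leaf
# up to its root and accepts it only if that root is still present
# in the local trust store. DigiNotar's root has been removed.

TRUSTED_ROOTS = {"VeriSign Root", "GlobalSign Root"}

def chain_trusted(chain: list) -> bool:
    """chain is ordered leaf-first; the last entry is the root CA."""
    return chain[-1] in TRUSTED_ROOTS

print(chain_trusted(["www.example.nl", "DigiNotar CA", "DigiNotar Root"]))        # False
print(chain_trusted(["www.example.com", "Some Intermediate", "GlobalSign Root"])) # True
```

In a real browser or operating system the check also covers signatures, validity periods and revocation, but the trust-anchor decision works on the same principle: no trusted root, no trusted certificate.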

Because the websites of, for example, the Dutch government are secured via DigiNotar, this of course creates a number of problems, especially in the Netherlands. The resulting financial loss is almost impossible to estimate when all the follow-up costs of switching to other certificates, support for access problems, and the like are included.

But there is nothing surprising about the DigiNotar case. It parallels the attack on RSA that recently caused a furore, in which attackers stole sensitive, secret data used for authentication via RSA SecurID tokens.

In both cases, attackers were able to overcome the central element of the security concept – in one case the CA, in the other the central storage of security information at RSA. In both cases there was a “single point of failure”; once this has been breached, security is non-existent.

This means that we have to fundamentally reconsider what security on the Internet might look like in future. The concept of CAs has long since reached its limits – and now that fact has become widely apparent. In the case of RSA, the best response is “versatile authentication”.

This approach uses and combines various authentication solutions with a wide range of different mechanisms. The authentication method varies depending on security requirements, user groups and other circumstances, removing the dependency on a particular mechanism.
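The idea can be sketched as a simple policy table. The following is a hypothetical illustration (the method names, risk levels and the `is_authenticated` helper are all assumptions, not any specific product's API): the acceptable factor combinations vary with the risk level, so no single mechanism is indispensable.

```python
# "Versatile authentication" as a toy policy: per risk level, a list
# of factor combinations, any one of which is sufficient. If one
# mechanism is compromised (e.g. OTP seeds leak), policy can drop it
# without removing authentication altogether.

RISK_TO_METHODS = {
    "low":    [{"password"}, {"otp_app"}],
    "medium": [{"password", "otp_app"}, {"smartcard"}],
    "high":   [{"smartcard", "otp_app"}, {"password", "otp_app", "sms_code"}],
}

def is_authenticated(risk: str, presented: set) -> bool:
    """True if the presented factors cover at least one acceptable
    combination for the given risk level."""
    return any(combo <= presented for combo in RISK_TO_METHODS[risk])

print(is_authenticated("low", {"password"}))               # True
print(is_authenticated("high", {"password"}))              # False
print(is_authenticated("high", {"smartcard", "otp_app"}))  # True
```

The design point is the indirection: applications ask "is this user sufficiently authenticated for this risk level?" rather than "did the SecurID check pass?", so a compromised mechanism can be swapped out centrally.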

However, security must also be designed so that it no longer depends on a single entity. What form such a solution could take on the Internet remains an open question. Within organisations, it means that security concepts must be multi-layered.

Most companies have long recognised that in addition to increasingly porous firewalls, further security levels are required to protect internal IT. These multi-layered models must be rigorously designed and implemented. The aim is explicitly not to use as many technologies as possible, but to specifically implement several layers of security.

Options include firewalls; system, database and application security; endpoint security solutions (as part of an overall concept, not as an isolated solution); and Information Rights Management (IRM). The main objective is to ensure that a successful attack on a single system is not sufficient to compromise the entire security set-up. A “single point of failure” is even less affordable in security than it is elsewhere in IT.
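Defence in depth can be sketched in a few lines. This toy model (the layer checks, user names and file names are all illustrative assumptions) makes the point that access requires every independent layer to agree, so breaching one layer alone yields nothing:

```python
# Toy defence-in-depth sketch: a read request must pass the network
# layer, the application's authentication layer AND the IRM layer.
# Each layer enforces its own policy independently.

def firewall_ok(src_ip: str) -> bool:
    return src_ip.startswith("10.")            # only internal addresses

def app_auth_ok(user: str) -> bool:
    return user in {"alice", "bob"}            # known, authenticated users

def irm_ok(user: str, doc: str) -> bool:
    return (user, doc) in {("alice", "report.pdf")}   # per-document rights

def may_read(src_ip: str, user: str, doc: str) -> bool:
    return firewall_ok(src_ip) and app_auth_ok(user) and irm_ok(user, doc)

print(may_read("10.0.0.5", "alice", "report.pdf"))    # True: all layers pass
# An attacker who is already inside the network (firewall layer
# effectively breached) still gets nothing:
print(may_read("10.0.0.99", "mallory", "report.pdf")) # False
```

The conjunction in `may_read` is the whole argument in miniature: compromising the firewall check changes nothing about the IRM layer's verdict, which is precisely the property a single CA or a single token-seed store lacks.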