Blog posts by Mike Small

Grizzly Steppe – what every organization needs to do

On December 29th, the FBI, together with the Department of Homeland Security (DHS), finally released a Joint Analysis Report on the cyber-attacks on the US Democratic Party during the US presidential election.  Every organization, whether based in the US or not, would do well to read this report and to ensure that it takes account of the recommendations.  Once released into the wild, the tactics, techniques and procedures (TTPs) used by state actors are quickly taken up and become widely used by other adversaries.

This report is not a formal indictment of a crime, as was the case with the 2014 charges filed against Chinese actors for the alleged hacking of US companies.  It is, however, important cyber threat intelligence.

Threat intelligence is a vital part of cyber-defence and cyber-incident response, providing information about the threats, TTPs, and devices that cyber-adversaries employ; the systems and information that they target; and other threat-related information that provides greater situational awareness.  This intelligence needs to be timely, relevant, accurate, specific and actionable.  This report provides such intelligence.

The approaches described in the report are not new.  They involve several phases; some have been observed to start with targeted spear-phishing campaigns that lure victims to a malicious website which installs code.  Once executed, the code delivers Remote Access Tools (RATs) and evades detection using a range of techniques.  The malware connects back to the attackers, who then use the RATs to escalate privileges, enumerate Active Directory accounts, and exfiltrate email through encrypted connections.

Another attack process uses internet domains with names that closely resemble those of targeted organizations to trick potential victims into entering legitimate credentials.  A fake webmail site that collects user credentials when victims log in is a favourite.  In this case, a spear-phishing email tricked recipients into changing their passwords through a fake webmail domain. Using the harvested credentials, the attacker was able to gain access and steal content.
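One simple technical countermeasure that follows from this is to watch for look-alike domains in DNS or proxy logs.  The sketch below is a minimal illustration using only the Python standard library; the domain lists and the similarity threshold are assumptions made for this example, not part of the report.

```python
# Minimal sketch: flag domains that closely resemble an organization's own
# domains, a common sign of credential-harvesting infrastructure.
# The legitimate domains, observed domains and 0.8 threshold are illustrative.
from difflib import SequenceMatcher

LEGITIMATE = ["example.org", "mail.example.org"]           # hypothetical
OBSERVED = ["examp1e.org", "mail-example.org", "cnn.com"]  # e.g. from DNS logs

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a, b).ratio()

for domain in OBSERVED:
    for legit in LEGITIMATE:
        score = similarity(domain, legit)
        if domain != legit and score >= 0.8:
            print(f"Suspicious look-alike: {domain} resembles {legit} ({score:.2f})")
```

In practice such a check would run continuously against newly registered domains and outbound traffic, feeding suspicious matches into the incident response process.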

Sharing threat intelligence is a vital part of cyber defence, and OASIS recently made available three foundational specifications for this purpose.  These are described in Executive View: Emerging Threat Intelligence Standards - 72528.  Indicators of Compromise (IOCs) associated with the cyber-actors are provided using these standards (STIX) as files accompanying the report.
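As a minimal illustration of putting such IOCs to work, the sketch below loads a flat list of indicators and checks them against proxy-log entries.  The file names and the one-indicator-per-row CSV layout are assumptions for illustration; they would need to be adapted to the actual published files.

```python
# Minimal sketch: check log entries against a flat list of indicators of
# compromise (IOCs). File names and the one-indicator-per-row layout are
# assumptions; adapt them to the actual published indicator files.
import csv

def load_indicators(path: str) -> set[str]:
    """Read one indicator (IP address or domain) per CSV row."""
    with open(path, newline="") as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def scan_log(log_path: str, indicators: set[str]) -> list[str]:
    """Return log lines that contain any known indicator."""
    hits = []
    with open(log_path) as f:
        for line in f:
            if any(ioc in line.lower() for ioc in indicators):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    iocs = load_indicators("grizzly_steppe_iocs.csv")   # hypothetical file
    for hit in scan_log("proxy.log", iocs):             # hypothetical log
        print("IOC match:", hit)
```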

There are several well-known areas of vulnerability that are consistently exploited by cyber-attackers.  These are easy to fix but are, unfortunately, still commonly found in many organizations’ IT systems.  Organizations should take immediate steps to detect and remove them from their IT systems.

The majority of these attacks exploit human weaknesses in the first stage.  While technical measures can and should be improved, it is also imperative to provide employees, associates and partners with training on how to recognize and respond to these threats.

The report describes a set of recommended mitigations and best practices.  Organizations should consider these recommendations and take steps to implement them without delay.  KuppingerCole provides extensive research on securing IT systems and on privilege management in particular.

What Value Certification?

In the past weeks, there have been several press releases from CSPs (Cloud Service Providers) announcing new certifications for their services.  In November, BSI (the British Standards Institution) announced that Microsoft Azure had achieved Cloud Security Alliance (CSA) STAR Certification. On December 15th, Amazon Web Services (AWS) announced that it had successfully completed the assessment against the Cloud Computing Compliance Controls Catalogue (C5), the compliance standard of the German Bundesamt für Sicherheit in der Informationstechnik (BSI).

What value do these certifications bring to the customer of these services?

The first value is compliance. A failure by the cloud customer to comply with laws and industry regulations in relation to the way data is stored or processed in the cloud could be very expensive.  Certification that the cloud service complies with a relevant standard provides assurance that data will be processed in a way that is compliant.

The second value is assurance.  The security, compliance and management of the cloud service is shared between the CSP and the customer.  Independent certification provides reassurance that the CSP is operating the service according to the best practices set out in the standard.  This does not mean that there is no risk that something could go wrong – it simply demonstrates that the CSP is implementing the best practices to reduce the likelihood of problems and to mitigate their effects should they occur.

There are different levels of assurance that a CSP can provide – these include:

CSP assertion – the CSP describes the steps it takes.  The value of this level of assurance depends upon the customer’s trust in the CSP.

Contractual assurance – the contract for the service provides specific commitments concerning the details of the service provided.  The value of this commitment is determined by the level of liability specified in the contract where the CSP is in default, as well as the cost and difficulty of enforcing it.

Independent validation – the cloud service has been evaluated by an independent third party that provides a certificate or attestation.  Examples of this include some forms of Service Organization Control (SOC) reports using the standards SSAE 16 or ISAE 3402.  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the validation is performed.

Independent testing – the service provided has been independently tested to demonstrate that it conforms to the claims made by the CSP.  This extends the assessment to include measuring the effectiveness of the controls.  Examples include SOC 2 Type II reports as well as some levels of certification against the Payment Card Industry Data Security Standard (PCI DSS).  The value of this depends upon the match between the scope of the evaluation and the customer’s requirements, as well as how frequently the testing is performed.

The last of these – independent testing – is what customers should be looking for.  However, it is important that the customer asks the following questions:

1) What is the scope of the certification?  Does it cover the whole service delivered or just parts of it – like the data centre?

2) How does the standard compare with the customer’s own internal controls?  Are the controls in the standard stronger or weaker?

3) Is the standard relevant to the customer’s specific use of the cloud service?  Many CSPs now offer an “alphabet soup” of certifications, and many of these only apply to certain geographies or certain industries.

4) How well is your side of cloud use governed?  Security and compliance of the use of cloud services is a shared responsibility.  Make sure that you understand what your organization is responsible for and that you meet these responsibilities.

For more information on this subject see: Executive View: Using Certification for Cloud Provider Selection - 71308 - KuppingerCole

AWS re:Invent 2016 Blog

In the last week of November I attended the AWS re:Invent conference in Las Vegas – an impressive event with around 32,000 attendees. There were a significant number of announcements; many were essentially more of the same, but bigger and better, based on what customers were asking for. It is clear that AWS is going from strength to strength.

AWS announced many faster compute instances with larger amounts of memory, optimized for various specific tasks. This may seem boring, but these announcements were received with rapturous applause from the audience. This is the AWS bread and butter and just what many customers are looking for. The value of these improvements is that a customer can switch their workload onto one of these new instances without the need to specify, order, pay for, and await delivery of new hardware, as they would have had to do for on premise equipment.

Continuing on that theme, James Hamilton, VP & Distinguished Engineer, described the work that AWS does behind the scenes to deliver its services. The majority of AWS traffic runs on a private network (except in China); this improves latency, packet loss and overall quality, avoids capacity conflicts, and gives AWS greater operational control. AWS designs and manages its own network routers, its own custom compute nodes to optimize power versus space, and even its own custom power utility controls to cater for rare power events.

You may think: so what? The reason this matters is that an AWS customer gets all of this included in the service they receive. These are time-consuming processes that the customer would otherwise have to manage for their on premise IT facilities. Furthermore, these processes need specialized skills that are in short supply. In the opening keynote at the conference, AWS CEO Andy Jassy compared AWS with the “legacy software vendors”. He positioned these vendors as locking their customers into long-term, expensive contracts. In comparison, he described how AWS allows flexibility and works to save customers money through price reductions and customer reviews.

However, to get the best out of AWS services, just like most IT technology, you need to exploit proprietary functionality, and once you use proprietary features it becomes more difficult to migrate away from that technology. Mr. Jassy also gave several examples of how customers had been able to migrate around 13,000 proprietary database workloads to the AWS database services. While this shows the care that AWS has put into its database services, it also slightly contradicts the claim that customers are being locked in to proprietary software.

Continuing on the theme of migration: while AWS is still strong among the “born on the cloud” startups and for creating new applications, organizations are increasingly looking to migrate existing workloads. This has not always been straightforward, since any differences between the on premise IT and the AWS environment can make changes necessary. The announcement previously made at VMworld, that a VMware service will be offered on AWS, will be welcomed by many organizations. It will allow the many customers using VMware and the associated vSphere management tools to migrate their workloads to AWS while continuing to manage the hybrid cloud / on premise IT using the tools they already use.

Another problem related to migration is that of transferring data. Organizations wishing to move their workloads to the cloud need to move their data and, for some, this can be a significant problem. The practical bandwidth of communications networks can be the limiting factor, and the use of physical media introduces security problems. In response, AWS has created storage devices that can be used to physically transfer terabytes of data securely. The first of these devices, the “AWS Snowball”, was announced at re:Invent last year and has now been improved and upgraded to the “AWS Snowball Edge”. However, the highlight of the conference was the announcement of the “AWS Snowmobile”. This is a system mounted in a shipping container, carried on a transport truck, that can be used to securely transfer exabytes of data. Here is a photo that I took of one of these being driven into the conference hall.
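To see why physical transfer makes sense, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only; the data volumes and the 1 Gbit/s line rate are assumptions, and it ignores protocol overhead and contention.

```python
# Rough, illustrative calculation of how long a bulk data transfer takes over
# a network link. The volumes and the 1 Gbit/s line rate are assumptions;
# real-world throughput will be lower due to overhead and contention.

def transfer_days(data_bytes: float, line_rate_bits_per_s: float) -> float:
    """Time in days to push data_bytes over a link of line_rate_bits_per_s."""
    seconds = (data_bytes * 8) / line_rate_bits_per_s
    return seconds / 86_400

ONE_GBPS = 1e9
for label, volume in [("100 TB (Snowball Edge scale)", 100e12),
                      ("1 PB", 1e15),
                      ("100 PB (Snowmobile scale)", 100e15)]:
    print(f"{label}: {transfer_days(volume, ONE_GBPS):,.0f} days at 1 Gbit/s")
```

Even at a sustained 1 Gbit/s, a petabyte takes roughly three months to move, which is why trucking the data can be the faster and more secure option.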

So, is this just an eye-catching gimmick? Not so according to the first beta customer.  The customer’s on premise datacenter was bursting at the seams and could no longer support their expanding data-based business.  They wanted to move to the AWS cloud, but it was not practical to transfer the amount of data they had over a network and they needed an alternative secure and reliable method.  AWS Snowmobile provided exactly the answer to this need.

Last but not least, security: at the event AWS announced AWS Shield.  This is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS.  The value of this was illustrated in an interesting talk, SAC327 – “No More Ransomware: How Europol, the Dutch Police, and AWS Are Helping Millions Deal with Cybercrime”.  This talk described a website set up to help victims of ransomware attacks recover their data.  Not surprisingly, this site has come under sustained attack from cyber-criminals. The fact that it has withstood these attacks confirms that AWS can be used to create and securely host applications, and that AWS Shield can add an extra layer of protection.

In conclusion, this event demonstrates that AWS is going from strength to strength.  Its basic value proposition of providing cost effective, flexible and secure IT infrastructure remains strong and continues to be attractive.  AWS is developing services to become more Hybrid Cloud and enterprise friendly while extending its services upwards to include middleware and intelligence in response to customer demand.  

For KuppingerCole’s opinion on cloud services see our research reports Cloud Reports - KuppingerCole

Democratized Security

At the AWS Enterprise Security Summit in London on November 8th, Stephen Schmidt, CISO at AWS, gave a keynote entitled “Democratized Security”.  What is Democratized Security, and does it really exist?

Well, to quote Humpty Dumpty from Lewis Carroll’s Through the Looking-Glass: “When I use a word it means just what I choose it to mean – neither more nor less.”  So, what Mr. Schmidt meant by this phrase may or may not be what other people would understand it to mean.  This is my interpretation.

The word democracy originates in ancient Greece, where it meant rule by the common people.  It described the opposite of rule by an elite.  More recently, the “democratization of technology” has come to mean the process whereby sophisticated technology becomes accessible to more and more people.  In the 1990s, Andrew Feenberg described a theory for democratizing technological design. He argued for what he calls “democratic rationalization”, where participants intervene in the technological design process to shape it toward their own ends.

How does this relate to cloud services?  Cloud services are easily accessible to a wide range of customers, from individual consumers to large organizations.  These services survive and prosper by providing the functionality that their customers value at a price that is driven down by their scale.  Intense competition means that they need to be very responsive to their customers’ demands.  Cloud computing has made extremely powerful IT services available at an incredibly low cost in comparison with the traditional model, where the user had to invest in the infrastructure, the software and the knowledge before they could even start.

What about security? There have been many reports of cyber-attacks, data breaches and legal government data intercepts impacting on some consumer cloud services (not AWS).  The fact that many of these services still survive seems to indicate that individual consumers are not overly concerned.   Organizations however have a different perspective – they do care about security and compliance.  They are subject to a wide range of laws and regulations that define how and where data can be processed with significant penalties for failure.  Providers of cloud services that are aimed at organizations have a very strong incentive to provide the security and compliance that this market demands.

Has the security elite been eliminated?  The global nature of the internet and cyber-crime has made it extremely difficult for the normal guardians – the government and the law – to provide protection.  Even worse, the attempts by governments to use data interception to meet the challenges of global crime and terrorism have made them suspects.  The complexity of the technical challenges around cyber-threats makes it impractical for all but the largest organizations to build and operate their own cyber-defences.  However, the cloud service provider has the necessary scale to afford this.  So, the cloud service providers can be thought of as representing a new security elite – albeit one that is subject to the market demands for the security of their services.

With democracy comes responsibility.  In relation to security this means that the cloud customer must take care of the aspects under their control.  Many, but not all, of the previously mentioned consumer data breaches involved factors under the customers’ control, like weak passwords.  For organizations using cloud services the customer must understand the sensitivity of their data and ensure that it is appropriately processed and protected.  This means taking a good governance approach to assure that the cloud services used meet these requirements.

Cloud services now provide a wide range of individuals and organizations with access to IT technology and services that were previously beyond their reach.  While the main driving force behind cloud services has been their functionality, security and compliance are now at the top of the agenda for organizational customers.  The cloud can be said to be democratizing security because organizations will only choose those services that meet their requirements in this area.  In this world, the cloud service providers have become the security elite through their scale, knowledge and control.  The cloud customer can choose which provider to use based on their trust in this provider to deliver what they need.

For more information see KuppingerCole’s research in this area: Reports - Cloud Security.

Be careful not to DROWN

On March 1st, OpenSSL published security advisory CVE-2016-0800, known as “DROWN”. This is described as a cross-protocol attack on TLS using SSLv2 and is classified as High Severity. The advice given by OpenSSL is:

“We strongly advise against the use of SSLv2 due not only to the issues described below, but to the other known deficiencies in the protocol as described at https://tools.ietf.org/html/rfc6176”.

This vulnerability illustrates how vigilant organizations need to be over the specific versions of software that they use. However, this is easier said than done. Many organizations have a website or application that was built by a third party. The development may have been done some time ago and used what were the then current versions of readily available Open Source components. The developers may or may not have a contract to keep the package they developed up to date.

The application or website may be hosted on premise or externally; wherever it is hosted, the infrastructure upon which it runs also needs to be properly managed and kept up to date. OpenSSL is part of the infrastructure upon which the website runs. While there may be some reasons for continuing to use SSLv2 for compatibility, there is no possible excuse for reusing SSL Private Keys between websites. It just goes against all possible security best practices.
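A first practical step is simply knowing which protocol versions your servers still accept. The sketch below is a minimal illustration using only Python’s standard library; the host name is a placeholder, and because modern Python builds cannot even speak SSLv2 or SSLv3, a dedicated scanner (nmap, testssl.sh or similar) is still needed to confirm DROWN exposure.

```python
# Minimal sketch (not a DROWN checker): probe which TLS protocol versions a
# server will negotiate, using only the Python standard library. The host
# name is a placeholder; SSLv2/SSLv3 cannot be tested this way because the
# ssl module no longer supports them.
import socket
import ssl

HOST = "example.com"   # hypothetical target; replace with your own host
PORT = 443

def negotiates(version: ssl.TLSVersion) -> bool:
    """Return True if the server completes a handshake at exactly `version`."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                return True
    except (ssl.SSLError, OSError):
        return False

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
          ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    print(f"{v.name}: {'accepted' if negotiates(v) else 'rejected'}")
```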

It may be difficult to believe but I have heard auditors report that when they ask “what does that server do?” they get the response “I don’t know – it’s always been here and we never touch it”. The same can be true of VMs in the cloud which get created, used and then forgotten (except by the cloud provider who keeps on charging for them).

So, as vulnerabilities are discovered, there may be no process to take action to remediate the operational package. The cyber criminals just love this. They can set up automated processes to scan externally for unpatched known vulnerabilities and exploit the results at their leisure.

There are two basic lessons from this:

  1. Most code contains exploitable errors, and its evolution generally leads to a deterioration in quality over time unless there are very stringent controls over change. It is attractive to add functionality, but the increase in size and complexity leads to more vulnerabilities. Sometimes it is useful to go back to first principles and recode using a stringent approach.
    I provided an example of this in my blog AWS Security and Compliance Update. AWS has created a replacement for the OpenSSL TLS implementation - S2N. S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub Repository.

  2. Organizations need to demand maintenance as part of the development of code by third parties. This avoids the need to keep out-of-date infrastructure components for compatibility.
    The infrastructure, whether on premise or hosted, should be kept up to date. This will require change management processes to ensure that changes do not impact operation. This should be supported by regular vulnerability scanning of operational IT systems using one of the many tools available, together with remediation of the vulnerabilities detected.

IT systems need to have a managed lifecycle. It is not good enough to develop, deploy and forget.

ISO/IEC 27017 – was it worth the wait?

On November 30th, 2015 the final version of the standard ISO/IEC 27017 was published.  This standard provides guidelines for information security controls applicable to the provision and use of cloud services.  This standard has been some time in gestation and was first released as a draft in spring 2015.  Has the wait been worth it?  In my opinion yes.

The gold standard for information security management is ISO/IEC 27001 together with the guidance given in ISO/IEC 27002.  These standards remain the foundation, but the guidelines are largely written on the assumption that an organization processes its own information.  The increasing adoption of managed IT and cloud services, where responsibility for security is shared, is challenging this assumption.  This is not to say that these standards and guidelines are not applicable to the cloud; rather, they need interpretation in a situation where the information is being processed externally.  The ISO/IEC 27017 and ISO/IEC 27018 standards provide guidance to deal with this.

ISO/IEC 27018, which was published in 2014, establishes controls and guidelines to protect Personally Identifiable Information (PII) in the public cloud computing environment.  The guidelines are based on those specified in ISO/IEC 27002, with control objectives extended to include the requirements needed to satisfy the privacy principles in ISO/IEC 29100.  These are easily mapped onto the existing EU privacy principles.  This standard is extremely useful in helping an organization assure compliance when using a public cloud service to process personally identifiable information.  Under these circumstances the cloud customer is the Data Controller and, under current EU laws, remains responsible for processing breaches by the Data Processor.  To provide this level of assurance, some cloud service providers have obtained independent certification of their compliance with this standard.

The new ISO/IEC 27017 provides guidance that is much more widely applicable to the use of cloud services.  Specific guidance is provided for 37 of the existing ISO/IEC 27002 controls; separate but complementary guidance is given for the cloud service customer and the cloud service provider.  This emphasizes the shared responsibility for security of cloud services.  This includes the need for the cloud customer to have policies for the use of cloud services and for the cloud service provider to provide information to the customer.

For example, as regards restricting access (ISO 27001 control A.9.4.1), the guidance is as follows (a minimal illustration of the customer side follows the list):

  • The cloud service customer should ensure that access to information in the cloud service can be restricted in accordance with its access control policy and that such restrictions are realized.
  • The cloud service provider should provide access controls that allow the cloud service customer to restrict access to its cloud services, its cloud service functions and the cloud service customer data maintained in the service.
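As a minimal, hypothetical sketch of the customer side of this shared control: a cloud service customer using AWS S3 might enforce part of its access control policy by applying a bucket policy through the provider’s API. The bucket name and the policy content below are illustrative assumptions only, not guidance taken from the standard.

```python
# Minimal sketch: a cloud customer using provider-supplied access controls
# (here an S3 bucket policy applied via boto3) to enforce part of its own
# access control policy. The bucket name and policy are illustrative
# assumptions; requires AWS credentials with permission to set the policy.
import json
import boto3

BUCKET = "example-customer-data"   # hypothetical bucket name

# Deny any access to the bucket that is not made over an encrypted connection.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
print(f"Applied transport-encryption policy to {BUCKET}")
```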

In addition, the standard includes seven new controls that are relevant to cloud services.  These controls are numbered to fit with the relevant existing ISO/IEC 27002 controls and cover:

  • Shared roles and responsibilities within a cloud computing environment
  • Removal and return of cloud service customer assets
  • Segregation in virtual computing environments
  • Virtual machine hardening
  • Administrator's operational security
  • Monitoring of cloud services
  • Alignment of security management for virtual and physical networks

 

In summary, ISO/IEC 27017 provides very useful guidance, and KuppingerCole recommends that both cloud customers and cloud service providers follow it. While it is helpful for cloud service providers to obtain independent certification that they comply with this standard, this does not remove the customer’s responsibility for ensuring that they also follow the guidance.

KuppingerCole has conducted extensive research into cloud service security and compliance and into cloud service providers, as well as engaging with cloud service customers.  This research has led to a deep understanding of the real risks around the use of cloud services and how to approach these risks to safely gain the potential benefits.  We have created services, workshops and tools designed to help organizations manage their adoption of cloud services in a secure and compliant manner while preserving the advantages that these kinds of IT service bring.

Why Governance Matters to IT Security

MetricStream, a US company that supplies Governance, Risk and Compliance applications, held their GRC Summit in London on November 11th and 12th.  Governance is important to organizations because of the increasing burden of regulations and laws upon their operations.  It is specifically relevant to IT security because these regulations touch upon the data held in the IT systems.  It is also highly relevant because of the wide range of IT service delivery models in use today.

Organizations using IT services provided by a third party (for example a cloud service provider) no longer have control over the details of how that service is delivered.  This control has been delegated to the service provider.  However, the organization remains responsible for ensuring that the data is processed and held in a compliant way.  This is the challenge that governance can address and why governance of IT service provision is becoming so important.

The distinction between governance and management is clearly defined in COBIT 5. Governance ensures that business needs are clearly defined and agreed and that they are satisfied in an appropriate way.  Governance sets priorities and the way in which decisions are made; it monitors performance and compliance against agreed objectives.  Governance is distinct from management in that management plans, builds, runs and monitors activities in alignment with the direction set by the governance body to achieve the objectives.

Governance provides an approach to IT security that can be applied consistently across the many different IT service delivery models.  By focussing on the business objectives and monitoring outcomes, it decouples the activities involved in providing the service from those concerned with its consumption.  Most large organizations have a complex mix of IT services provided in different ways: on premise managed internally, on premise managed by a third party, hosted services and cloud services.  Governance provides a way for organizations to ensure that IT security and compliance can be directed, measured and compared across this range of delivery models in a consistent way.

Since this specification and measurement process can involve large amounts of data from a wide variety of sources, it helps to use a common governance framework (such as COBIT 5) and a technology platform such as the MetricStream GRC Platform.  This platform provides centralized storage of and access to risk and compliance data, and a set of applications that allow this data to be consumed from a wide variety of sources and the results shared through a consistent user interface available on different devices.

The need for this common platform and integrated approach was described at the event by Isabel Smith, Director of Corporate Internal Audit at Johnson & Johnson.  Ms Smith explained that an integrated approach is particularly important because Johnson & Johnson has more than 265 operating companies located in 60 countries around the world, with more than 125,000 employees.  These operating companies have a wide degree of autonomy to allow them to meet local needs.  However, the global organization must comply with regulations ranging from financial, such as Sarbanes-Oxley, to those relating to health care and therapeutic products. Using the common platform enabled Johnson & Johnson to achieve a number of benefits, including getting people across the organization to use a common language around compliance and risk, streamlining and standardizing policies and controls, and obtaining an integrated view of control test results.

In conclusion, organizations need to take a governance-led approach to IT security across the heterogeneous IT service delivery models in use today.  Many of these are outside the direct control of the customer organization, and their use places control of the service and infrastructure in the hands of a third party.  A governance-based approach allows trust in the service to be assured through a combination of internal processes, standards and independent assessments.  Adopting a common governance framework and technology platform is an important enabler for this.

AWS Security and Compliance Update

Security is a common concern of organizations adopting cloud services and so it was interesting to hear from end users at the AWS Summit in London on November 17th how some organizations have addressed these concerns.

Financial services is a highly regulated industry with a strong focus on information security.  At the event, Allan Brearley, Head of Transformation Services at Tesco Bank, described the challenges they faced exploiting cloud services to innovate and reduce cost, while ensuring security and compliance.  The approach that Tesco Bank took, which is the one recommended in KuppingerCole Advisory Note: Selecting your Cloud Provider, is to identify and engage with the key stakeholders.  According to Mr Brearley, it is important to adopt a culture of satisfying all of the stakeholders’ needs, all of the time.

In the UK, the government has a cloud-first strategy. Government agencies using cloud services must follow the Cloud Security Principles, first issued by the UK Communications-Electronics Security Group (CESG) in 2014.  These describe the need to take a risk-based approach to ensure suitability for purpose.  Rob Hart of the UK DVSA (Driver & Vehicle Standards Agency), which is responsible for road safety in the UK, described the DVSA’s journey to the adoption of AWS cloud services.  Mr Hart explained that the information being migrated to the cloud was classified according to UK government guidelines as “OFFICIAL”, which is equivalent to commercially sensitive or Personally Identifiable Information.  The key to success, according to Mr Hart, was to involve the Information Security Architects from the very beginning.  This was helped by these architects being in the same office as the DVSA cloud migration team.

AWS has always been very open that the responsibility for security is shared between AWS and the customer.  AWS publish their “Shared Responsibility Model” which distinguishes between the aspects of security that AWS are responsible for, and those for which the customer is responsible. 

Over the past months AWS has made several important announcements around the security and compliance aspects of its services.  There are too many to cover here, so I have chosen three around compliance and three around security.  First, the compliance announcements include:

  • ISO/IEC 27018:2014 – AWS has published a certificate of compliance with this ISO standard which provides a code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.

  • UK CESG Cloud Security Principles.  In April 2015 AWS published a whitepaper to assist organisations using AWS for United Kingdom (UK) OFFICIAL classified workloads in alignment with CESG Cloud Security Principles.

  • Security by Design – In October 2015 AWS published a whitepaper describing a four-phase approach for security and compliance at scale across multiple industries.  This points to the resources available to AWS customers to implement security into the AWS environment, and describes how to validate that controls are operating.

Several new security services were also announced at AWS re:Invent in October.  The functionality provided by these services is not unique; however, it is tightly integrated with AWS services and infrastructure.  These services therefore provide extra benefits to a customer that is prepared to accept the risk of added lock-in.  Three of these are:

  • Amazon Inspector – this service, which is in preview, scans applications running on EC2 for a wide range of known vulnerabilities. It includes a knowledge base of rules mapped to common security compliance standards (e.g. PCI DSS) as well as up-to-date known vulnerabilities (a minimal sketch of retrieving its findings follows this list).

  • AWS WAF – this is a Web Application Firewall that can detect suspicious network traffic.  It helps to protect web applications from attack by blocking common web exploits such as SQL injection and cross-site scripting.

  • S2N Open Source implementation of TLS – this is a replacement created by AWS for the TLS implementation in the commonly used OpenSSL (which contained the “Heartbleed” vulnerability).  S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code.  This code has been contributed to Open Source and is available from the S2N GitHub Repository.
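As a minimal, hypothetical sketch of consuming Amazon Inspector results programmatically (referred to in the list above), the snippet below lists findings via boto3 and prints their severity and title. It assumes configured AWS credentials and at least one completed assessment run; it is an illustration, not a complete integration.

```python
# Minimal sketch: retrieve Amazon Inspector (Classic) findings with boto3 and
# print their severity and title. Assumes configured AWS credentials and at
# least one completed assessment run; illustration only.
import boto3

inspector = boto3.client("inspector")

# Collect finding ARNs (paginated), then fetch their details in small batches.
finding_arns = []
paginator = inspector.get_paginator("list_findings")
for page in paginator.paginate():
    finding_arns.extend(page["findingArns"])

for i in range(0, len(finding_arns), 10):
    batch = finding_arns[i:i + 10]
    details = inspector.describe_findings(findingArns=batch)
    for finding in details["findings"]:
        print(f"[{finding['severity']}] {finding['title']}")
```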

AWS has taken serious steps to help customers using its cloud services to do so in a secure manner and to assure that they remain compliant with laws and industry regulations.  The customer experiences presented at the event confirm that AWS’s claims around security and compliance are supported in real life.  KuppingerCole recommends that customers using AWS services should make full use of the security and compliance functions and services provided by AWS.

Building a Cyber Defence Centre: IBM’s rules for success

According to GCHQ, the number of cyber-attacks threatening UK national security has doubled in the past 12 months. How can organizations protect themselves against this growing threat, especially when statistics show that most data breaches are only discovered some time after the attack took place? One important approach is to create a Cyber Defence Centre to implement and co-ordinate the activities needed to protect against, detect and respond to cyber-attacks.

The Cyber Defence Centre has evolved from the SOC (Security Operations Centre). It supports the processes for enterprise security monitoring, defence, detection and response to cyber-based threats. It exploits Real Time Security Intelligence (RTSI) to detect these threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that there is a real threat.

A Cyber Defence Centre is not cheap or easy to implement so most organizations need help with this from an organization with real experience in this area. At a recent briefing IBM described how they have evolved a set of best practice rules based on their analysis of over 300 SOCs. These best practices include:

The first and most important of these rules is to understand the business perspective of what is at risk. It has often been the case that the SOC would focus on arcane technical issues rather than the business risk. The key objective of the Cyber Defence Centre is to protect the organization’s business critical assets. It is vital that what is business-critical is defined by the organization’s business leaders rather than the IT security group.

Many SOCs have evolved from NOCs (Network Operation Centres) – however the NOC is not a good model for cyber-defence. The NOC is organized to detect, manage and remediate what are mostly technical failures or natural disasters rather than targeted attacks. Its objective is to improve service uptime and to restore service promptly after a failure. On the other hand, the Cyber Defence Centre has to deal with the evolving tactics, tools and techniques of intelligent attackers. Its objective is to detect these attacks while at the same time protecting the assets and capturing evidence. The Cyber Defence Centre should assume that the organizational network has already been breached. It should include processes to proactively seek attacks in progress rather than passively wait for an alarm to be raised.

The Cyber Defence Centre must adopt a systematized and industrialized operating model. An approach that depends upon individual skills is neither predictable nor scalable. The rules and processes should be designed using the same practices as for software, with proper versioning and change control. The response to a class of problem needs to be worked out together with the rules on how to detect it; when the problem occurs is not a good time to figure out what to do. Measurement is critical: you can only manage what you can measure, and measurement allows you to demonstrate the changing levels of threats and the effectiveness of the cyber defence.

Finally, as explained by Martin Kuppinger in his blog Your future Security Operations Center (SOC): Not only run by yourself, it is not necessary or even practical to operate all of the cyber defence activities yourself. Enabling this sharing of activities needs a clear model of how the Cyber Defence Centre will be operated. This should cover the organization and the processes as well as the technologies employed. It is essential for deciding what to retain internally and for defining what is outsourced in an effective manner. Once again, an organization will benefit from help to define and build this operational model.

At the current state of the art for Cyber Defence, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and because of the complexity of the analysis required to distinguish them. This up-to-date knowledge needs to be delivered as part of the Cyber Defence Centre solution.

KuppingerCole Advisory Note: Real Time Security Intelligence provides an in-depth look at this subject.

Real Time Security Intelligence (RTSI)

Organizations depend upon their IT systems and the information that they provide to operate and grow. However, the information that these systems contain and the infrastructure upon which they depend are under attack. Statistics show that most data breaches are detected by agents outside of the organization rather than by internal security tools. Real Time Security Intelligence (RTSI) seeks to remedy this.

Unfortunately, many organizations fail to take simple measures to protect against known weaknesses in infrastructure and applications. However, even those organizations that have taken these measures are subject to attack. The preferred technique of attackers is increasingly one of stealth; the attacker wants to gain access to the target organization’s systems and data without being noticed. The more time the attacker has for undetected access, the greater the opportunity to steal data or cause damage.

Traditional perimeter security devices like firewalls, IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems) are widely deployed. These tools are effective at removing certain kinds of weaknesses. They also generate alerts when suspicious events occur; however, the volume of events is such that it is almost impossible to investigate each as it occurs. Whilst these devices remain an essential part of the defence, for the agile business using cloud services, with mobile users and direct connections to customers and partners, there is no perimeter and they are not sufficient.

SIEM (Security Information and Event Management) was promoted as a solution to these problems. However, in reality SIEM is a set of tools that can be configured and used to analyse event data after the fact and to produce reports for auditing and compliance purposes. While it is a core security technology, it has not been successful at providing actionable security intelligence in real time.

This has led to the emergence of a new technology: Real Time Security Intelligence (RTSI). This is intended to detect threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that there is a real threat.
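To make the idea concrete, the sketch below shows, in a deliberately simplified form, one analytic technique of this kind: alerting only on event sources whose activity deviates strongly from a learned baseline, rather than on every event. The sample data, the baseline window and the three-sigma threshold are illustrative assumptions, not a description of any specific RTSI product.

```python
# Deliberately simplified sketch of baseline-and-deviation analytics: instead
# of alerting on every event, learn a per-host baseline of hourly event counts
# and raise an alarm only when activity deviates strongly from it.
# The sample data and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

# Hourly failed-login counts per host over a previous period (baseline window).
baseline = {
    "web-01": [4, 6, 5, 7, 5, 6, 4, 5],
    "db-01":  [1, 0, 2, 1, 1, 0, 1, 2],
}

# Counts observed in the most recent hour.
current = {"web-01": 6, "db-01": 55}

def is_anomalous(history: list[int], observed: int, sigmas: float = 3.0) -> bool:
    """Flag observations more than `sigmas` standard deviations above the mean."""
    mu, sd = mean(history), stdev(history)
    return observed > mu + sigmas * max(sd, 1.0)   # floor sd to damp tiny baselines

for host, count in current.items():
    if is_anomalous(baseline[host], count):
        print(f"ALARM: {host} shows {count} events this hour "
              f"(baseline ~{mean(baseline[host]):.1f})")
```

Real RTSI platforms apply far richer models across many data sources, but the principle is the same: turn a flood of events into a handful of high-confidence alarms.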

At the current state of the art for RTSI, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and because of the complexity of the analysis required to distinguish them. This up-to-date knowledge needs to be delivered as part of the RTSI solution.

The volume of threats to IT systems, their potential impact and the difficulty of detecting them are the reasons why real time security intelligence has become important. However, RTSI technology is at an early stage and the problem of calibrating normal activity still requires considerable skill. It is important to look for a solution that can easily build on the knowledge and experience of the IT security community, vendors and service providers. End-user organizations should always opt for solutions that include managed services and pre-configured analytics, not just tools.

KuppingerCole Advisory Note: Real Time Security Intelligence - 71033 provides an in depth look at this subject.
