As already discussed in one of our earlier newsletters, the Internet of Things is by no means a new concept – various smart devices capable of communicating with each other and their operators have long been used in manufacturing, the automotive industry, healthcare and even at home. These “Things” range from popular consumer products for home automation to enterprise devices like RFID tags all the way through to industrial sensors controlling critical processes like manufacturing or power generation. There is actually very little in common between them other than their reliance on standard network protocols for communicating over the existing Internet. Oh, and the complete lack of security.
Unfortunately, security has been an afterthought for most embedded hardware vendors for decades. Companies designing consumer products were more interested in bringing their products to market as fast as possible, and industrial control system vendors seemingly still live in an alternate universe where industrial networks are isolated from the Internet. In our reality, however, things have already changed dramatically. Simply because of the sheer scale and interoperability (at least at the network protocol level) that define the modern IoT, it introduces a substantial number of new risks and attack surfaces.
First, the vast number of IoT devices out there makes it increasingly difficult not just to control and manage them, but also to update them if a vulnerability is discovered (if the device in question supports updates at all). Also, the proliferation of connected devices greatly increases the chances for hackers to compromise a poorly secured device and use it as a foothold to attack other devices on the network.
Another obvious challenge is that safety becomes much more critical. If a medical device like a pacemaker or an insulin pump is hacked, the patient’s life is at stake, not just their health record. A compromised connected car can cause traffic accidents. An attack on a piece of industrial equipment can cause critical disruptions or lead to industrial disasters (and even if no lives are lost, the financial and legal consequences will be huge anyway).
The identity and privacy implications of IoT proliferation can be massive as well. The information that can be leaked or stolen from unprotected smart sensors is much more sensitive than, say, your email account. Health records, location and habit history, home surveillance – all this data has to be protected accordingly. Solving the identity management challenge on a global scale is a separate and very daunting task, which vendors are only beginning to tackle.
However, although security experts have long realized that IoT has no room for weak security, this mindset is yet to catch on among the IoT manufacturers. Many of them either have no expertise in security or cannot afford to spend much on it (this is especially true for consumer products built upon existing commodity hardware from third-party manufacturers). The lack of established standards and protocols is another inhibiting factor.
So, where do we even begin to address these problems? On one hand, it seems that IoT device manufacturers are primarily responsible for making their products more secure. Security by Design and Privacy by Design must become mandatory parts of their design processes. Vendors have to incorporate security features into their solutions on all levels, from device firmware to service provider infrastructures to training their employees accordingly. They also must minimize data collection, store only the information that’s required for their devices to function, and ensure that all applicable privacy regulations are addressed. Finally, they must provide continuous security updates and patches for the whole lifecycle of their products. Obviously, they must be both incentivized by government agencies for complying with these requirements and punished for violating them. They should also look to join various industry groups and technology alliances to get access to the latest standards and best practices.
However, it’s also obvious that we cannot rely on the vendors alone to address this massive and multifaceted problem. Designing a proper security infrastructure for modern “hyperconnected” businesses requires a holistic approach, where various security, privacy-enhancing and identity management solutions are operating in accord, orchestrated and monitored from a central management console. Emergence of new standards and open APIs in the IoT field to support such scenarios is therefore critical. Providing flexible identity management and fine-grained access control is especially important here, and many existing IAM tools are yet to be adapted to support the sheer scale and inherently heterogeneous nature of the Internet of Things.
It is also worth stressing that solving the IoT security challenge isn’t limited to addressing technology issues. To fulfill the often conflicting requirements and expectations of all parties involved, a lot of legal and liability issues have to be solved as well. And there are many more parties involved than many expect. For connected vehicles, for example, we have to think not just about relationships between car manufacturers and drivers, but also about insurance companies, auto mechanics, environmental protection agencies and, of course, the police.
Last but not least, we always have to think about consumer choice and consent. Giving users control over the collection and sharing of their sensitive personal data by IoT devices can be not just a great business enabler for device manufacturers, but also a strong security and privacy-enhancing factor.
In the end, the Internet of Things is here to stay. It provides a great number of new opportunities, but also introduces quite a few new risks. These risks can only be addressed by the combined effort of IoT device manufacturers, “traditional” IT security and IAM vendors, technology alliances and standards bodies, governments and end users. Only together can we ensure that “Industry 4.0” won’t one day turn into “Skynet 1.0”.
IoT (Internet of Things) and Smart Manufacturing are part of the ongoing digital transformation of businesses. IoT is about connected things, from sensors to consumer goods such as wearables. Smart Manufacturing, also sometimes titled Industry 4.0, is about bridging the gap between the business processes and the production processes, i.e. manufacturing goods.
In both areas, security is a key concern. When connecting things, both the things and the central systems receiving data back from them must be sufficiently secure. When connecting business IT and operational IT (OT, for Operational Technology), systems that formerly sat behind an “air gap” frequently become directly connected. The simple rule behind all this is: “Once a system is connected, it can be attacked” – via that connection. Connecting things and moving forward to Smart Manufacturing thus inevitably increases the attack surface.
Traditionally, if there is a separate security (and not only a “safety”) organization in OT, it is segregated from the (business) IT department and the Information Security and IT Security organization. For the things, there commonly is no defined security department at all. The logical solution when connecting everything would appear to be a central security department that oversees all security – in business IT, in OT, and in things. However, this is only partially correct.
Things must be constructed following the principles of security by design and privacy by design from the very beginning. Security must not be an afterthought. Notably, this also increases agility. Thus, the people responsible for implementing security must reside in the departments creating the “things”. Security must become an integral part of the organization.
For OT, there is a common gap between the safety view of OT and the security perspective of IT. However, safety and security are not a dichotomy – we need to find ways of supporting both, in particular by modernizing the architecture of OT, well beyond security. Again, security has to be considered here at every stage. Thus, security execution should also be an integral part of, for instance, planning plants and production lines.
Notably, the same applies for IT. Security must not be an afterthought. It must move into the DNA of the entire organization. Software development, procurement, system management etc. all have to think about security as part of their daily work.
Simply put: major parts of security must move into the line-of-business departments. There are some cross-functional areas, e.g. around the underlying infrastructure, that still need to be executed centrally (plus, potentially, service centers for software development etc.) – but particularly when it comes to things, security must become an integral part of R&D.
On the other hand, the new organization also needs a strong central element. While the “executive” element will become increasingly decentralized, the “legislative” and “judicative” elements must be central – across all functions, i.e. business IT, OT, and IoT. In other words: governance, setting the guidelines and governing their correct execution, is a central task that must span and cover all areas of the connected enterprise.
IoT, the Internet of Things, covers a wide range of technologies. My Fitbit, for example, is an IoT device: it connects to my smartphone, which formats the data collected on my movements. Vehicles that communicate with diagnostic instruments and the home thermostat I can control via the Internet are IoT gadgets, too.
This article, however, is concerned with a very particular type of IoT device: a sensor or actuator that is used in an industrial control system (ICS). There are many changes occurring in the industrial control sector; the term Industry 4.0 has been coined to describe this fourth-generation disruption.
A typical ICS configuration looks like the following:
- The SCADA display unit shows the process under management in a graphic display. Operators can typically use the SCADA system to enter controls to modify the operation in real-time.
- The Control Unit is the main processing unit that attaches the remote terminal units to the SCADA system. The Control unit responds to the SCADA system commands.
- The Remote Terminal Unit (RTU) is a device, such as a Programmable Logic Controller (PLC), that connects one or more devices (monitors or actuators) to the control unit. It is typically positioned close to the process being managed or monitored, but the RTUs may be hundreds of kilometres away from the SCADA system.
Communication links can be Ethernet for a production system, a WAN link over the Internet, a private radio link for a distributed operation or a telemetry link for equipment in a remote area without communications facilities.
So what are the main concerns regarding IoT in the ICS space? As can be seen from the configuration above, there are two interfaces that need to be secured: the device-to-RTU interface and the fieldbus link between the RTU and the Control Unit.
The requirement on the device interface is for data integrity. In the past, ICS vendors have relied upon proprietary, unpublished interfaces, i.e. security by obscurity. This is not sustainable, because device suppliers are commoditising the sector and devices are increasingly becoming generic in nature. Fortunately, in many ICS environments these devices are close to the RTU and located in controlled areas.
The interface to the Control Unit is typically more vulnerable. If this link is compromised, the results can be catastrophic. The main requirement here is for confidentiality; the link should be encrypted if possible, and this should be taken into account when selecting a communications protocol. Manufacturing applications will often use MQTT, which supports encryption; electrical distribution systems will often use DNP3, which can support digital signatures. In other cases, poor-quality telemetry links must be used, and a proprietary protocol may then be the best option to avoid potential spoofing attacks.
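Where neither full encryption nor protocol-level signatures are available, even simple message authentication helps against spoofing. The following sketch uses a hypothetical frame format (not any real ICS protocol, and with deliberately simplified key handling) to show how telemetry frames can carry an HMAC tag, using only the Python standard library:

```python
import hmac
import hashlib
import struct

# Shared key provisioned on both the RTU and the control unit.
# In a real deployment this would come from secure key storage, not source code.
KEY = b"example-shared-key"

def pack_frame(sensor_id: int, value: float, seq: int) -> bytes:
    """Serialize a telemetry reading and append an HMAC-SHA256 tag.

    The sequence number is included so replayed frames can be rejected upstream.
    """
    payload = struct.pack("!HdI", sensor_id, value, seq)
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def unpack_frame(frame: bytes):
    """Verify the tag and return (sensor_id, value, seq), or None if forged."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # spoofed or corrupted frame
    return struct.unpack("!HdI", payload)

frame = pack_frame(sensor_id=7, value=21.5, seq=1)
print(unpack_frame(frame))                          # (7, 21.5, 1)
print(unpack_frame(frame[:-1] + bytes([frame[-1] ^ 1])))  # tampered tag -> None
```

This protects integrity and authenticity but not confidentiality; on links where eavesdropping matters, it complements rather than replaces encryption.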
One big benefit of the current developments in the ICS sector is the increasing support for security practices in operational technology. Whereas in the past there was a reliance on isolation of the ICS network, there is now an appreciation that security technology can protect sensitive systems while enjoying the benefits of accessibility. In fact, both worlds can be seen as siblings, focused on different parts of the enterprise, and promising possibilities to enable this duality already exist. But understanding the technology is also important: one home automation equipment supplier released a line of sensor equipment with an embedded digital certificate with a one-year validity – hardly appropriate for devices expected to operate for many years.
Conclusion: Despite all the – partly still unseen – benefits of connected things, there are many pitfalls in vulnerable industrial networks and a massive danger of doing IoT fundamentally wrong. The right path has still to be found, and the search for the best solutions is a constant discovery process. As always, one of the best ways to succeed is sharing one’s experiences and knowledge with others who are on the same journey.
Last week, CA Technologies announced several new products in its API Management portfolio. The announcement was made during the company’s annual CA World event, which took place November 16-20 in Las Vegas. This year, the key topic of the event was the Application Economy, so it is completely unsurprising that API management was a big part of the program. After all, APIs are one of the key technologies driving “digital transformation”, helping companies stay agile and competitive, enable new business models and open up new communication channels with partners and customers.
Whether companies are leveraging APIs to accelerate their internal application development, expose their business competence to new markets or adopt new technologies like software-defined computing infrastructures, they face a lot of complex challenges and have to rely on third-party solutions to manage their APIs. The API Management market, despite its relatively young age, has matured quickly, and CA Technologies has become one of the leading players there. In fact, just a few months ago KuppingerCole recognized CA as the overall leader in the Leadership Compass on API Security Management.
However, even a broad range of available solutions for publishing, securing, monitoring or monetizing APIs does not change the fact that before a backend service can be exposed as an API, it has to be implemented – that is, a team of skilled software developers is still required to bring your corporate data or intelligence into the API economy. Although quite a number of approaches exist to make the developer’s job as easy and efficient as possible (sometimes even eliminating the need for a standalone backend, as with the AWS Lambda service), business people are still unable to participate in this process on their own.
Well, apparently, CA is going to change that. The new CA Live API Creator is a solution that aims to eliminate programming from the process of creating data-driven APIs. For a lot of companies, joining the API economy means the need to unlock their existing data stores and make their enterprise data available for consumption through standard APIs. For these use cases, CA offers a complete solution to create REST endpoints that expose data from multiple SQL and NoSQL data sources using a declarative data model and a graphical point-and-click interface. By eliminating the need to write code or SQL statements manually, the company claims a tenfold time-to-market improvement and 40 times more concise logic rules. Most importantly, however, business people no longer need to involve software developers – the process seems easy and straightforward enough for them to manage on their own.
CA Live API Creator consists of three components:
- Database Explorer, which provides interactive access to the enterprise data across SQL and NoSQL data sources directly from a browser. With this tool, users can not just browse and search, but also manage this information and even create “back office apps” with graphical forms for editing the data across multiple tables.
- API Creator, the actual tool for creating data-driven APIs using a point-and-click GUI. It provides the means for designing data models, defining logical rules, managing access control and so on, all without the need to write application code or SQL statements. It’s worth stressing that it’s not a GUI-based code generator – the solution is based on an object model, which is directly deployed to the API server.
- The aforementioned API Server is responsible for execution of APIs, event processing and other runtime logic. It connects to the existing data sources and serves client requests to REST-based API endpoints.
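To make the declarative idea more concrete, here is a deliberately tiny, purely illustrative sketch in Python – not CA’s actual implementation, and all names and data here are invented – of how REST-style endpoints can be derived at runtime from a data model described as plain data instead of being hand-coded:

```python
# A declarative data model: resources and their fields, described as data.
MODEL = {
    "customers": {"fields": ["id", "name", "credit_limit"]},
    "orders":    {"fields": ["id", "customer_id", "amount"]},
}

# In-memory stand-in for the SQL/NoSQL sources a real product would query.
DATA = {
    "customers": [{"id": 1, "name": "Acme", "credit_limit": 1000}],
    "orders":    [{"id": 10, "customer_id": 1, "amount": 250}],
}

def handle(method: str, path: str):
    """Resolve GET /<resource>[/<id>] against the declared model.

    Returns an (http_status, body) tuple; unknown resources yield 404.
    """
    parts = path.strip("/").split("/")
    resource = parts[0]
    if resource not in MODEL or method != "GET":
        return 404, None
    rows = DATA[resource]
    if len(parts) == 2:  # e.g. GET /customers/1
        match = [r for r in rows if r["id"] == int(parts[1])]
        return (200, match[0]) if match else (404, None)
    return 200, rows

print(handle("GET", "/customers/1"))  # (200, {'id': 1, 'name': 'Acme', ...})
```

The point of the sketch is the design choice: adding a resource means editing the model, not writing new endpoint code – the same property CA claims for its object-model-based approach.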
Although the product hasn’t been released yet (it will become available in December), and although it should be clearly understood that it is by nature not a universal solution for all possible API use cases, we can already see a lot of potential. The very idea of eliminating software developers from the API publishing process is pretty groundbreaking, and if CA delivers on its promise to make the tool easy enough for business people, it will become a valuable addition to the company’s already first-class API management portfolio.
Security is a common concern of organizations adopting cloud services, so it was interesting to hear from end users at the AWS Summit in London on November 17th how some organizations have addressed these concerns.
Financial services is a highly regulated industry with a strong focus on information security. At the event, Allan Brearley, Head of Transformation Services at Tesco Bank, described the challenges they faced exploiting cloud services to innovate and reduce cost, while ensuring security and compliance. The approach that Tesco Bank took, which is the one recommended in the KuppingerCole Advisory Note: Selecting your Cloud Provider, is to identify and engage with the key stakeholders. According to Mr Brearley, it is important to adopt a culture of satisfying all of the stakeholders’ needs all of the time.
In the UK, the government has a cloud-first strategy. Government agencies using cloud services must follow the Cloud Security Principles, first issued by the UK Communications-Electronics Security Group (CESG) in 2014. These describe the need to take a risk-based approach to ensure suitability for purpose. Rob Hart of the UK DVSA (Driver & Vehicle Standards Agency), which is responsible for road safety in the UK, described the DVSA’s journey to the adoption of AWS cloud services. Mr Hart explained that the information being migrated to the cloud was classified according to UK government guidelines as “OFFICIAL” – equivalent to commercially sensitive or Personally Identifiable Information. The key to success, according to Mr Hart, was to involve the Information Security Architects from the very beginning. This was helped by these architects being in the same office as the DVSA cloud migration team.
AWS has always been very open that the responsibility for security is shared between AWS and the customer. AWS publishes its “Shared Responsibility Model”, which distinguishes between the aspects of security that AWS is responsible for and those for which the customer is responsible.
Over the past months AWS has made several important announcements around the security and compliance aspects of its services. There are too many to cover here, so I have chosen three around compliance and three around security. First, the announcements around compliance include:
- ISO/IEC 27018:2014 – AWS has published a certificate of compliance with this ISO standard which provides a code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.
- UK CESG Cloud Security Principles. In April 2015 AWS published a whitepaper to assist organisations using AWS for United Kingdom (UK) OFFICIAL classified workloads in alignment with CESG Cloud Security Principles.
- Security by Design – In October 2015 AWS published a whitepaper describing a four-phase approach for security and compliance at scale across multiple industries. It points to the resources available to AWS customers for implementing security in the AWS environment, and describes how to validate that controls are operating.
Several new security services were also announced at AWS re:Invent in October. The functionality provided by these services is not unique; however, it is tightly integrated with AWS services and infrastructure. Therefore, these services provide extra benefits to a customer that is prepared to accept the risk of added lock-in. Three of these include:
- Amazon Inspector – this service, which is in preview, scans applications running on EC2 for a wide range of known vulnerabilities. It includes a knowledge base of rules mapped to common security compliance standards (e.g. PCI DSS) as well as up to date known vulnerabilities.
- AWS WAF Web Application Firewall – this is a Web Application Firewall that can detect suspicious network traffic. It helps to protect web applications from attack by blocking common web exploits like SQL injection and cross-site scripting.
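As a rough illustration of the kind of pattern matching a web application firewall performs – these two signatures are deliberately naive, are easy to evade, and are not AWS WAF’s actual rules – a request parameter can be screened against known exploit patterns:

```python
import re

# Deliberately simplified signatures for two common web exploits,
# shown only to illustrate the concept of rule-based request inspection.
RULES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|union|--)", re.IGNORECASE),
    "xss":           re.compile(r"<\s*script", re.IGNORECASE),
}

def inspect(query_string: str) -> list:
    """Return the names of all rules the request parameter triggers."""
    return [name for name, rx in RULES.items() if rx.search(query_string)]

print(inspect("id=1' OR 1=1 --"))              # ['sql_injection']
print(inspect("q=<script>alert(1)</script>"))  # ['xss']
print(inspect("q=hello"))                      # []
```

Real WAF rule sets also normalize encodings, inspect headers and bodies, and rate-limit sources; the value of a managed service like AWS WAF is precisely that these rules are maintained and integrated for you.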
- S2N Open Source implementation of TLS – This is a replacement created by AWS for the commonly used OpenSSL (which contained the “Heartbleed” vulnerability). S2N replaces the 500,000 lines of code in OpenSSL with approximately 6,000 lines of audited code. This code has been contributed to Open Source and is available from the S2N GitHub repository.
AWS has taken serious steps to help customers using its cloud services to do so in a secure manner and to assure that they remain compliant with laws and industry regulations. The customer experiences presented at the event confirm that AWS’s claims around security and compliance are supported in real life. KuppingerCole recommends that customers using AWS services should make full use of the security and compliance functions and services provided by AWS.
According to GCHQ, the number of cyber-attacks threatening UK national security has doubled in the past 12 months. How can organizations protect themselves against this growing threat, especially when statistics show that most data breaches are only discovered some time after the attack took place? One important approach is to create a Cyber Defence Centre to implement and co-ordinate the activities needed to protect against, detect and respond to cyber-attacks.
The Cyber Defence Centre has evolved from the SOC (Security Operations Centre). It supports the processes for enterprise security monitoring, defence, detection and response to cyber-based threats. It exploits Real Time Security Intelligence (RTSI) to detect these threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms where there is high confidence that there is a real threat.
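As a minimal sketch of this reduction idea – the data, threshold and method are invented for the example, and real RTSI products use far more sophisticated analytics – a crude z-score test can flag event sources whose current volume deviates sharply from their own historical baseline:

```python
from statistics import mean, stdev

def alarms(history: dict, current: dict, threshold: float = 3.0) -> list:
    """Flag event sources whose current volume deviates strongly from
    their own baseline (a crude z-score test).

    history: {source: [hourly event counts, ...]}
    current: {source: this hour's count}
    """
    flagged = []
    for source, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # no variance in baseline; skip rather than divide by zero
        z = (current.get(source, 0) - mu) / sigma
        if abs(z) > threshold:
            flagged.append((source, round(z, 1)))
    return flagged

history = {
    "firewall": [100, 110, 95, 105, 98, 102],
    "vpn_gw":   [10, 12, 9, 11, 10, 8],
}
current = {"firewall": 104, "vpn_gw": 60}  # VPN gateway is suddenly very noisy
print(alarms(history, current))            # [('vpn_gw', 35.4)]
```

Thousands of raw events are thereby reduced to one ranked, explainable alarm per anomalous source – the kind of output an analyst can actually act on.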
A Cyber Defence Centre is not cheap or easy to implement, so most organizations need help with this from an organization with real experience in this area. At a recent briefing, IBM described how they have evolved a set of best-practice rules based on their analysis of over 300 SOCs. These best practices include the following.
The first and most important of these rules is to understand the business perspective of what is at risk. It has often been the case that the SOC would focus on arcane technical issues rather than the business risk. The key objective of the Cyber Defence Centre is to protect the organization’s business critical assets. It is vital that what is business-critical is defined by the organization’s business leaders rather than the IT security group.
Many SOCs have evolved from NOCs (Network Operation Centres) – however the NOC is not a good model for cyber-defence. The NOC is organized to detect, manage and remediate what are mostly technical failures or natural disasters rather than targeted attacks. Its objective is to improve service uptime and to restore service promptly after a failure. On the other hand, the Cyber Defence Centre has to deal with the evolving tactics, tools and techniques of intelligent attackers. Its objective is to detect these attacks while at the same time protecting the assets and capturing evidence. The Cyber Defence Centre should assume that the organizational network has already been breached. It should include processes to proactively seek attacks in progress rather than passively wait for an alarm to be raised.
The Cyber Defence Centre must adopt a systematized and industrialized operating model. An approach that depends upon individual skills is neither predictable nor scalable. The rules and processes should be designed using the same practices as for software, with proper versioning and change control. The response to a class of problem needs to be worked out together with the rules on how to detect it. When the problem occurs is not a good time to figure out what to do. Measurement is critical – you can only manage what you can measure, and measurement allows you to demonstrate the changing levels of threats and the effectiveness of the cyber defence.
Finally, as explained by Martin Kuppinger in his blog post “Your future Security Operations Center (SOC): Not only run by yourself”, it is not necessary or even practical to operate all of the cyber defence activities yourself. Enabling this sharing of activities needs a clear model of how the Cyber Defence Centre will be operated. This should cover the organization and the processes as well as the technologies employed. This is essential to decide what to retain internally and to define what is outsourced in an effective manner. Once again, an organization will benefit from help to define and build this operational model.
At the current state of the art for Cyber Defence, Managed Services are an essential component. This is because of the rapid evolution of threats, which makes it almost impossible for a single organization to keep up to date, and because of the complexity of the analysis required to distinguish real threats. This up-to-date knowledge needs to be delivered as part of the Cyber Defence Centre solution.
KuppingerCole Advisory Note: Real Time Security Intelligence provides an in-depth look at this subject.
Microsoft and Secure Islands today announced that Microsoft is to acquire Secure Islands. Secure Islands is a provider of automated classification for documents and further technologies for protecting information. The company already has tight integration into Microsoft’s Azure Rights Management Services (RMS), a leading-edge solution for Secure Information Sharing.
After completing the acquisition, Microsoft plans full integration of Secure Islands’ technology into Azure RMS, which will further enhance the capabilities of the Microsoft product, in particular by enabling interception of data transfer from various sources on-premise and in the cloud, and by automated and, if required, manual classification.
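As a purely illustrative sketch of what automated, content-based classification means in principle – the patterns and labels below are invented, and Secure Islands’ actual engine is far more capable – a document can be assigned a protection label based on detected content:

```python
import re

# Invented example rules: map content patterns to classification labels,
# ordered from most to least sensitive; the first match wins.
CLASSIFICATION_RULES = [
    ("Confidential", re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b")),  # card-like numbers
    ("Internal",     re.compile(r"\b(salary|contract|invoice)\b", re.IGNORECASE)),
]

def classify(text: str) -> str:
    """Return the most sensitive label whose pattern matches, else 'Public'."""
    for label, pattern in CLASSIFICATION_RULES:
        if pattern.search(text):
            return label
    return "Public"

print(classify("Customer card: 4111 1111 1111 1111"))  # Confidential
print(classify("Draft invoice for Q3"))                # Internal
print(classify("Lunch menu for Friday"))               # Public
```

In a rights-management setting, such a label would then drive the protection applied to the document (e.g. which RMS policy template is attached), so classification quality directly determines protection quality.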
Today’s announcement confirms Microsoft's focus and investment in the Secure Information Sharing market, with protecting information at the information source (e.g. the document) itself being one of the essential elements of any Information Security strategy. Protecting what really needs to be protected – the information – obviously (and if done right) is the best strategy for Information Security, in contrast to indirect approaches such as server security or network security.
By integrating Secure Islands' capabilities directly into Microsoft Azure RMS, Microsoft now can deliver an even more comprehensive solution to its customers. Furthermore, Microsoft continues working with its Azure RMS partner ecosystem in providing additional capabilities to its customers.
There is no doubt that organizations need both a plan for what happens in case of security incidents and a way to identify such incidents. For organizations that either have high security requirements or are sufficiently large, the standard way of identifying such incidents is setting up a Security Operations Center (SOC).
However, setting up a SOC is not that easy. There are a number of challenges. The three major ones (aside from funding) are:
- People
- Integration & Processes
- Technology
The list is, from our analysis, ordered according to the complexity of the challenges. Clearly, the biggest challenge as of today is finding the right people. Security experts are rare, and they are expensive. Furthermore, for running a SOC you not only need subject matter experts for network security, SAP security, and other areas of security. In these days of a growing number of advanced attacks, you will also need people who understand the correlation of events at various levels and in various systems. These are even more difficult to find.
The second challenge is integration. A SOC does not operate independently from the rest of your organization. There is a need for technical integration into Incident Management, IT GRC, and other systems such as Operations Management for automated reactions to known incidents. Incidents must be handled efficiently and in a defined way. Beyond the technical integration, there is a need for well thought-out processes for incident and crisis management or, as it is commonly named, Breach & Incident Response.
The third area is technology. Such technology must be adequate for today’s challenges. Traditional SIEM (Security Information and Event Management) isn’t sufficient anymore. SIEM solutions might complement other solutions, but there needs to be a strong focus on analytics and anomaly detection. From our perspective, the overarching trend goes towards what we call RTSI - Real Time Security Intelligence. RTSI is more than just a tool, it is a combination of advanced analytical capabilities and managed services.
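As a toy example of the kind of multi-event correlation that goes beyond single-event SIEM rules – the rule, thresholds and data here are invented for illustration – consider flagging a burst of failed logins followed by a success from the same source, a pattern no individual event reveals on its own:

```python
from collections import defaultdict

def brute_force_then_success(events, fail_threshold=5, window=300):
    """Flag source IPs with a burst of failed logins followed by a success
    within `window` seconds -- a classic cross-event correlation.

    events: list of (timestamp, source_ip, outcome) with outcome "fail"/"ok".
    """
    fails = defaultdict(list)
    alerts = []
    for ts, ip, outcome in sorted(events):
        if outcome == "fail":
            fails[ip].append(ts)
        else:
            # A success: check how many failures preceded it within the window.
            recent = [t for t in fails[ip] if ts - t <= window]
            if len(recent) >= fail_threshold:
                alerts.append(ip)
            fails[ip].clear()
    return alerts

events = [(i, "10.0.0.5", "fail") for i in range(6)] + [(10, "10.0.0.5", "ok"),
          (12, "10.0.0.9", "ok")]
print(brute_force_then_success(events))  # ['10.0.0.5']
```

A SIEM rule looking only at individual failed logins would drown this signal in noise; correlating the sequence is what turns raw events into an anomaly worth investigating.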
We see a growing demand for these solutions – I’d rather say that customers are eagerly awaiting the vendors delivering mature RTSI solutions, including comprehensive managed services. There is more demand than delivery today. Time for the vendors to act. And time for customers to move to the next level of SOCs, well beyond SIEM.
With the ever-growing number of new security threats and continued deterioration of traditional security perimeters, demand for new security analytics tools that can detect those threats in real time is growing rapidly. Real-Time Security Intelligence solutions are going to redefine the way existing SIEM tools are working and finally provide organizations with clearly ranked actionable items and highly automated remediation workflows.
Various market analysts predict that security analytics solutions will grow into a multibillion market within the next five years. Many vendors, big and small, are now rushing to bring their products to this market in anticipation of its potential. However, the market is still far from reaching the stage of maturity. First, the underlying technologies have not themselves reached full maturity yet, with areas like machine learning and threat intelligence still being constantly developed. Second, very few vendors possess enough intellectual property or resources to integrate all these technologies into a single universal solution.
In a sense, the RTSI segment is the frontier of the overall market for information security solutions. When selecting the tools most appropriate for their requirements, customers thus have to be especially careful and should not take vendors’ claims for granted. Support for different data sources, the scope of anomaly detection and usability in general may vary significantly.
We should expect the market to settle within a few years, with the broad range of products available today eventually converging to a reasonable number, but we are still far from that point. While some vendors opt for evolutionary development of their existing products, others pursue strategic acquisitions. At the same time, smaller companies and even startups are bringing niche products to the market, aiming at customers looking for point solutions to their most critical problems. The resulting multitude of offerings makes them difficult to compare, and it is even harder to predict in which direction the market will evolve. We can, however, name a few notable vendors from different strata of the RTSI market to at least give you an idea of where to start looking.
First, large vendors currently offering “traditional” SIEM solutions are obviously interested in bringing their products up to date with the latest technological developments. These include IBM Security with their QRadar SIEM and Guardium products and significantly improved analytics capabilities, RSA with their Security Analytics platform, NetIQ with Sentinel, as well as smaller vendors like Securonix or LogRhythm.
Another class of vendors are companies coming from the field of cybersecurity. Their products focus more on the detection and prevention of external and internal threats, and by integrating big data analytics with their own or third-party sources of threat intelligence, they naturally evolve into RTSI solutions that are leaner and easier to deploy than traditional SIEMs and are targeted at smaller organizations. Notable examples here are CyberArk with Privileged Threat Analytics as part of their Privileged Account Security solution, Hexis Cyber Solutions with their HawkEye G and HawkEye AP analytics platforms, or AlienVault with their Unified Security Management offering. Another important, yet much less represented, aspect of security intelligence is user behavior analytics, with vendors like BalaBit, whose Blindspotter tool was recently added to their portfolio, or Gurucul, which provides a number of specialized analytics solutions in that area.
Besides the bigger vendors, there are numerous startups whose products usually concentrate on a single source of analytics information, such as network traffic analysis, endpoint security or mobile security analytics. Their solutions are usually targeted at small and medium businesses and, although limited in functional scope, rely on ease of deployment, simplicity of the user interface and quality of support to win customers. For small companies without sufficient security budgets or expert teams, these products can be a blessing, because they quickly address their most critical security problems. To name just a few vendors here: Seculert with their cloud-based analytics platform, Cybereason with an unorthodox approach to endpoint security analytics, Cynet with their rapidly deployed integrated solution, Logtrust with a focus on log analysis, or Fortscale with a cloud-based solution for detecting malicious users.
Admittedly, such a large number of different solutions makes the RTSI market quite difficult to analyze and predict. On the other hand, almost any company will probably be able to find a product tailored specifically to its requirements. It is vital, however, to look for complete solutions with managed services and quality support, not just for another set of tools.
Organizations depend on their IT systems and the information those systems provide in order to operate and grow. However, that information and the infrastructure upon which it depends are under attack. Statistics show that most data breaches are detected by agents outside the organization rather than by internal security tools. Real-Time Security Intelligence (RTSI) seeks to remedy this.
Unfortunately, many organizations fail to take simple measures to protect against known weaknesses in infrastructure and applications. However, even organizations that have taken these measures are subject to attack. The preferred attack technique is increasingly one of stealth: the attacker wants to gain access to the target organization's systems and data without being noticed. The more time the attacker has for undetected access, the greater the opportunity to steal data or cause damage.
Traditional perimeter security devices like firewalls, IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems) are widely deployed. These tools are effective at removing certain kinds of weaknesses. They also generate alerts when suspicious events occur; however, the volume of these events is such that it is almost impossible to investigate each one as it occurs. Whilst these devices remain an essential part of the defence, for the agile business using cloud services, with mobile users and direct connections to customers and partners, there is no perimeter, and so they are not sufficient.
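To illustrate the scale problem, here is a minimal sketch (all alert data and field names are hypothetical) of the kind of aggregation such tools need: collapsing streams of repetitive IDS alerts into a short, counted summary that an analyst can actually review.

```python
from collections import Counter

# Hypothetical raw IDS alerts: (timestamp, signature, source_ip).
# A real deployment could produce thousands of these per minute.
raw_alerts = [
    ("2015-06-01T10:00:01", "SSH brute force", "10.0.0.5"),
    ("2015-06-01T10:00:02", "SSH brute force", "10.0.0.5"),
    ("2015-06-01T10:00:03", "SSH brute force", "10.0.0.5"),
    ("2015-06-01T10:05:00", "SQL injection attempt", "10.0.0.9"),
]

def summarize(alerts):
    """Collapse repeated (signature, source) pairs into counted summaries,
    so an analyst reviews a handful of lines instead of thousands of rows."""
    counts = Counter((sig, src) for _, sig, src in alerts)
    return [
        {"signature": sig, "source": src, "count": n}
        for (sig, src), n in counts.most_common()
    ]

for item in summarize(raw_alerts):
    print(item)
```

Even this trivial grouping shows why raw per-event review does not scale; it still leaves the harder question, addressed below, of which of the summarized items actually matter.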
SIEM (Security Information and Event Management) was promoted as a solution to these problems. In reality, however, SIEM is a set of tools that can be configured and used to analyse event data after the fact and to produce reports for auditing and compliance purposes. While it is a core security technology, it has not been successful at providing actionable security intelligence in real time.
This has led to the emergence of a new technology: Real-Time Security Intelligence (RTSI). It is intended to detect threats in real time or near real time, enabling action to be taken before damage is done. It uses techniques taken from big data and business intelligence to reduce the massive volume of security event data collected by SIEM to a small number of actionable alarms in which there is high confidence that a real threat exists.
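As a purely illustrative sketch of the baseline-and-deviation idea behind such analytics (all host names, thresholds and counts are hypothetical, and real RTSI products use far more sophisticated models than a simple z-score):

```python
import statistics

def detect_anomalies(event_counts, history, threshold=3.0):
    """Flag sources whose current event count deviates strongly from
    their historical baseline, using a simple z-score heuristic."""
    alarms = []
    for source, count in event_counts.items():
        baseline = history.get(source)
        if not baseline or len(baseline) < 2:
            continue  # not enough data to calibrate "normal" for this source
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid div-by-zero on flat baselines
        score = (count - mean) / stdev
        if score > threshold:
            alarms.append((source, round(score, 1)))
    # rank alarms so analysts see the most anomalous sources first
    return sorted(alarms, key=lambda a: a[1], reverse=True)

# Hypothetical data: login-failure counts per host, per hour
history = {
    "web-01": [12, 15, 11, 14, 13],
    "db-01":  [3, 2, 4, 3, 3],
}
current = {"web-01": 14, "db-01": 250}
print(detect_anomalies(current, history))
```

In this toy run, the normal fluctuation on "web-01" is ignored while the spike on "db-01" surfaces as a single ranked alarm, which is the essence of reducing event volume to actionable items. It also hints at the calibration problem discussed below: the quality of the baseline determines the quality of the alarms.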
At the current state of the art, managed services are an essential component of RTSI. This is because the rapid evolution of threats makes it almost impossible for a single organization to keep up to date, and because of the complexity of the analysis required to distinguish real threats from benign anomalies. This up-to-date knowledge needs to be delivered as part of the RTSI solution.
The volume of threats to IT systems, their potential impact and the difficulty of detecting them are the reasons why real-time security intelligence has become important. However, RTSI technology is at an early stage, and the problem of calibrating normal activity still requires considerable skill. It is important to look for a solution that can easily build on the knowledge and experience of the IT security community, vendors and service providers. End-user organizations should always opt for solutions that include managed services and pre-configured analytics, not just for tools.
KuppingerCole Advisory Note: Real Time Security Intelligence - 71033 provides an in-depth look at this subject.