Blog posts by Alexei Balaganski
The proverbial Computing Troika that KuppingerCole has been writing about for years shows no signs of slowing down. The technological trio of Cloud, Mobile and Social computing, as well as their younger cousin, the Internet of Things, have profoundly changed the way our society works. Modern enterprises were quick to adopt these technologies, which enable great new business models, open up numerous communication paths to their partners and customers, and, last but not least, provide substantial cost savings. We are moving full speed ahead towards the Digital Era, and the future is full of promise. Or is it?
Unfortunately, the Digital Transformation does not only enable a whole range of business prospects; it also exposes the company’s most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of a security perimeter ceases to exist, and traditional security tools cannot keep up with new, sophisticated cyberattack methods.
In recent years, the industry has come up with a new generation of security solutions, which KuppingerCole has dubbed “Real-Time Security Intelligence”. Thanks to a technological breakthrough that finally commoditized Big Data analytics technologies previously affordable only to large corporations, it became possible to collect, store, and analyze huge amounts of security data across multiple sources in real time. Various correlation algorithms have been implemented to find patterns in the data, as well as to detect anomalies, which in most cases indicate some kind of malicious activity.
Such security analytics solutions have been hailed (quite justifiably) by the media as the ultimate solution to most modern cybersecurity problems. Some even go as far as referring to these technologies as “machine learning” or even “artificial intelligence”. It should be noted, however, that detecting patterns and anomalies in data sets has very little to do with true intelligence – in fact, if the “IQ level” of a traditional signature-based antivirus can be compared to that of an insect, then the correlation engine of a modern security analytics solution is about as “smart” as a frog catching flies.
Unfortunately, strong artificial intelligence, comparable in skill and flexibility to a human, is still purely a subject of theoretical academic research. Its practical applications, however, are no longer a science fiction topic. On the contrary, these applied cognitive technologies have been actively developed for quite some time already, and the exponential growth of cloud computing has been a major boost for their further development in recent years. Technologies such as computer vision, speech recognition, natural language processing and machine learning have found practical use in many industries, and cybersecurity is the most recent field where they promise a major breakthrough.
You see, the biggest problem information security is now facing has nothing to do with computers. In fact, the vast majority (over 80%) of security-related information in the world remains completely inaccessible to computers: it exists only in an unstructured form spread across tens of thousands of publications, conference presentations, forensic reports and other sources – spoken, written or visual.
Only a human can read and interpret those data sources, but we do not have nearly enough humans trained as security analysts to cope with the amount of new security information produced daily.
This is where Cognitive Security, a new practical application of existing cognitive technologies, comes into play. A cognitive security solution would be able to utilize natural language processing and machine learning methods to analyze both structured and unstructured security information the way humans do. It would be able to read texts (or even see pictures and listen to speeches) and not just recognize patterns within them, but be able to interpret and organize the information, explain its meaning, postulate hypotheses and provide reasoning based on evidence.
This may feel like science fiction to some, but the first practical cognitive security solutions are already appearing on the market. A major player and one of the pioneers in this field is undoubtedly IBM with their Watson platform. Originally created back in 2005 to compete with human players in the game of Jeopardy, over the years Watson has expanded significantly and found many practical applications in business analytics, government, legal and even healthcare services.
In May 2016, IBM announced Watson for Cyber Security, a completely new field for their natural language processing and machine learning platform. However, IBM is definitely no newcomer to cyber security. In fact, their own X-Force research library is being used as the primary source of security information fed into the specialized instance of the platform running in the cloud. Although the learning process is still in progress, the ultimate goal is to process all of that 80% of security intelligence data and make it available in structured form.
Of course, Watson for Cyber Security will never replace a human security analyst, but that is not its goal. First, making this “dark security data” accessible for automated processing by current security analytics solutions can greatly improve their efficiency, as well as provide additional external threat intelligence. Second, cognitive security would provide analysts with powerful decision support tools, simplifying and speeding up their work and thus reducing the skills gap haunting the security industry today. In the future, the same cognitive technologies may also be applied to a company’s own digital assets to provide better analytics and information protection. Potentially, they may even make developing malware capable of evading detection too costly, thus turning the tide of the ongoing battle against cybercrime.
Last week, Microsoft announced the general availability of the Azure Security Center – the company’s integrated solution for monitoring, threat detection and incident response for Azure cloud resources. Initially announced last year as part of Microsoft’s new cross-company approach to information security, Azure Security Center has been available as a preview version since December 2015. According to Microsoft, the initial release has been used to monitor over 100,000 cloud subscriptions and has identified over 1.5 million vulnerabilities and security threats.
So, what is it all about anyway? In short, Azure Security Center is a security intelligence service built directly into the Azure cloud platform.
- It provides security monitoring and event logging across Azure Cloud Services and Linux-based virtual machines, as well as various partner solutions;
- It enables centralized management of security policies for various resource groups, depending on business requirements or compliance regulations;
- It provides automated recommendations on addressing most common security problems, such as configuring network security groups, installing missing system updates or automatically deploying antimalware, web application firewall or other security tools in your cloud infrastructure;
- It analyzes and correlates various security events in near real-time, fuses them with the latest threat intelligence from its own and third-party security intelligence feeds and generates prioritized security alerts when threats are detected;
- It provides a number of APIs, an interface to Microsoft Power BI and a SIEM connector to access and analyze security events from the Azure cloud using existing tools.
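The last point means that security data collected by the service can be pulled into existing tools over the Azure Resource Manager REST API. As a rough sketch of what such a call looks like, the snippet below builds the request URL for listing security alerts; the `Microsoft.Security/alerts` provider path is real, but the exact `api-version` value is an illustrative assumption and should be checked against the current API reference:

```python
# Sketch: building an Azure Resource Manager request URL for Security Center
# alerts. The api-version below is an assumption for illustration only; the
# actual request would also need an OAuth bearer token in the Authorization
# header.

def alerts_url(subscription_id, api_version="2015-06-01-preview"):
    """Build the ARM endpoint URL for listing security alerts."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Security/alerts"
        f"?api-version={api_version}"
    )

url = alerts_url("00000000-0000-0000-0000-000000000000")
print(url)
```

A SIEM connector would then poll such an endpoint periodically and feed the returned alerts into the on-premises event pipeline.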
In other words, Microsoft Azure Security Center is a full-featured Real-Time Security Intelligence solution “in the cloud, for the cloud”. Sure, other SIEM and security analytics solutions provide integrations with cloud resources as well, but, being a native component of the Azure cloud infrastructure, Microsoft’s own solution has several obvious benefits, such as better integration with other Azure services, more efficient resource utilization and much lower deployment effort.
In fact, there is nothing to deploy at all – one can activate the Security Center directly in the Azure Portal. Moreover, basic security features and partner integrations are available for free; only advanced threat detection (like threat intelligence, behavior analysis, and anomaly detection) is priced per monitored resource.
With Azure Security Center now available for all Azure subscribers, offering new partner integrations (for example, vulnerability assessment by companies like Qualys) and new threat detection algorithms, there is really no reason why you should not immediately turn it on for your subscription. Even with the basic free functions, it provides a useful layer of security for the cloud infrastructure, but with the full range of behavior-based and anomaly-detection algorithms and a rich set of integration options, Azure Security Center can serve either as the core of your cloud security platform or as a means of extending your existing SIEM-based security operations center to the Azure cloud.
A couple of weeks ago, just as we were busy running our European Identity & Cloud Conference, we got news from IBM announcing the company’s foray into the area of Cognitive Security. And although I have yet to see their solution in action (a closed beta starts this summer), I have to admit I rarely feel so excited about news from the IT industry.
First of all, a quick reminder: the term “cognitive computing” broadly describes technologies based on machine learning and natural language processing that mimic the functions of the human brain. Such systems are able to analyze vast amounts of unstructured data usually inaccessible to traditional computing platforms and not just search for answers, but create hypotheses, perform reasoning and support human decision making. This is really the closest we have come to Artificial Intelligence as seen in science fiction movies.
Although the exact definition of the term still causes much debate among scientists and marketing specialists around the world, cognitive computing solutions in the form of specialized hardware and software platforms have existed for quite some time, and the exponential growth of cloud computing has been a big boost for their further development. In fact, IBM has always been one of the leading players in this field with their Watson platform for natural language processing and machine learning.
IBM Watson was initially conceived in 2005 as a challenge to beat human players in the game of Jeopardy, and its eventual victory in a 2011 match is probably its best publicized achievement, but the platform has been used for a number of more practical applications for years, including business analytics, healthcare, legal and government services. The company continues to build an entire ecosystem around the platform, partnering with numerous companies to develop new solutions that depend on unstructured data analysis, understanding natural language and complex reasoning.
In hindsight, the decision to apply Watson’s cognitive capabilities to cyber security seems completely reasonable. After all, with their QRadar Security Intelligence Platform, IBM is also one of the biggest players in this market, and expanding its scope to incorporate huge amounts of unstructured security intelligence makes a lot of sense. By tapping into various sources like analyst publications, conference presentations, forensic reports, blogs and so on, cognitive technology will provide security analysts with powerful new tools to support and augment their decision making. By providing access to the collective knowledge from tens of thousands of sources, constantly adapted and updated with the newest security intelligence, Watson for Cyber Security is supposed to solve the biggest problem the IT security industry is currently facing – a dramatic lack of skilled workforce to cope with the ever-growing number of security events.
Naturally, the primary source of knowledge for Watson is IBM’s own X-Force research library. However, the company is now teaming up with multiple universities to expand the amount of collected security intelligence fed into the specialized Watson instance running in the cloud. The ultimate goal is to unlock the estimated 80% of all security intelligence data that is currently available only in unstructured form.
It should be clear, of course, that this training process is still a work in progress and by definition will never end. There are also some issues to be solved, such as obvious concerns about privacy and data protection. Finally, it’s still not clear whether this new area of application will generate any substantial revenue for the company. But I’m very much looking forward to seeing Watson for Cyber Security in action!
By the way, I was somewhat disappointed to find out that Watson wasn’t actually named after Sherlock Holmes’ famous friend and assistant, but in fact after IBM’s first CEO Thomas Watson. Still, the parallels with “The Adventure of the Empty House” are too obvious to ignore :)
Yesterday at the RSA Conference, IBM officially confirmed what had already been a rumor for some time – the company is planning to acquire Resilient Systems for an undisclosed amount.
Resilient Systems, a relatively small privately held company based in Cambridge, MA, is well known for its Incident Response Platform, a leading solution for orchestrating and automating incident response processes. With the number of security breaches steadily growing, the focus within the IT security industry is currently shifting more and more from detection and prevention towards managing the consequences of an attack that has already happened. Such an incident response solution can provide a company with a predefined strategy for responding to various types of attacks, tailored to specific laws and industry regulations. It would then support the IT department at every step of the process, helping to get the affected infrastructure back online, address privacy concerns, solve organizational and legal issues and so on.
Despite being on the market for less than 5 years, Resilient Systems has already become a leading player in this segment, with their IRP solution being used by a variety of clients in all verticals, from mid-size businesses to Fortune 500 companies. Among other features, the product is known for its integration with multiple leading security solutions. In fact, Resilient Systems has been IBM’s partner for some time, integrating their product with IBM’s QRadar.
So, in hindsight, the announcement doesn’t really come as a big surprise. For IBM Security, this acquisition means not just incorporating a leading incident response solution into their cyber security portfolio, but also hiring a team of about 100 security experts, including the venerable Bruce Schneier, who currently serves as Resilient Systems’ CTO. What’s in the deal for Resilient Systems is not as easy to say, since the financial details are not disclosed, but we can definitely be sure that gaining access to IBM’s vast partner network opens up a lot of interesting business prospects.
By adding the new Incident Response capabilities to their existing QRadar security intelligence solution and X-Force Exchange threat intelligence platform, IBM is hoping to become the world’s first vendor with a fully integrated platform for security operations and response. In the same press release, the company has already announced their new IBM X-Force Incident Response Services.
With RSA kicking off this week, security experts from around the world are getting ready for a flurry of announcements from security vendors. Last Friday, it was Microsoft’s turn, and the company’s CISO Bret Arsenault publicly announced some interesting news. The motto of the announcement is “Enterprise security for our mobile-first, cloud-first world”, and it was all about unifying several key components, such as real-time predictive intelligence, correlating security data with threat intelligence data and, last but not least, collaboration with the industry and partners to provide a unified and agile security platform that can protect against, detect and respond to the numerous security risks out there. After the initial announcement last November, the company is ready to deliver the first concrete products and services developed around this concept.
Perhaps the most important and yet least surprising announced product is Microsoft Cloud App Security. Ever since the company acquired Adallom, a well-known cloud application security vendor, analysts have been waiting for Microsoft to integrate this technology into their products. With this product, Microsoft’s customers are promised the same level of visibility and control over their cloud applications as they are used to with their on-premises infrastructures. By combining proven underlying technology from Adallom with a large number of integrations with popular cloud services like Box, ServiceNow, Salesforce and naturally Office 365, and by leveraging the threat intelligence collected from the world’s largest identity management service, Microsoft has every chance of becoming an important player in the rapidly growing CASB (Cloud Access Security Broker) market, compensating for its relatively late arrival.
Cloud App Security will become generally available as a standalone product (or as a part of the Enterprise Mobility Suite) in April 2016. Much more interesting however is the announcement that this technology will also power new security management capabilities of Office 365 and will eventually be available to all existing Office 365 customers. With the release planned for Q3 2016, we should expect functions like advanced security alerts, cloud app discovery and permissions management for 3rd party cloud services integrated directly into the platform.
Another major announcement is the public preview of Azure Active Directory Identity Protection service. With this service, Microsoft is tapping into the vast amount of threat intelligence collected from their Azure Active Directory infrastructure and using machine learning algorithms to identify brute force attacks, leaked credentials and various types of anomalies in any applications working with Azure AD. Besides real-time detection, customers will be able to get remediation recommendations or even define their own risk-based policies for automated identity protection. In other words, what we have here is a classic example of a specialized Real Time Security Intelligence solution!
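Microsoft has not published the internals of these machine learning algorithms, but the simplest of the listed detections, brute force attacks, can be illustrated with a toy sliding-window counter. Everything below (the thresholds, the function names) is a hypothetical simplification for illustration; the real service combines many more signals, such as leaked credentials and sign-in anomalies:

```python
# Toy illustration (NOT Microsoft's actual algorithm): flag a possible
# brute-force attack when an account accumulates too many failed sign-ins
# within a short sliding time window.
from collections import deque

def make_detector(max_failures=10, window_seconds=60):
    failures = {}  # account -> deque of failure timestamps
    def record_failure(account, timestamp):
        q = failures.setdefault(account, deque())
        q.append(timestamp)
        # drop failures that fell out of the sliding window
        while q and timestamp - q[0] > window_seconds:
            q.popleft()
        return len(q) > max_failures  # True = raise an alert
    return record_failure

detect = make_detector(max_failures=3, window_seconds=60)
# five failed sign-ins for the same account within 20 seconds
alerts = [detect("alice", t) for t in (0, 5, 10, 15, 20)]
print(alerts)  # [False, False, False, True, True]
```

The risk-based policies mentioned above would then map such an alert to an automated response, for example forcing a password reset or step-up authentication.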
Other announced additions to Microsoft’s secure platform include, for example, the Customer Lockbox feature for SharePoint Online and OneDrive for Business, which provides cloud service customers with complete and explicit control over privileged access to their data by Microsoft’s support engineers. Combining technical and organizational measures, this feature is aimed at improving trust between Microsoft as a cloud service provider and its customers, which we at KuppingerCole see as one of the critical aspects of Cloud Provider Assurance.
Additionally, numerous improvements in security management and reporting have been announced in Azure Security Center. These include integrations with multiple third party security products (nextgen firewalls and web application firewalls) from vendors like Cisco, Check Point, CloudFlare, Imperva, etc.
To summarize, Microsoft is again showing that it is able to consistently follow its long-term strategy, working in several directions in parallel and keeping its new products and services synchronized and integrated into a holistic security platform. Of course, it would have been interesting to learn more about 3rd party integrations and partnerships, especially with various industry alliances. However, we can be sure that this wasn’t the last announcement from Microsoft, so we’re staying tuned for more.
After an “extended holiday season” (which for me included spending a vacation in Siberia and then desperately trying to get back into shape) it’s finally time to resume blogging. And the topic for today is the cloud platform for IoT services from AWS, which went out of beta in December. Ok, I know it’s been a month already, but better late than never, right?
As already mentioned earlier, the very definition of the Internet of Things is far too broad, and people tend to lump many different types of devices together under this term. So, if your idea of the Internet of Things means controlling your thermostat or your car from your mobile phone, the new service from AWS is probably not what you need. If, however, your IoT includes thousands or even millions of sensors generating massive amounts of data that need to be collected, processed by complex rules and finally stored somewhere, then look no further, especially if you already have your backend services in the AWS cloud.
In fact, with AWS being the largest cloud provider, it’s safe to assume that its backend services have already been used for quite a few IoT projects. Until now, however, such projects had to rely on third-party middleware for connecting their “things” to AWS services. Now the company has closed the gap by offering its own managed platform for interacting with IoT devices and processing the data collected from them. Typically for AWS, the solution follows a no-frills, no-nonsense approach, offering native integrations with existing AWS services, a rich set of SDKs and development tools and aggressive pricing. In addition, AWS is bringing in a number of hardware vendors with starter kits that can help quickly implement a prototype for a new IoT project. And, of course, with the amount of computing resources at hand, they can safely claim to be able to manage billions of devices and trillions of messages.
The main components of the new platform are the following:
The Device Gateway supports low-latency bi-directional communications between IoT devices and cloud backends. AWS provides support for both standard HTTP and much more resource-efficient MQTT messaging protocols, both secured by TLS. Strong authentication and fine-grained authorization are provided by familiar AWS IAM services, with a number of simplified APIs available.
The Device Registry keeps track of all devices currently or potentially connected to the AWS IoT infrastructure. It provides various management functions like support and maintenance or firmware distribution. Besides that, the registry maintains Device Shadows – virtual representations of IoT devices, which may be only intermittently connected to the Internet. This functionality allows cloud and mobile apps to access all devices using a universal API, masking all the underlying communication and connectivity issues.
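The core idea behind a Device Shadow can be sketched in a few lines: the service stores the device’s last “reported” state alongside the application’s “desired” state, and computes a delta that the device applies when it next connects. This is a deliberately simplified, flat-dictionary stand-in for the JSON shadow documents the actual service exchanges:

```python
# Simplified sketch of device-shadow delta computation: which desired
# settings has the (possibly offline) device not yet applied?
def shadow_delta(desired, reported):
    """Return the keys whose desired value differs from the reported one."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# last state the device reported before going offline
reported = {"power": "on", "brightness": 40}
# state a mobile app requested while the device was unreachable
desired  = {"power": "on", "brightness": 80}

print(shadow_delta(desired, reported))  # {'brightness': 80}
```

The app always talks to the shadow, never to the device directly, which is exactly what masks the intermittent connectivity mentioned above.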
The Rules Engine enables continuous processing of data sent by IoT devices. It supports a large number of rules for filtering and routing the data to AWS services like Lambda, DynamoDB or S3 for processing, analytics and storage. It can also apply various transformations on the fly, including math, string, crypto and other operations or even call external API endpoints.
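Rules are expressed in an SQL-like syntax over MQTT topics, along the lines of `SELECT * FROM 'sensors/+/temperature' WHERE temp > 50`. As a rough stand-in for what such a filtering rule does before routing matching messages to, say, Lambda, consider this hypothetical snippet (the field names and threshold are made up for illustration):

```python
# Rough simulation of an AWS IoT rule's WHERE clause: only messages that
# match the condition get routed onward to a backend service.
def rule_filter(messages, threshold=50):
    """Keep only readings exceeding the temperature threshold."""
    return [m for m in messages if m["temp"] > threshold]

messages = [
    {"device": "sensor-1", "temp": 22},  # normal reading, dropped
    {"device": "sensor-2", "temp": 73},  # overheating, routed onward
]
print(rule_filter(messages))  # [{'device': 'sensor-2', 'temp': 73}]
```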
A number of SDKs are provided, including a C SDK for embedded systems, a Node.js SDK for Linux, an Arduino library and mobile SDKs for iOS and Android. Combined with a number of “official” hardware kits available to play with, this ensures that developers can quickly start working on an IoT project of almost any kind.
Obviously, one has to mention that Amazon isn’t the first cloud provider to offer an IoT solution – Microsoft announced its Azure IoT Suite earlier in 2015, and IBM has its own Internet of Things Foundation program. However, each vendor has a unique approach to addressing the various IoT integration issues. The new solution from AWS, with its strong focus on existing standard protocols and unique features like device shadows, not only looks compelling to existing AWS customers, but will definitely kickstart quite a few new large-scale IoT projects. On the Amazon cloud, of course.
With the amount of digital assets a modern company has to deal with growing exponentially, the ability to access them at any time from any place, across various devices and platforms, has become a critical factor for business success. And this concerns not just employees – to stay competitive, modern businesses must be increasingly connected to their business partners, suppliers, current and future customers and even smart devices (or things). New digital businesses therefore have to be agile and connected.
Unsurprisingly, the demand for solutions that provide strongly protected storage, fine-grained access control and secure sharing of sensitive digital information is extremely high nowadays, with vendors rushing to bring their various solutions to the market. Of course, no single information sharing solution can possibly address all the different and often conflicting requirements of different organizations and industries, and the sheer number and diversity of such solutions is a strong indicator of this. Vendors may decide to support just certain types of storage or document formats, concentrate on solving specific pain points shared by many companies, like enabling mobile access, or design their solutions for specific verticals only.
The traditional approach to securing sensitive information is storing it in a secured repository, on-premise or in the cloud. By combining strong encryption with customer-managed encryption keys or, in the most extreme cases, even implementing the Zero Knowledge Encryption principle, vendors are able to address even the strictest security and compliance requirements. However, as soon as a document leaves the repository, traditional solutions are no longer able to ensure its integrity or prevent unauthorized access to it.
Information Rights Management (IRM) offers a completely different, holistic approach towards secure information sharing. Evolving from earlier Digital Rights Management technologies, the underlying principle behind IRM is data-centric security. Essentially, each document is wrapped in a tiny secured container and has its own access policy embedded directly in it. Each time an application needs to open, modify or otherwise access the document, it needs to validate user permissions with a central authority. If those permissions are changed or revoked, this will be immediately applied to the document regardless of its current location. The central IRM authority also maintains a complete audit trail of document accesses.
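The principle described above can be made concrete with a deliberately insecure toy model (the class and function names are invented for illustration, and the base64 “wrapping” stands in for real encryption): the document travels as an opaque container carrying only a policy ID, and every access goes back to a central authority that checks permissions and records an audit trail.

```python
# Toy model of data-centric security, NOT a real IRM implementation:
# the container is opaque, the policy lives centrally, and revocation
# takes effect immediately wherever the document currently resides.
import base64

class CentralAuthority:
    def __init__(self):
        self.policies = {}   # policy_id -> set of authorized users
        self.audit_log = []  # complete trail of access attempts

    def authorize(self, policy_id, user):
        allowed = user in self.policies.get(policy_id, set())
        self.audit_log.append((policy_id, user, allowed))
        return allowed

def wrap(content, policy_id):
    # placeholder for real encryption: just an opaque encoding
    return {"policy": policy_id, "blob": base64.b64encode(content.encode())}

def open_document(container, user, authority):
    if not authority.authorize(container["policy"], user):
        raise PermissionError("access revoked or denied")
    return base64.b64decode(container["blob"]).decode()

authority = CentralAuthority()
authority.policies["p1"] = {"alice"}
doc = wrap("Q3 forecast", "p1")
print(open_document(doc, "alice", authority))  # 'Q3 forecast'
authority.policies["p1"].discard("alice")      # central revocation:
# from now on, opening the same container raises PermissionError,
# no matter where the file has been copied to in the meantime.
```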
Thus, IRM is the only approach that can protect sensitive data at rest, in motion, and in use. In the post-firewall era, this approach is fundamentally more future-proof, flexible and secure than any combination of separate technologies addressing different stages of the information lifecycle. However, it has one fatal flaw: IRM only works without impeding productivity if your applications support it. Although IRM solutions have come a long way from complicated on-premise deployments towards completely managed cloud-based services, their adoption rate is still quite low. Probably the biggest reason for that is the lack of interoperability between different IRM implementations, but arguably more harmful is the general lack of awareness that such solutions even exist! Recently, however, the situation has changed, with several notable vendors increasing their efforts to market IRM-based solutions.
One of the pioneers and certainly the largest such vendor is Microsoft. With the launch of their cloud-based Azure Rights Management services in 2014, Microsoft finally made their IRM solution affordable not just for large enterprises. Naturally, Microsoft’s IRM is natively supported by all Microsoft Office document formats and applications. PDF documents, images or text files are natively supported as well, and generic file encapsulation into a special container format is available for all other document types. Ease of deployment, flexibility and support across various device platforms, on-premise and cloud services make Azure RMS the most comprehensive IRM solution in the market today.
However, other vendors are able to compete in this field quite successfully as well, either by adding IRM functionality to their existing platforms or by concentrating on delivering more secure, more comprehensive or even more convenient solutions that address specific customer needs.
A notable example of the former is Intralinks. In 2014, the company acquired docTrackr, a French vendor with an innovative plugin-free IRM technology. By integrating docTrackr into their secure enterprise collaboration platform VIA, Intralinks is now able to offer seamless document protection and policy management to its existing customers. Another interesting solution is Seclore FileSecure, which provides a universal storage- and transport-neutral IRM extension for existing document repositories.
Among the vendors that offer their own IRM implementations one can name Covertix, which offers a broad portfolio of data protection solutions with a focus on strong encryption and comprehensive access control across multiple platforms and storage services. On the other end of the spectrum one can find vendors like Prot-On, which focus more on ease of use and a seamless experience, providing their own EU-based cloud service to address local privacy regulations.
For more in-depth information about leading vendors and products in the file sharing and collaboration market please refer to KuppingerCole’s Leadership Compass on Secure Information Sharing.
Following the topic of Internet of Things security covered in our latest Analysts’ View newsletter, I’d like to present a perfect example of how IoT device manufacturers blatantly ignore the most basic security best practices in their products. As the Austrian information security company SEC Consult revealed in their report, millions of embedded devices around the world, including routers and modems, IP phones, cameras and other network products, reuse a small number of hardcoded SSH keys and SSL certificates.
According to SEC Consult, they analyzed the firmware images (usually freely available for download from manufacturers’ websites) of over 4,000 devices and were able to extract more than 580 unique private keys. Remember, a private key is the most critical component of any public key infrastructure and, according to the most basic security best practices, has to be protected from falling into the wrong hands by all means available. The researchers then correlated their findings with data from internet-wide scans, again publicly available to anyone interested, and found that a handful of those hardcoded keys are used on over 4 million hosts directly connected to the Internet.
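The correlation step is conceptually simple: fingerprint each host key seen in the scan data and count how many distinct hosts present the same fingerprint. A key shared by many unrelated hosts is almost certainly baked into a firmware image. A minimal sketch with hypothetical sample data (the hosts and key bytes below are invented):

```python
# Sketch of key-reuse detection: count distinct hosts presenting the same
# public-key fingerprint. Scan data is toy/hypothetical for illustration.
import hashlib
from collections import Counter

def fingerprint(public_key_bytes):
    """SHA-256 fingerprint, similar to what modern OpenSSH displays."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# host -> public key presented during the scan (toy data)
scan_results = {
    "203.0.113.1":  b"key-A",
    "203.0.113.2":  b"key-A",  # same firmware image, same baked-in key
    "198.51.100.7": b"key-B",
}

reuse = Counter(fingerprint(k) for k in scan_results.values())
shared = [fp for fp, n in reuse.items() if n > 1]
print(shared)  # fingerprints presented by more than one host
```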
Although similar research has been done before, this time the company was able to name the specific products and vendors responsible, which include both small regional manufacturers and large international companies like Cisco, Huawei or ZyXEL. These devices are deployed by large internet service providers around the world, exposing millions of their subscribers to possible attacks.
One can speculate about the exact reasons for a particular manufacturer to include a hardcoded key in their product, but in the end it all boils down to blindly reusing sample code supplied by the manufacturers of the network chips or boards that power these devices. Whether because of incompetence or pure negligence, these “default” keys and certificates end up included in device firmware images.
Since hackers would have the private keys at hand, they could launch various types of attacks, including impersonation, man-in-the-middle or passive decryption attacks. Although the researchers rightfully point out that exploiting modems or routers from the internet is difficult and mostly limited to “evil ISPs”, one has to realize that SEC Consult’s research has only revealed the tip of the iceberg: their findings do not represent an exceptional case but rather the typical approach of many IoT vendors towards security. As more and more smart devices are deployed everywhere – in hospitals, connected cars and traffic lights or in manufacturing plants and power grids – the risk of exposing these devices to key reuse attacks increases dramatically, along with the severity of the possible consequences.
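Detecting this kind of key reuse in internet-wide scan data boils down to fingerprinting each host’s public key and grouping hosts that share a fingerprint. The following sketch (with hypothetical scan data; it does not reproduce SEC Consult’s methodology) shows the idea:

```python
import hashlib
from collections import defaultdict

def reused_keys(host_keys: dict[str, bytes]) -> dict[str, list[str]]:
    """Group hosts by the SHA-256 fingerprint of their public host key
    and return only fingerprints shared by more than one host."""
    by_fp = defaultdict(list)
    for host, key in host_keys.items():
        fp = hashlib.sha256(key).hexdigest()
        by_fp[fp].append(host)
    # A fingerprint seen on many unrelated hosts signals a hardcoded key
    return {fp: hosts for fp, hosts in by_fp.items() if len(hosts) > 1}
```

Against real scan dumps the same grouping, applied to millions of SSH host keys and TLS certificates, is exactly what surfaces a handful of fingerprints shared by millions of devices.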
So, what can and must be done to prevent these attacks in the future? SEC Consult’s report outlines the steps that vendors and ISPs have to take, and they are pretty obvious. Device vendors have to stop including hardcoded keys in their firmware and generate unique keys on the first boot instead. ISPs should ensure that the devices they install have remote management disabled. End users should change the keys in their devices (which, by the way, requires certain technical skills and is not even permitted on many devices).
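The “generate unique keys on first boot” fix boils down to a simple pattern: check whether a key already exists on the device, and only create one if it doesn’t. The sketch below illustrates the pattern with a raw random secret rather than a real SSH key pair, since actual key generation depends on the device’s crypto library:

```python
import os
from pathlib import Path

def ensure_device_secret(path: str, size: int = 32) -> bytes:
    """Generate a unique per-device secret on first boot; on later
    boots, reuse the one already stored on the device."""
    p = Path(path)
    if p.exists():
        return p.read_bytes()
    secret = os.urandom(size)  # fresh entropy per device, never baked into firmware
    p.write_bytes(secret)
    try:
        p.chmod(0o600)  # restrict access to the device's own processes
    except OSError:
        pass
    return secret
```

On an embedded device the same logic would typically live in an init script calling the platform’s key generator; the crucial property is that the secret material is created on the device itself, so no two units ever share it.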
However, the bigger question isn’t what’s needed to fix the problem, but how to force vendors and internet providers to change their current business processes. They have not cared about security for years – why would they suddenly change their minds and start investing in it? There is no single answer to this question, and in any case a combined effort of government agencies, security experts and the end users themselves is needed to break the current trend. Only when vendors realize that building their products on the Security by Design principle not only saves them from massive fines and legal problems, but in fact makes their products more competitive on the market, can we expect to see positive changes. Until then, IoT security will remain little more than a fictional concept.
As already discussed in one of our earlier newsletters, the Internet of Things as a concept is by no means new – various smart devices capable of communicating with each other and their operators have long been used in manufacturing, the automotive industry, healthcare and even at home. These “Things” range from popular consumer products for home automation to enterprise devices like RFID tags all the way through to industrial sensors controlling critical processes like manufacturing or power generation. There is actually very little they have in common other than their reliance on standard network protocols for communicating over the existing Internet. Oh, and the complete lack of security.
Unfortunately, security has for decades been an afterthought for most embedded hardware vendors. Companies designing consumer products are more interested in bringing them to market as fast as possible, and industrial control system vendors seemingly still live in an alternate universe where industrial networks are isolated from the Internet. In our reality, however, things have already changed dramatically. Simply because of the sheer scale and interoperability (at least at the network protocol level) that define the modern IoT, it introduces a substantial number of new risks and attack surfaces.
First, the vast number of IoT devices out there makes it increasingly difficult not just to control and manage them, but also to update them when a vulnerability is discovered (if the device in question supports updates at all). The proliferation of connected devices also greatly increases the chances for hackers to compromise a less secure device and use it as a foothold for attacking other devices on the network.
Another obvious challenge is that safety becomes much more critical. If a medical device like a pacemaker or an insulin pump is hacked, the patient’s life is at stake, not just their health record. A compromised connected car can cause traffic accidents. An attack on a piece of industrial equipment can cause critical disruptions or lead to industrial disasters (and even if no lives are lost, the financial and legal consequences will be huge anyway).
The identity and privacy implications of IoT proliferation can be massive as well. The information that can be leaked or stolen from unprotected smart sensors is much more sensitive than, say, your email account: health records, location and habits history, home surveillance – all this data has to be protected accordingly. Solving the identity management challenge on a global scale is a separate and very daunting task, which vendors are only beginning to tackle.
However, although security experts have long realized that IoT has no room for weak security, this mindset is yet to catch on among IoT manufacturers. Many of them either have no expertise in security or cannot afford to spend much on it (this is especially true for consumer products built upon existing commodity hardware from third-party manufacturers). The lack of established standards and protocols is another inhibiting factor.
So, where do we even begin to address these problems? On one hand, it seems that IoT device manufacturers are primarily responsible for making their products more secure. Security by Design and Privacy by Design must become mandatory parts of their design processes. Vendors have to incorporate security features into their solutions on all levels, from device firmware to service provider infrastructures to training their employees accordingly. They also must minimize data collection, storing only the information that’s required for their devices to function, and ensure that all applicable privacy regulations are addressed. Finally, they must provide continuous security updates and patches for the whole lifecycle of their products. Obviously, they must be both incentivized by government agencies for complying with these requirements and punished for violating them. They should also look to join various industry groups and technology alliances to get access to the latest standards and best practices.
However, it’s also obvious that we cannot rely on the vendors alone to address this massive and multifaceted problem. Designing a proper security infrastructure for modern “hyperconnected” businesses requires a holistic approach, where various security, privacy-enhancing and identity management solutions operate in concert, orchestrated and monitored from a central management console. The emergence of new standards and open APIs in the IoT field to support such scenarios is therefore critical. Providing flexible identity management and fine-grained access control is especially important here, and many existing IAM tools are yet to be adapted to support the sheer scale and inherently heterogeneous nature of the Internet of Things.
It is also worth stressing that solving the IoT security challenge isn’t limited to addressing technology issues. To fulfill the often conflicting requirements and expectations of all parties involved, a lot of legal and liability issues have to be solved as well. And there are many more parties involved than many expect. For connected vehicles, for example, we have to think not just about the relationship between car manufacturers and drivers, but also about insurance companies, auto mechanics, environmental protection agencies and, of course, the police.
Last but not least, we always have to think about consumer choice and consent. Giving users control over the collection and sharing of their sensitive personal data by IoT devices can be not just a great business enabler for device manufacturers, but also a strong security and privacy-enhancing factor.
In the end, the Internet of Things is here to stay. It provides a great number of new opportunities, but also introduces quite a number of new risks. These risks can only be addressed by the combined effort of IoT device manufacturers, “traditional” IT security and IAM vendors, technology alliances and standards bodies, governments and end users. Only together can we ensure that “Industry 4.0” won’t one day turn into “Skynet 1.0”.
Last week, CA Technologies announced several new products in its API Management portfolio. The announcement was made during the company’s annual CA World event, which took place November 16-20 in Las Vegas. This year, the key topic of the event was the Application Economy, so it is completely unsurprising that API management was a big part of the program. After all, APIs are one of the key technologies driving the “digital transformation”, helping companies stay agile and competitive, enable new business models and open up new communication channels with partners and customers.
Whether companies are leveraging APIs to accelerate their internal application development, expose their business competence to new markets or adopt new technologies like software-defined computing infrastructures, they face a lot of complex challenges and have to rely on third-party solutions to manage their APIs. The API Management market, despite its relatively young age, has matured quickly, and CA Technologies has become one of the leading players in it. In fact, just a few months ago KuppingerCole recognized CA as the overall leader in its Leadership Compass on API Security Management.
However, even a broad range of available solutions for publishing, securing, monitoring or monetizing APIs does not change the fact that before a backend service can be exposed as an API, it has to be implemented – that is, a team of skilled software developers is still required to bring your corporate data or intelligence into the API economy. Although quite a number of approaches exist to make the developer’s job as easy and efficient as possible (sometimes even eliminating the need for a standalone backend, as with the AWS Lambda service), business users are still unable to participate in this process on their own.
Well, apparently, CA is going to change that. The new CA Live API Creator is a solution that aims to eliminate programming from the process of creating data-driven APIs. For a lot of companies, joining the API economy means unlocking their existing data stores and making their enterprise data available for consumption through standard APIs. For these use cases, CA offers a complete solution to create REST endpoints that expose data from multiple SQL and NoSQL data sources using a declarative data model and a graphical point-and-click interface. By eliminating the need to write code or SQL statements manually, the company claims a tenfold time-to-market improvement and 40 times more concise logic rules. Most importantly, however, business users no longer need to involve software developers – the process seems easy and straightforward enough for them to manage on their own.
CA Live API Creator consists of three components:
- Database Explorer, which provides interactive access to the enterprise data across SQL and NoSQL data sources directly from a browser. With this tool, users can not just browse and search, but also manage this information and even create “back office apps” with graphical forms for editing the data across multiple tables.
- API Creator, the actual tool for creating data-driven APIs using a point-and-click GUI. It provides the means for designing data models, defining logical rules, managing access control and so on, all without the need to write application code or SQL statements. It’s worth stressing that it’s not a GUI-based code generator – the solution is based on an object model, which is directly deployed to the API server.
- The aforementioned API Server is responsible for execution of APIs, event processing and other runtime logic. It connects to the existing data sources and serves client requests to REST-based API endpoints.
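To make the “data-driven API” idea concrete, here is a generic standard-library sketch – emphatically not CA’s implementation, whose internals are not public – of what serving a GET request against an exposed table boils down to. The table name `customers` and the `EXPOSED_TABLES` whitelist are hypothetical stand-ins for a declarative data model:

```python
import json
import sqlite3

# Hypothetical declarative model: the tables the API is allowed to expose
EXPOSED_TABLES = {"customers"}

def rest_get(conn: sqlite3.Connection, table: str) -> str:
    """Serve GET /<table> by returning all rows as a JSON array,
    the way a generated data-driven endpoint would."""
    if table not in EXPOSED_TABLES:  # whitelist instead of trusting the URL
        raise LookupError(f"no such resource: {table}")
    conn.row_factory = sqlite3.Row
    rows = conn.execute(f"SELECT * FROM {table}").fetchall()
    return json.dumps([dict(r) for r in rows])

# Demo with an in-memory database standing in for an enterprise data source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])
print(rest_get(conn, "customers"))
```

The point of a product like Live API Creator is that the mapping from data model to endpoint – plus pagination, access control and logic rules – is produced from point-and-click configuration rather than written by hand as above.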
Although the product hasn’t been released yet (it will become available in December), and although it should be clearly understood that it is by nature not a universal solution for all possible API use cases, we can already see a lot of potential. The very idea of eliminating software developers from the API publishing process is pretty groundbreaking, and if CA delivers on its promise to make the tool easy enough for business people, it will become a valuable addition to the company’s already first-class API management portfolio.