After an “extended holiday season” (which for me included spending a vacation in Siberia and then desperately trying to get back into shape) it’s finally time to resume blogging. And the topic for today is the cloud platform for IoT services from AWS, which went out of beta in December. Ok, I know it’s been a month already, but better late than never, right?
As already mentioned earlier, the very definition of the Internet of Things is far too broad, and people tend to lump many different types of devices under this term. So, if your idea of the Internet of Things means controlling your thermostat or your car from your mobile phone, the new service from AWS is probably not what you need. If, however, your IoT includes thousands or even millions of sensors generating massive amounts of data that need to be collected, processed by complex rules and finally stored somewhere, then look no further, especially if you already run your backend services in the AWS cloud.
In fact, with AWS being the largest cloud provider, it’s safe to assume that its backend services have already been used for quite a few IoT projects. Until now, however, such projects had to rely on third-party middleware for connecting their “things” to AWS services. Now the company has closed the gap by offering its own managed platform for interacting with IoT devices and processing data collected from them. As is typical for AWS, the solution follows a no-frills, no-nonsense approach, offering native integrations with their existing services, a rich set of SDKs and development tools, and aggressive pricing. In addition, they are bringing in a number of hardware vendors with starter kits that can help quickly implement a prototype for your new IoT project. And, of course, with the amount of computing resources at hand, they can safely claim to be able to manage billions of devices and trillions of messages.
The main components of the new platform are the following:
The Device Gateway supports low-latency bi-directional communications between IoT devices and cloud backends. AWS provides support for both standard HTTP and much more resource-efficient MQTT messaging protocols, both secured by TLS. Strong authentication and fine-grained authorization are provided by familiar AWS IAM services, with a number of simplified APIs available.
The Device Registry keeps track of all devices currently or potentially connected to the AWS IoT infrastructure. It provides various management functions like support and maintenance or firmware distribution. Besides that, the registry maintains Device Shadows – virtual representations of IoT devices, which may be only intermittently connected to the Internet. This functionality allows cloud and mobile apps to access all devices using a universal API, masking all the underlying communication and connectivity issues.
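To make the device shadow concept concrete, here is a minimal Python sketch of the delta semantics described above: the shadow holds a last-reported state and a desired state, and the delta is whatever the device still needs to apply once it reconnects. The function and field names are hypothetical, not part of the AWS SDK.

```python
# Illustrative sketch of "device shadow" delta semantics: the shadow
# stores the state the device last reported and the state applications
# desire; the delta is every desired setting the device has not yet
# applied. Names are invented for illustration.

def shadow_delta(desired, reported):
    """Return the desired settings the device has not yet applied."""
    return {
        key: value
        for key, value in desired.items()
        if reported.get(key) != value
    }

# A thermostat that is intermittently offline: an app raises the
# target temperature while the device last reported the old value.
desired = {"target_temp": 22, "mode": "heat"}
reported = {"target_temp": 19, "mode": "heat"}

delta = shadow_delta(desired, reported)
print(delta)  # {'target_temp': 22}
```

When the device next connects, it receives only this delta and reports back its new state, which is exactly what lets apps talk to a universal API without caring about connectivity.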
The Rules Engine enables continuous processing of data sent by IoT devices. It supports a large number of rules for filtering and routing the data to AWS services like Lambda, DynamoDB or S3 for processing, analytics and storage. It can also apply various transformations on the fly, including math, string, crypto and other operations or even call external API endpoints.
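The filter-and-route behavior of such a rules engine can be sketched in a few lines. Note that AWS IoT actually expresses rules in a SQL-like syntax; the plain-Python predicate, message fields and stand-in actions below are purely illustrative.

```python
# Minimal sketch of a rules engine's filter-and-route step: if a
# message matches the rule's predicate, it is forwarded to every
# configured action. The rule, field names and actions are invented;
# the real service routes to targets like Lambda, DynamoDB or S3.

def apply_rule(message, predicate, actions):
    """Route a device message to each action if the predicate matches."""
    if predicate(message):
        return [action(message) for action in actions]
    return []

# Rule: forward overheating sensor readings for storage and alerting.
overheated = lambda msg: msg.get("temperature", 0) > 60

store = lambda msg: ("store", msg["device_id"])  # stand-in for DynamoDB
alert = lambda msg: ("alert", msg["device_id"])  # stand-in for Lambda

results = apply_rule(
    {"device_id": "sensor-42", "temperature": 71},
    overheated,
    [store, alert],
)
print(results)  # [('store', 'sensor-42'), ('alert', 'sensor-42')]
```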
A number of SDKs are provided including a C SDK for embedded systems, a node.js SDK for Linux, an Arduino library and mobile SDKs for iOS and Android. Combined with a number of “official” hardware kits available to play with, this ensures that developers can quickly start working on an IoT project of almost any kind.
Obviously, one has to mention that Amazon isn’t the first cloud provider to offer an IoT solution – Microsoft announced its Azure IoT Suite earlier in 2015, and IBM has its own Internet of Things Foundation program. However, each vendor has a unique approach towards addressing various IoT integration issues. The new solution from AWS, with its strong focus on existing standard protocols and unique features like device shadows, not only looks compelling to existing AWS customers, but will surely kickstart quite a few new large-scale IoT projects. On the Amazon cloud, of course.
With the amount of digital assets a modern company has to deal with growing exponentially, the need to access them any time, from any place, across various devices and platforms has become a critical factor for business success. And this is not limited to employees – to stay competitive, modern businesses must be increasingly connected to their business partners, suppliers, current and future customers and even smart devices (or things). New digital businesses therefore have to be agile and connected.
Unsurprisingly, the demand for solutions that provide strongly protected storage, fine-grained access control and secure sharing of sensitive digital information is extremely high nowadays, with vendors rushing to bring their various solutions to the market. Of course, no single information sharing solution can possibly address all the different and often conflicting requirements of different organizations and industries, and the sheer number and diversity of such solutions is a strong indicator of this. Vendors may decide to support just certain types of storage or document formats, concentrate on solving specific pain points shared by many companies, such as enabling mobile access, or design their solutions for specific verticals only.
The traditional approach to securing sensitive information is to store it in a secured repository, on-premises or in the cloud. By combining strong encryption with customer-managed encryption keys, or in the most extreme cases even implementing the Zero Knowledge Encryption principle, vendors are able to address even the strictest security and compliance requirements. However, as soon as a document leaves the repository, traditional solutions are no longer able to ensure its integrity or to prevent unauthorized access to it.
Information Rights Management (IRM) offers a completely different, holistic approach towards secure information sharing. Evolving from earlier Digital Rights Management technologies, the underlying principle behind IRM is data-centric security. Essentially, each document is wrapped in a tiny secured container and has its own access policy embedded directly in it. Each time an application needs to open, modify or otherwise access the document, it needs to validate user permissions with a central authority. If those permissions are changed or revoked, this will be immediately applied to the document regardless of its current location. The central IRM authority also maintains a complete audit trail of document accesses.
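A toy model may help illustrate the data-centric principle: the document only embeds a reference to its policy, while the permissions themselves live with the central authority, so a revocation takes effect wherever the document happens to be. All class and field names below are invented, and a real IRM product would of course also encrypt the payload.

```python
# Toy illustration of data-centric IRM: the document carries only a
# policy reference; permissions are validated centrally on every
# access, and the authority keeps a complete audit trail.

class PolicyAuthority:
    def __init__(self):
        self._policies = {}  # policy id -> set of allowed users
        self.audit_log = []  # complete trail of access checks

    def grant(self, policy_id, user):
        self._policies.setdefault(policy_id, set()).add(user)

    def revoke(self, policy_id, user):
        self._policies.get(policy_id, set()).discard(user)

    def check(self, policy_id, user):
        allowed = user in self._policies.get(policy_id, set())
        self.audit_log.append((policy_id, user, allowed))
        return allowed

def open_document(document, user, authority):
    # Access is validated centrally on every open, regardless of
    # where the document currently resides.
    if not authority.check(document["policy_id"], user):
        raise PermissionError("access revoked or never granted")
    return document["content"]

authority = PolicyAuthority()
authority.grant("contract-7", "alice")
doc = {"policy_id": "contract-7", "content": "confidential terms"}

open_document(doc, "alice", authority)   # succeeds
authority.revoke("contract-7", "alice")  # applies immediately, everywhere
```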
Thus, IRM is the only approach that can protect sensitive data at rest, in motion, and in use. In the post-firewall era, this approach is fundamentally more future-proof, flexible and secure than any combination of separate technologies addressing different stages of the information lifecycle. However, it has one fatal flaw: IRM only works without impeding productivity if your applications support it. Although IRM solutions have come a long way from complicated on-premises deployments to completely managed cloud services, their adoption rate is still quite low. Probably the biggest reason for that is the lack of interoperability between different IRM implementations, but arguably more harmful is the lack of general awareness that such solutions even exist! Recently, however, the situation has changed, with several notable vendors increasing their efforts in marketing IRM-based solutions.
One of the pioneers and certainly the largest such vendor is Microsoft. With the launch of their cloud-based Azure Rights Management services in 2014, Microsoft finally made their IRM solution affordable not just for large enterprises. Naturally, Microsoft’s IRM is natively supported by all Microsoft Office document formats and applications. PDF documents, images or text files are natively supported as well, and generic file encapsulation into a special container format is available for all other document types. Ease of deployment, flexibility and support across various device platforms, on-premise and cloud services make Azure RMS the most comprehensive IRM solution in the market today.
However, other vendors are able to compete in this field quite successfully as well either by adding IRM functionality into their existing platforms or by concentrating on delivering more secure, more comprehensive or even more convenient solutions to address specific customer needs.
A notable example of the former is Intralinks. In 2014, the company acquired docTrackr, a French vendor with an innovative plugin-free IRM technology. By integrating docTrackr into their secure enterprise collaboration platform VIA, Intralinks is now able to offer seamless document protection and policy management to their existing customers. Another interesting solution is Seclore FileSecure, which provides a universal storage- and transport-neutral IRM extension for existing document repositories.
Among the vendors that offer their own IRM implementations, one can name Covertix, which offers a broad portfolio of data protection solutions with a focus on strong encryption and comprehensive access control across multiple platforms and storage services. On the other end of the spectrum one can find vendors like Prot-On, which focus more on ease of use and a seamless experience, providing their own EU-based cloud service to address local privacy regulations.
For more in-depth information about leading vendors and products in the file sharing and collaboration market please refer to KuppingerCole’s Leadership Compass on Secure Information Sharing.
Following the topic of Internet of Things security covered in our latest Analysts’ View newsletter, I’d like to present a perfect example of how IoT device manufacturers are blatantly ignoring the most basic security best practices in their products. As the Austrian information security company SEC Consult revealed in its report, millions of embedded devices around the world, including routers and modems, IP phones, cameras and other network products, are reusing a small number of hardcoded SSH keys and SSL certificates.
According to SEC Consult, they analyzed the firmware images (usually freely available for download from manufacturers’ websites) of over 4000 different devices and were able to extract more than 580 unique private keys. Remember, a private key is the most critical component of any public key infrastructure and, according to the most basic security best practices, has to be protected from falling into the wrong hands by all means available. After that, the researchers correlated their findings with data from internet-wide scans, again publicly available to anyone interested, and found that a handful of those hardcoded keys are used on over 4 million hosts directly connected to the Internet.
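The correlation step can be illustrated with a short sketch: fingerprint each key, then group scanned hosts by the fingerprint of the key they present – any group with more than one host indicates key reuse. The hosts and key bytes below are made up for illustration.

```python
# Sketch of the correlation SEC Consult performed: compute a
# fingerprint for the key each host presents and group hosts by it.
# Hosts sharing one fingerprint all reuse the same hardcoded key.

import hashlib
from collections import defaultdict

def fingerprint(key_bytes):
    """A short SHA-256 fingerprint, similar in spirit to ssh-keygen's."""
    return hashlib.sha256(key_bytes).hexdigest()[:16]

def group_hosts_by_key(host_keys):
    """Map key fingerprint -> list of hosts presenting that key."""
    groups = defaultdict(list)
    for host, key_bytes in host_keys.items():
        groups[fingerprint(key_bytes)].append(host)
    return groups

# Invented scan results: two hosts present the same firmware key.
scan = {
    "203.0.113.10": b"firmware-key-A",
    "203.0.113.11": b"firmware-key-A",  # same hardcoded key reused
    "198.51.100.7": b"firmware-key-B",
}

for fp, hosts in group_hosts_by_key(scan).items():
    if len(hosts) > 1:
        print(f"key {fp} reused by {len(hosts)} hosts")
```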
Although similar research has been done before, this time the company was able to expose the concrete products and vendors responsible, which include both small regional manufacturers and large international companies like Cisco, Huawei or ZyXEL. These devices are deployed by large internet service providers around the world, exposing millions of their subscribers to possible attacks.
One can only speculate why a particular manufacturer would include a hardcoded key in its product, but in the end it all boils down to blindly reusing sample code supplied by the manufacturers of the network chips or boards that power these devices. Whether because of incompetence or pure negligence, these “default” keys and certificates end up included in device firmware images.
Since hackers would have the private keys at hand, they could launch different types of attacks, including impersonation, man-in-the-middle or passive decryption attacks. Although the researchers rightfully point out that exploiting modems or routers from the internet is difficult and mostly limited to “evil ISPs”, one has to realize that SEC Consult’s research has only revealed the tip of the iceberg, and their findings do not present an exceptional case but rather the typical approach of many IoT vendors towards security. As more and more smart devices are deployed everywhere – in hospitals, connected cars and traffic lights, or in manufacturing plants and power grids – the risk of exposing these devices to key reuse attacks increases dramatically, along with the severity of the possible consequences of such attacks.
So, what can and must be done to prevent these attacks in the future? SEC Consult’s report outlines the steps that vendors and ISPs have to take, and they are pretty obvious. Device vendors have to stop including hardcoded keys in their firmware and instead generate unique keys on first boot. ISPs should ensure that the devices they install have remote management disabled. End users should change the keys in their devices (which, by the way, requires certain technical skills and in many devices is not permitted at all).
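The first of those fixes – generate a device-unique key on first boot, then keep it – is conceptually trivial, as the sketch below shows. A random token stands in for a real SSH host key (actual firmware would invoke something like ssh-keygen); the paths are illustrative.

```python
# Sketch of the "unique key on first boot" fix: if no key exists yet,
# generate one per device; on every later boot, reuse it. A random
# token stands in for a real host key here.

import os
import secrets
import tempfile

def ensure_host_key(path):
    """Create a per-device key on first boot; reuse it afterwards."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()           # subsequent boots: keep the key
    key = secrets.token_bytes(32)     # first boot: device-unique key
    with open(path, "wb") as f:
        f.write(key)
    return key

# Simulate two boots of one device, plus a second device:
workdir = tempfile.mkdtemp()
first = ensure_host_key(os.path.join(workdir, "device1.key"))
second = ensure_host_key(os.path.join(workdir, "device1.key"))
other = ensure_host_key(os.path.join(workdir, "device2.key"))
print(first == second, first == other)  # True False
```

Every device ends up with its own key, so extracting one key from one firmware image no longer compromises millions of hosts.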
However, the bigger question isn’t what’s needed to fix the problem, but how to force vendors and internet providers to change their current business processes. They have not cared about security for years – why would they suddenly change their minds and start investing in it? There is no single answer to this question, and in any case a combined effort of government agencies, security experts and the end users themselves is needed to break the current trend. Only when vendors realize that building their products on the Security by Design principle not only saves them from massive fines and legal problems, but in fact makes their products more competitive on the market, can we expect to see positive changes. Until then, IoT security will remain simply a fictional concept.
As already discussed in one of our earlier newsletters, the Internet of Things as a concept is by no means new – various smart devices capable of communicating with each other and their operators have long been used in manufacturing, the automotive industry, healthcare and even at home. These “Things” range from popular consumer products for home automation to enterprise devices like RFID tags all the way through to industrial sensors controlling critical processes like manufacturing or power generation. There is actually very little in common between them other than the reliance on standard network protocols for communicating over the existing Internet. Oh, and the complete lack of security.
Unfortunately, for decades, security for most embedded hardware vendors has always been an afterthought. Companies designing consumer products were more interested in bringing their products to the market as fast as possible and industrial control system vendors seemingly still live in an alternate universe where industrial networks are isolated from the Internet. In our reality, however, things have already changed dramatically. Simply because of the sheer scale and interoperability (at least on the network protocol level) that define modern IoT, it introduces a substantial number of new risks and attack surfaces.
First, the vast number of IoT devices out there makes it increasingly difficult not just to control and manage them, but also to update them if a vulnerability is discovered (if the device in question supports updates at all). Also, proliferation of connected devices greatly increases the chances for hackers to compromise a less reliable device and use it to navigate around the network to attack other devices.
Another obvious challenge is that the safety issue becomes much more critical. If a medical device like a pacemaker or an insulin pump is hacked, a patient’s life is at stake, not just his health record. A compromised connected car can cause traffic accidents. An attack on a piece of industrial equipment can cause critical disruptions or lead to industrial disasters (and even if no lives are lost, financial and legal consequences will be huge anyway).
Identity and privacy implications of the IoT proliferation can be massive as well. The information that can be leaked or stolen from unprotected smart sensors is much more sensitive than, say, your email account. Health records, location and habits history, home surveillance – all this data has to be protected accordingly. Solving the identity management challenge on the global scale is a separate and very daunting task, which vendors are only beginning to tackle.
However, although security experts have long realized that IoT has no room for weak security, this mindset is yet to catch on among the IoT manufacturers. Many of them either have no expertise in security or cannot afford spending much on it (this is especially true for consumer products built upon existing commodity hardware from third party manufacturers). Lack of established standards and protocols is another inhibiting factor.
So, where do we even begin to address these problems? On one hand, it seems that IoT device manufacturers are primarily responsible for making their products more secure. Security by Design and Privacy by Design must become mandatory parts of their design processes. Vendors have to incorporate security features into their solutions on all levels, from device firmware to service provider infrastructures to training their employees accordingly. They also must minimize data collection, store only the information that’s required for their devices to function, and ensure that all applicable privacy regulations are addressed. Finally, they must provide continuous security updates and patches for the whole lifecycle of their products. Obviously, they must be both incentivized by government agencies for complying with these requirements and punished for violating them. They should also look to join various industry groups and technology alliances to get access to the latest standards and best practices.
However, it’s also obvious that we cannot rely on the vendors alone to address this massive and multifaceted problem. Designing a proper security infrastructure for modern “hyperconnected” businesses requires a holistic approach, where various security, privacy-enhancing and identity management solutions are operating in accord, orchestrated and monitored from a central management console. Emergence of new standards and open APIs in the IoT field to support such scenarios is therefore critical. Providing flexible identity management and fine-grained access control is especially important here, and many existing IAM tools are yet to be adapted to support the sheer scale and inherently heterogeneous nature of the Internet of Things.
It is also worth stressing that solving the IoT security challenge isn’t limited by addressing technology issues. To fulfill the often conflicting requirements and expectations of all parties involved, a lot of legal and liability issues have to be solved as well. And there are many more parties involved than many expect. For connected vehicles, for example, we have to think not just about relationships between car manufacturers and drivers, but also about insurance companies, auto mechanics, environmental protection agencies and, of course, the police.
Last but not least, we always have to think about consumers’ choice and consent. Giving users control over the collection and sharing of their sensitive personal data by IoT devices can be not just a great business enabler for device manufacturers, but also a strong security and privacy-enhancing factor.
In the end, the Internet of Things is here to stay. It provides a great number of new opportunities, but introduces quite a number of new risks. These risks can only be addressed by the combined effort of IoT device manufacturers, “traditional” IT security and IAM vendors, technology alliances and standards bodies, governments and end users. Only together we can ensure that “Industry 4.0” won’t one day turn into “Skynet 1.0”.
Last week, CA Technologies announced several new products in their API Management portfolio. The announcement was made during their annual CA World event, which took place November 16-20 in Las Vegas. This year, the key topic of the event was the Application Economy, so it is completely unsurprising that API management was a big part of the program. After all, APIs are one of the key technologies driving the “digital transformation”, helping companies stay agile and competitive, enable new business models and open up new communication channels with partners and customers.
Whether companies are leveraging APIs to accelerate their internal application development, expose their business competence to new markets or adopt new technologies like software-defined computing infrastructures, they are facing a lot of complex challenges and have to rely on third-party solutions to manage their APIs. The API Management market, despite its relatively young age, has matured quickly, and CA Technologies has become one of the leading players there. In fact, just a few months ago KuppingerCole recognized CA as the overall leader in the Leadership Compass on API Security Management.
However, even a broad range of available solutions for publishing, securing, monitoring or monetizing APIs does not change the fact that before a backend service can be exposed as an API, it has to be implemented – that is, a team of skilled software developers is still required to bring your corporate data or intelligence into the API economy. Although quite a number of approaches exist to make the developer’s job as easy and efficient as possible (sometimes even eliminating the need for a standalone backend, as with the AWS Lambda service), business users are still unable to participate in this process on their own.
Well, apparently, CA is going to change that. The new CA Live API Creator is a solution aimed at eliminating programming from the process of creating data-driven APIs. For a lot of companies, joining the API economy means the need to unlock their existing data stores and make their enterprise data available for consumption through standard APIs. For these use cases, CA offers a complete solution to create REST endpoints that expose data from multiple SQL and NoSQL data sources using a declarative data model and a graphical point-and-click interface. By eliminating the need to write code or SQL statements manually, the company claims a tenfold time-to-market improvement and 40 times more concise logic rules. Most importantly, however, business users no longer need to involve software developers – the process seems to be easy and straightforward enough for them to manage on their own.
CA Live API Creator consists of three components:
- Database Explorer, which provides interactive access to the enterprise data across SQL and NoSQL data sources directly from a browser. With this tool, users can not just browse and search, but also manage this information and even create “back office apps” with graphical forms for editing the data across multiple tables.
- API Creator, the actual tool for creating data-driven APIs using a point-and-click GUI. It provides the means for designing data models, defining logical rules, managing access control and so on, all without the need to write application code or SQL statements. It’s worth stressing that it’s not a GUI-based code generator – the solution is based on an object model, which is directly deployed to the API server.
- The aforementioned API Server is responsible for execution of APIs, event processing and other runtime logic. It connects to the existing data sources and serves client requests to REST-based API endpoints.
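To illustrate the general idea behind such declarative API creation (this is not CA’s actual implementation – the model format and function below are invented), consider how conventional REST routes can be derived mechanically from a data model described as plain data, instead of being hand-coded:

```python
# Sketch of declarative, data-driven API creation: entities are
# described as data, and the REST endpoints are derived from the
# model rather than written by a developer.

def routes_from_model(model):
    """Derive conventional CRUD routes for each entity in the model."""
    routes = []
    for entity in model:
        routes.append(("GET",    f"/{entity}"))        # list
        routes.append(("POST",   f"/{entity}"))        # create
        routes.append(("GET",    f"/{entity}/{{id}}")) # read one
        routes.append(("PUT",    f"/{entity}/{{id}}")) # update
        routes.append(("DELETE", f"/{entity}/{{id}}")) # delete
    return routes

# A hypothetical declarative model of two enterprise tables:
model = {
    "customers": ["id", "name", "email"],
    "orders": ["id", "customer_id", "total"],
}

for method, path in routes_from_model(model):
    print(method, path)
```

Changing the model changes the API – which is essentially the promise that lets business users skip the software developers.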
Although the product hasn’t been released yet (it will become available in December), and although it should be clearly understood that it is by nature not a universal solution for all possible API use cases, we can already see a lot of potential. The very idea of eliminating software developers from the API publishing process is pretty groundbreaking, and if CA delivers on its promise to make the tool easy enough for business people, it will become a valuable addition to the company’s already first-class API management portfolio.
With the ever-growing number of new security threats and continued deterioration of traditional security perimeters, demand for new security analytics tools that can detect those threats in real time is growing rapidly. Real-Time Security Intelligence solutions are going to redefine the way existing SIEM tools are working and finally provide organizations with clearly ranked actionable items and highly automated remediation workflows.
Various market analysts predict that security analytics solutions will grow into a multibillion market within the next five years. Many vendors, big and small, are now rushing to bring their products to this market in anticipation of its potential. However, the market is still far from reaching the stage of maturity. First, the underlying technologies themselves have not reached full maturity yet, with areas like machine learning and threat intelligence still under constant development. Second, very few vendors possess enough intellectual property or resources to integrate all these technologies into a single universal solution.
In a sense, the RTSI segment is the frontier of the overall market for information security solutions. When selecting the tools most appropriate for their requirements, customers therefore have to be especially careful and should not take vendors’ claims for granted. Support for different data sources, the scope of anomaly detection and general usability may vary significantly.
Although we should expect that in a few years, the market will settle and the broad range of products with various scopes of functionality available today will eventually converge to a reasonable number, today we are still far from that. While some vendors are deciding for evolutionary development of their existing products, others opt for strategic acquisitions. At the same time, smaller companies or even startups are bringing their niche products to the market, aiming for customers looking for point solutions for their most critical problems. The resulting multitude of solutions makes them quite difficult to compare and even harder to predict in which direction the market will evolve. We can however name a few notable vendors from different strata of the RTSI market to at least give you an idea where to start looking.
First, large vendors currently offering “traditional” SIEM solutions are obviously interested in bringing their products up to date with the latest technological developments. This includes IBM Security with their QRadar SIEM and Guardium products with significantly improved analytics capabilities, the RSA Security Analytics platform, NetIQ Sentinel, and smaller vendors like Securonix or LogRhythm.
Another class of vendors are companies coming from the field of cybersecurity. Their products focus more on the detection and prevention of external and internal threats, and by integrating big data analytics and their own or third-party sources of threat intelligence, they naturally evolve into RTSI solutions that are leaner and easier to deploy than traditional SIEMs and are targeted at smaller organizations. Notable examples here are CyberArk with Privileged Threat Analytics as a part of their Privileged Account Security solution, Hexis Cyber Solutions with their HawkEye G and AP analytics platforms, or AlienVault with their Unified Security Management offering. Another important, yet much less represented aspect of security intelligence is user behavior analytics, with vendors like BalaBit, which recently added the Blindspotter tool to their portfolio, or Gurucul, which provides a number of specialized analytics solutions in that area.
Besides the bigger vendors, there are numerous startups whose products usually concentrate on a single source of analytics information, such as network traffic analysis, endpoint security or mobile security analytics. Their solutions are usually targeted at small and medium businesses and, although limited in functional scope, rely more on ease of deployment, simplicity of the user interface and quality of support service to win potential customers. For small companies without sufficient security budgets or expert teams, these products can be a blessing, because they quickly address their most critical security problems. To name just a few vendors here: Seculert with their cloud-based analytics platform, Cybereason with an unorthodox approach towards endpoint security analytics, Cynet with their rapidly deployed integrated solution, Logtrust with a focus on log analysis, or Fortscale with a cloud-based solution for detecting malicious users.
Surely, such a large number of different solutions makes the RTSI market quite difficult to analyze and predict. On the other hand, almost any company will probably be able to find a product that’s tailored specifically to their requirements. It’s vital, however, that they look for complete solutions with managed services and quality support, not just for another set of tools.
When I first read about the newly discovered kind of OS X and iOS malware called XcodeGhost, quite frankly, the first thing that came to my mind was: “That’s the Albanian virus!” In case you don’t remember the original reference, here’s what it looks like:
I can vividly imagine a conversation among hackers, which would go like this:
- Why do we have to spend so much effort on planting our malware on user devices? Wouldn’t it be great if someone would do it for us?
- Ha-ha, do you mean the Albanian virus? Wait a second, I’ve got an idea!
Unfortunately, it turns out that the situation isn’t quite that funny and in fact poses a few far-reaching questions regarding the current state of iOS security.
What is XcodeGhost anyway? In short, it’s Apple’s official developer platform Xcode for creating OS X and iOS software, repackaged by as yet unknown hackers to include malicious code. Any developer who downloads this installer and uses it to compile an iOS app automatically includes this code in their app, which is then submitted to the App Store and distributed to all users as a normal update. According to Palo Alto Networks, which published a series of reports on XcodeGhost, this malware is able to collect information from mobile devices and send it to a command and control server. It will also try to phish for users’ credentials or steal their passwords from the clipboard.
Still, the most remarkable part is that quite a few legitimate and popular iOS apps from well-known developers (mostly based in China) became infected and were successfully published in the App Store. Although it baffles me why a seasoned developer would download Xcode from a file-sharing site instead of getting it for free directly from Apple, the list of victims includes Tencent, creator of the hugely popular app WeChat, which has over 600 million users. In total, around 40 apps in the App Store have been found to contain the malicious code. Update: another report by FireEye identifies over 4000 affected apps.
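On the developer side, one precaution that would have stopped XcodeGhost is mundane: verify the checksum of any downloaded installer against the value published by the vendor before using it. The file and “official” digest below are invented stand-ins for illustration.

```python
# Sketch of verifying a downloaded installer against a published
# checksum before trusting it. A tampered repackage like XcodeGhost
# would produce a different digest and fail the check.

import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_genuine(path, official_digest):
    """Compare the file's digest to the vendor-published value."""
    return sha256_of(path) == official_digest

# Demo with a stand-in "installer" file and an invented digest:
path = os.path.join(tempfile.mkdtemp(), "Xcode_installer.dmg")
with open(path, "wb") as f:
    f.write(b"installer bytes")

official = hashlib.sha256(b"installer bytes").hexdigest()
print(is_genuine(path, official))  # True
print(is_genuine(path, "0" * 64))  # False
```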
Unfortunately, there is practically nothing that iOS users can do at the moment to prevent this kind of attack. Surely, they should uninstall any of the apps that are known to contain this malicious code, but how many have not yet been discovered? We can also safely assume that other hackers will follow with their own implementations of this new concept or concentrate on attacking other components of the development chain.
Apple’s position on antivirus apps for iOS has been consistent for years: they are unnecessary and create a wrong impression. In fact, none of the apps remaining in the App Store under the name “Antivirus” is actually capable of detecting malware: there are no interfaces in iOS that would allow them to function. In this regard, users’ safety is entirely in Apple’s hands. Even if they upgrade the App Store to include better malware detection in submitted apps and incorporate stronger integrity checks into Xcode, can we be sure that there will be no new outbreaks of this kind of malware? After several major security bugs like Heartbleed or POODLE discovered recently in core infrastructures (and yes, I do consider the App Store a critical infrastructure, too), how many more times does the industry have to fall on its face to finally start thinking “security first”?
Offering Windows 10 as a free upgrade was definitely a smart marketing decision for Microsoft. Everyone is talking about the new Windows and everyone is eager to try it. Many of my friends and colleagues have already installed it, so I didn’t hesitate long myself and upgraded my desktop and laptop at the first opportunity.
Overall, the upgrade experience has been quite smooth. I’m still not sure whether I find all the visual changes in Windows 10 positive, but hey, nothing beats free beer! I also realize that much more has been changed “under the hood”, including the numerous security features Microsoft promised to deliver in its new operating system. Some of those features (like built-in Information Rights Management functions or support for FIDO Alliance specifications for strong authentication) many consumers will probably not notice for a long time, if ever, so that’s a topic for another blog post. There are several things, however, that everyone will face immediately after upgrading, and not everyone will be happy with the way they work.
The most prominent consumer-facing security change in Windows 10 is probably Microsoft’s new browser, Microsoft Edge. Developed as a replacement for the aging Internet Explorer, it contains several new productivity features, but also eliminates quite a few legacy technologies (like ActiveX, browser toolbars and VBScript) that were a constant source of vulnerabilities. Just by switching from Internet Explorer to Edge, users are automatically protected from several major malware vectors. Edge does, however, include built-in PDF and Flash plugins, so it’s potentially still vulnerable to the two biggest known web security risks. It is possible to disable Flash Player under “Advanced settings” in the Edge app, which I would definitely recommend. Unfortunately, the upgrade changes your default browser to Edge, so make sure you change it back to your favorite one, like Chrome or Firefox.
Another major change that in theory should greatly improve Windows security is the new Update service. In Windows 10, users can no longer choose which updates to download – everything is installed automatically. Although this greatly reduces the window of opportunity for an attacker to exploit a known vulnerability, an unfortunate side effect is that your computer will sometimes be rebooted automatically while you’re away from it. To prevent this, choose “Notify to schedule restart” under the advanced update options – this way you’ll at least be able to pick a more convenient time for the reboot. Another potential problem is traffic charges: if you connect to the Internet over a mobile hotspot, updates can quickly eat away your monthly traffic allowance. To prevent this, mark that connection as “metered” under “Advanced options” in the network settings.
Windows Defender, the antivirus program already built into earlier Windows versions, has been updated in a similar spirit: in Windows 10, users can no longer permanently disable it with the standard controls. After 15 minutes of inactivity, antivirus protection is re-enabled automatically. Naturally, this greatly improves anti-malware protection for users who don’t have a third-party antivirus program installed, but quite a few users are unhappy with this kind of “totalitarianism”, so the Internet is full of recipes for blocking the program completely. Needless to say, this is not recommended for most users; the only proper way to disable Windows Defender is to install a third-party product that provides better anti-malware protection. The popular site AV-Comparatives maintains a list of security products compatible with Windows 10.
Since most anti-malware products rely on various low-level OS interfaces to operate securely, they are known to be affected the most by the Windows upgrade procedure. Some will be silently uninstalled during the upgrade; others will simply stop working. Sometimes an active antivirus may even block the upgrade process or cause cryptic error messages. It is therefore important to uninstall anti-malware products before the upgrade and reinstall them afterwards (provided, of course, that they are known to be compatible with the new Windows – otherwise now would be a great time to update or switch your antivirus). This will ensure that the upgrade is smooth and doesn’t leave your computer unprotected.
What a surprising coincidence: on the same day we were preparing our Leadership Compass on API Security Management for publication, Amazon announced its own managed service for creating, publishing and securing APIs – Amazon API Gateway. It’s already too late to make changes to our Leadership Compass, but the new service is still worth a look, hence this blog post.
Typically for Amazon, the solution is fully managed and based on the AWS cloud infrastructure, meaning there is no need to set up any physical or virtual machines or configure resources. It is tightly integrated with many other AWS services and built directly into the central AWS console, so you can start creating and publishing APIs in minutes. If you already have backend services running on AWS infrastructure, such as EC2 or RDS, you can expose them to the world as APIs with literally a few mouse clicks. Even more compelling is the possibility of using the AWS Lambda service to create completely managed “serverless” APIs without any need to worry about resource allocation or scaling.
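To give an idea of how little code such a “serverless” API involves, here is a minimal sketch of a Lambda function that API Gateway could invoke to answer HTTP requests. The event field names follow the common request/response shape for Lambda-backed APIs, but treat the exact structure as illustrative rather than authoritative:

```python
# Minimal sketch of a Lambda handler behind API Gateway.
# The gateway passes the HTTP request as the `event` dict and relays
# the returned status code, headers and body back to the caller.
import json

def lambda_handler(event, context):
    # Read an optional ?name=... query parameter (illustrative field name).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind an API Gateway endpoint, this single function becomes a fully managed API: there are no servers to provision, and AWS handles scaling per request.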
In fact, this seems to be the primary focus of the solution. Although it is possible to manage external API endpoints, this is only mentioned in passing in the announcement: the main reason for releasing the service seems to be to provide a native API management solution for AWS customers, who until now had to manage their APIs themselves or rely on third-party solutions.
Again typically for Amazon, the solution is a lean, no-frills service without all the fancy features of an enterprise API gateway – but one that, being based on the existing AWS infrastructure and heavily integrated with other well-known Amazon services, comes with guaranteed scalability and performance, an extremely low learning curve and, of course, low prices.
For API traffic management, Amazon CloudFront is used, with a special API caching mechanism added for increased performance. This ensures high scalability and availability for the APIs, as well as a reasonable level of network security, such as SSL encryption and DDoS protection. API transformation capabilities, however, are pretty basic: only XML-to-JSON conversion is supported.
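To illustrate what that kind of payload transformation does, here is a toy XML-to-JSON converter. This is emphatically not Amazon’s implementation, just a sketch of the general idea; it handles only simple documents (no attributes, no repeated tags):

```python
# Toy XML-to-JSON transformation: nested elements become nested JSON
# objects, leaf elements become string values. A sketch of the concept
# only -- real gateways handle attributes, arrays and namespaces too.
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_string):
    def convert(element):
        children = list(element)
        if not children:
            return element.text  # leaf node: keep its text content
        return {child.tag: convert(child) for child in children}

    root = ET.fromstring(xml_string)
    return json.dumps({root.tag: convert(root)})
```

For example, `xml_to_json("<user><id>42</id><name>Bob</name></user>")` yields a JSON object with the same nesting, which is the kind of conversion an API gateway applies when a JSON client calls an XML backend.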
To authorize access to APIs, the service integrates with AWS Identity and Access Management, as well as with Amazon Cognito, providing the same IAM capabilities that are available to other AWS services. Again, the gateway provides basic support for OAuth and OpenID Connect, but lacks the broad support for authentication methods typical of enterprise-grade solutions.
Analytics capabilities are provided by Amazon CloudWatch service, meaning that all API statistics are available in the same console as all other AWS services.
There seems to be no developer portal functionality provided with the service at the moment. Although it is possible to create API keys for third-party developers, there is no self-service for that. In this regard, the service does not seem to be very suitable for public APIs.
To summarize: Amazon API Gateway is definitely not a competitor to existing enterprise API gateways, like the products from CA Technologies, Axway or Forum Systems. However, as a native replacement for third-party managed services (3scale, for example), it has a lot of potential, and with Amazon’s aggressive pricing policies it may very well threaten their market positions.
Currently, Amazon API Gateway is available in selected AWS regions, so it’s possible to start testing it today. According to the first reports from developers, there are still some kinks to iron out before the service becomes truly usable, but I’m pretty sure that it will quickly become popular among existing AWS customers and may even be a deciding factor for companies to finally move their backend services to the cloud (Amazon cloud, of course).
With the number of high-profile security breaches growing rapidly, more and more large corporations, media outlets and even government organizations are falling victim to hacking attacks. These attacks are almost always widely publicized, adding insult to already substantial injury for the victims. It’s no surprise that the recent news and developments in the field of cybersecurity are now closely followed and discussed not just by IT experts, but by the general public around the world.
Inevitably, just like any other sensational topic, cybersecurity has attracted politicians. And whenever politics and technology are brought together, the resulting mix of macabre and comedy is so potent that it will make every security expert cringe. Let’s just have a look at a few of the most recent examples.
After the notorious hack of Sony Pictures Entertainment last November, supposedly carried out by a group of hackers demanding that the studio not release a comedy movie about a plot to assassinate Kim Jong-un, United States intelligence agencies were quick to allege that the attack was sponsored by North Korea. For some time, it was hotly debated whether a cyber-attack constitutes an act of war and whether the US should retaliate with real weapons.
Now, every information security expert knows that attributing hacking attacks is a long and painstaking process. In fact, the only known cyber-attack more or less reliably attributed to a state agency so far is Stuxnet, which after several years of research was found to be a product of US and Israeli intelligence teams. In the case of the Sony hack, many security researchers around the world have pointed out that it was most probably an inside job with no relation to North Korea at all. Fortunately, cool heads in the US military have prevailed, but the thought that next time such an attack could be quickly attributed to a nation without nuclear weapons is still quite chilling…
Another repercussion of the Sony hack has been the ongoing debate about the latest cybersecurity “solutions” the US and UK governments came up with this January. Among other crazy ideas, these proposals include introducing mandatory backdoors into every security tool and banning certain types of encryption completely. Needless to say, all this is served under the pretext of fighting terrorism and organized crime, but is in fact aimed at further expanding governments’ capabilities to spy on their own citizens.
Unfortunately, just like any other technology plan devised by politicians, this one will not simply fail to work – it will have disastrous consequences for the whole of society: ruining people’s privacy, making every company’s IT infrastructure more vulnerable to hacking attacks (which would exploit the same government-mandated backdoors), blocking a significant part of academic research, not to mention completely destroying businesses like security software vendors and cloud service providers. Sadly, even in Germany, a country where privacy is considered an almost sacred right, the government is engaged in similar activities.
Speaking of Germany, the latest, somewhat more lighthearted example of politicians’ inability to cope with cybersecurity comes from the Bundestag, the German federal parliament. After another crippling cyber-attack on its network in May, which allowed hackers to steal a large amount of data and led to a partial shutdown of the network, the head of Germany’s Federal Office for Information Security came up with a great idea. Citing concerns about mysterious Russian hackers still lurking in the network, it was announced that the existing infrastructure, including over 20,000 computers, would have to be completely replaced. Leaving aside the obvious question – are the same people who designed the old network really able to come up with a more secure one this time? – one still cannot help wondering whether the millions needed for such an upgrade could be better spent elsewhere. In fact, my first thought after reading the news was of President Erdogan’s new palace in Turkey: apparently, he just had to move into a new 1,150-room presidential palace simply because the old one was infested with cockroaches. It was very heartwarming to hear the same kind of reasoning from a German politician.
Still, any security expert cannot help asking more specific questions. Was there an adequate incident and breach response strategy in place? Was there a security awareness training program for users? Were modern security tools deployed in the network? Was privileged account management fine-grained enough to prevent far-reaching exploitation of hijacked administrator credentials? And, last but not least: does the agency have the budget to hire security experts qualified to run such a critical environment?
Unfortunately, very few details about the breach are currently known, but judging by the outcome of the attack, the answer to most of these questions would be “no”. German government agencies are also known for being quite frugal with regard to IT salaries, so the best experts inevitably go elsewhere.
Another question I cannot help thinking about: what if the hackers utilized one of the zero-day exploits that the German intelligence agency BND is known to have purchased for its own covert operations? That would be a perfect example of “karmic justice”.
Speaking of concrete advice, KuppingerCole provides a lot of relevant research documents. You should probably start with the recently published free Leadership Brief: 10 Security Mistakes That Every CISO Must Avoid and then dive deeper into specific topics like IAM & Privilege Management in the research area of our website. Our live webinars, as well as recordings from past events can also provide a good introduction into relevant security topics. If you are looking for further support, do not hesitate to talk to us directly!