The newly announced AWS offerings of Access Analyzer, Amazon Detective, and AWS Nitro Enclaves, discussed in my last blog post, further round out AWS’s portfolio of security services and tools. That portfolio already includes Amazon GuardDuty, which continuously monitors for threats to accounts and workloads; Amazon Inspector, which assesses application hosts for vulnerabilities and deviations from best practices; Amazon Macie, which uses machine learning to discover, classify, and protect sensitive data; and AWS Security Hub, a unified security and compliance center.
These new security capabilities come hard on the heels of other security-related innovations announced ahead of re:Invent. These include a feature added to AWS IAM that reports the timestamp when role credentials were last used to make an AWS request, so that unused roles can be identified and removed; a native feature called Amazon S3 Block Public Access that helps customers use core services more securely; and the ability to connect Azure Active Directory to AWS Single Sign-On (SSO) once, manage permissions to AWS centrally in AWS SSO, and enable users to sign in with Azure AD to access their assigned AWS accounts and applications.
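The role-last-used timestamp can also be queried programmatically. Below is a minimal sketch, assuming the boto3 SDK and valid AWS credentials, that flags roles unused for more than 90 days; the function names and the 90-day cutoff are illustrative choices, not AWS defaults.

```python
# Sketch: flag IAM roles that have not been used recently, based on the
# role-last-used timestamp AWS added to IAM. Helper names and the cutoff
# are illustrative assumptions, not part of the AWS API.
from datetime import datetime, timedelta, timezone

def stale_roles(roles, max_age_days=90, now=None):
    """Return names of roles whose last-used date is older than the
    cutoff, or that have never been used at all."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for role in roles:
        last_used = role.get("RoleLastUsed", {}).get("LastUsedDate")
        if last_used is None or last_used < cutoff:
            stale.append(role["RoleName"])
    return stale

def stale_roles_in_account(max_age_days=90):
    """Run against a live account (requires boto3 and AWS credentials).
    ListRoles does not include RoleLastUsed, so each role is fetched
    individually via GetRole."""
    import boto3
    iam = boto3.client("iam")
    names = [r["RoleName"]
             for page in iam.get_paginator("list_roles").paginate()
             for r in page["Roles"]]
    roles = [iam.get_role(RoleName=n)["Role"] for n in names]
    return stale_roles(roles, max_age_days)
```

The filtering logic is kept separate from the AWS calls so it can be exercised against sample data before pointing it at a real account.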
Increasing focus on supporting regulatory frameworks
Further underlining AWS’s focus on security and compliance, the company recently announced 12 new partner integrations for its Security Hub service (available in Europe since June 2019) and plans to introduce a set of new features in early 2020, focused on supporting all major regulatory frameworks.
By making it easier for organizations using web services to comply with regulations, AWS once again appears to be shoring up the security reputation of cloud-based services as well as working to make security and compliance prime drivers of cloud migration.
While Security Hub integrates with three third-party Managed Security Services Providers (MSSPs), namely Alert Logic, Armor, and Rackspace, and has more than 25 security partner integrations that enable the sharing of threat intelligence, most of the tools announced at re:Invent are designed to work with other AWS services to protect AWS workloads.
Reality check: IT environments are typically hybrid and multi-cloud
The reality is that most organizations using cloud services have a hybrid environment and are working with multiple cloud providers, which is something AWS should consider supporting with future security-related services.
In the meantime, organizations that have a hybrid, multi-cloud IT environment may want to consider other solutions. At the very least, they should evaluate which set of solutions helps them across their complete IT environment, on premises and across various clouds. Having strong security tools for AWS, for Microsoft Azure, for other clouds, and for on-premises environments helps on each of those platforms, but it lacks support for comprehensive security and integrated incident management spanning the whole IT environment.
KuppingerCole Advisory Services can help in streamlining the security tools portfolio with our “Portfolio Compass” methodology, but also in defining adequate security architectures.
If you want more information about hybrid cloud security, check the Architecture Blueprint "Hybrid Cloud Security" and make sure you visit our 14th European Identity & Cloud Conference. Prime Discount expires by the end of the year, so get your ticket now.
The high proportion of cyber attacks enabled by poor security practices has long raised questions about what it will take to bring about any significant change. Finally, however, there are indications that the threat of substantial fines for contravening the growing number of data protection regulations and negative media exposure associated with breaches are having the desired effect.
High profile data breaches driving industry improvements
The positive effect of high-profile breaches was evident at the Amazon Web Services (AWS) re:Invent conference in Las Vegas, where the cloud services firm made several security-related announcements that were undoubtedly expedited, if not inspired, by the March 2019 Capital One customer data breach. That breach was a textbook example of one enabled by a cloud services customer not meeting its obligations under the shared responsibility model, which states that organizations are responsible for anything they run in the cloud.
While AWS was not compromised and the breach was traced to a misconfiguration of a Web Application Firewall (WAF) and not the underlying cloud infrastructure, AWS has an interest in helping its customers to avoid breaches that inevitably lead to concerns about cloud security.
It is therefore unsurprising that AWS has introduced Access Analyzer, an Identity and Access Management (IAM) capability for Amazon S3 (Simple Storage Service) that makes it easy for customer organizations to review access policies and audit them for unintended access. Users of these services are less likely to suffer data breaches, which reflect badly on all companies involved and on the cloud services industry in general, something AWS is obviously keen to avoid.
Guarding against another Capital One type data breach
Access Analyzer complements preventative controls such as Amazon S3 Block Public Access, which help protect against risks that stem from policy misconfiguration, widely viewed as the single biggest security risk in the context of cloud services. Access Analyzer provides a single view across all access policies to determine whether any have been misconfigured to allow unintended public or cross-account access, a capability that would have helped prevent the Capital One breach.
Technically speaking, Access Analyzer uses a form of mathematical analysis called automated reasoning, which applies logic and mathematical inference to determine all possible access paths allowed by a resource policy to identify any violations of security and governance best practice, including unintended access.
Importantly, Access Analyzer continuously monitors policies for changes, meaning AWS customers no longer need to rely on intermittent manual checks to identify issues as policies are added or updated. It is also interesting to note that Access Analyzer is provided to S3 customers at no additional cost, unlike most of the other security innovations, which represent new revenue streams for AWS.
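Access Analyzer findings can also be consumed programmatically. The sketch below, assuming boto3 and an already-created analyzer, filters findings for active public access; the helper names are illustrative, and the finding fields follow the general shape of the ListFindings response.

```python
# Sketch: surface Access Analyzer findings that indicate public access.
# The pure filtering helper can be tested offline; fetch_findings is the
# live call and requires boto3 plus AWS credentials.

def public_findings(findings):
    """Return (resource, actions) pairs for active findings that grant
    public access."""
    return [(f["resource"], f.get("action", []))
            for f in findings
            if f.get("status") == "ACTIVE" and f.get("isPublic")]

def fetch_findings(analyzer_arn):
    """Live call (requires boto3 and credentials); shown but not invoked.
    The analyzer ARN is whatever your existing analyzer reports."""
    import boto3
    client = boto3.client("accessanalyzer")
    resp = client.list_findings(analyzerArn=analyzer_arn)
    return resp["findings"]
```

In practice the output of `public_findings` would feed an alerting or ticketing workflow, replacing the intermittent manual policy reviews mentioned above.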
On the security front, AWS also announced the Amazon Detective security service, currently available in preview, which is designed to make it easy for customers to conduct faster and more efficient investigations into security issues across their workloads.
In effect, Amazon Detective helps security teams conduct faster and more effective investigations by automatically analyzing and organizing data from AWS CloudTrail and Amazon Virtual Private Cloud (VPC) Flow Logs into a graph model that summarizes resource behaviors and interactions across a customer’s AWS environment.
Amazon Detective’s visualizations are designed to provide the details, context, and guidance that help analysts quickly determine the nature and extent of issues identified by AWS security services like Amazon GuardDuty, Amazon Inspector, Amazon Macie, and AWS Security Hub, enabling security teams to begin remediation quickly. It is essentially an add-on that enables customers (and AWS) to get more value out of existing security services.
Hardware-based data isolation to address data protection regulatory compliance
Another capability due to be available in preview in early 2020 is AWS Nitro Enclaves, which is aimed at making it easy for AWS customers to process highly sensitive data by partitioning compute and memory resources within an instance to create an isolated compute environment.
This is an example of how data protection regulations are driving suppliers to support better practices by customer organizations by creating demand for such services. Although personal data can be protected using encryption, this does not address the risk of insider access to sensitive data as it is being processed by an application.
AWS Nitro Enclaves avoids the complexity and restrictions of either removing most of the functionality that an instance provides for general-purpose computing or creating a separate cluster of instances for processing sensitive data, protected by complicated permissions, highly restrictive networking, and other isolations. Instead, AWS customers can use AWS Nitro Enclaves to create a completely isolated compute environment in which to process highly sensitive data.
Each enclave is an isolated virtual machine with its own kernel, memory, and processor; organizations only need to select an instance type and decide how much CPU and memory to designate to the enclave. There is no persistent storage, no ability to log in to the enclave, and no network connectivity beyond a secure local channel.
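Enclave support is enabled when the parent EC2 instance is launched. The sketch below, assuming boto3, builds the launch parameters with enclave support turned on; the AMI ID is a placeholder and the helper names are illustrative.

```python
# Sketch: launch parameters for an enclave-enabled parent EC2 instance.
# The AMI ID is a placeholder; instance type must be one that supports
# Nitro Enclaves.

def enclave_instance_request(instance_type="m5.xlarge",
                             ami="ami-1234567890abcdef0"):
    """Build RunInstances parameters with enclave support enabled."""
    return {
        "ImageId": ami,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "EnclaveOptions": {"Enabled": True},
    }

def launch_enclave_parent(params):
    """Live call (requires boto3 and credentials); shown but not invoked."""
    import boto3
    return boto3.client("ec2").run_instances(**params)
```

Once the parent instance is running, the enclave itself is carved out on the instance, allocating its vCPUs and memory with the nitro-cli tooling.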
An early adopter of AWS Nitro Enclaves is European online fashion platform Zalando, which is using the service to make it easier to achieve application and data isolation and so protect customer data in transit, at rest, and while it is being processed.
AWS shoring up security in cloud services while adding revenue streams
The common theme across these security announcements is that they reduce the amount of custom engineering required to meet security and compliance needs, allow security teams to be more efficient and confident when responding to issues, and make it easier to manage access to AWS resources, which also harkens back to the Capital One breach.
In effect, AWS is continually making it easier for customers to meet their security obligations, protecting its own reputation as well as that of the industry as a whole, to the point that organizations will not only trust and have confidence in cloud environments, but will increasingly see improved security as one of the main drivers for cloud migration.
AWS is also focusing on regulatory compliance as a driver rather than inhibitor of cloud migration. We will cover this in a blogpost tomorrow.
Identity and Access Management (IAM) is on the cusp of a new era: that of the Identity Fabric. An Identity Fabric is a new logical infrastructure that acts as a platform to provide and orchestrate separate IAM services in a cohesive way. Identity Fabrics help the enterprise meet the current expanded needs of IAM, such as integrating many different identities quickly and securely, allowing BYOID, enabling access regardless of geographic location or device, linking identity to relationship, and more.
The unique aspect of Identity Fabrics is the many interlinking connections between IAM services and front- and back-end systems. Application Programming Interfaces (APIs) are the secure access points to the Identity Fabric, and they can make or break it. APIs are defined interfaces that can be used to call a service and get a defined result, and they have become far more than a convenience for developers.
Because APIs are now the main form of communication and delivery of services in an Identity Fabric, they become, by default, the security gatekeepers. With an API facilitating each interface between aspects of the fabric, each one is a potential weakness.
API security should be comprehensive, serving the key areas of an Identity Fabric. These include:
- Directory Services, one or more authoritative sources managing data on identities of humans, devices, things, etc. at large scale
- Identity Management, i.e. the Identity Lifecycle Management capabilities required for setting up user accounts in target systems, including SaaS applications; this also covers Identity Relationship Management, which is essential for digital services where the relationship of humans, devices, and things must be managed
- Identity Governance, supporting access requests, approvals, and reviews
- Access Management, covering the key element of an Identity Fabric, which is authenticating the users and providing them access to target applications; this includes authentication and authorization, and builds specifically on support for standards around authentication and Identity Federation
- Analytics, i.e. understanding the user behavior and inputs from a variety of sources to control access and mitigate risks
- IoT Support, with the ability to manage and access IoT devices, specifically for Consumer IoT – from health trackers in health insurance business cases to connected vehicles or traffic control systems for smart traffic and smart cities
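As a concrete illustration of the gatekeeper role, the sketch below shows the kind of minimal token check an Identity Fabric API access point might apply: it decodes a JWT’s claims and tests expiry and scope. This is a simplified illustration only; a production gateway must also verify the token signature against the identity provider’s keys, and all names here are illustrative.

```python
# Sketch: a minimal authorization check at an Identity Fabric API access
# point. NOTE: no signature verification is performed here - a real
# gateway must validate the JWT signature against the IdP's published keys.
import base64
import json
import time

def decode_claims(token):
    """Decode the payload segment of a JWT (header.payload.signature)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def authorize(token, required_scope, now=None):
    """Reject expired tokens and tokens lacking the required scope."""
    claims = decode_claims(token)
    now = now or time.time()
    if claims.get("exp", 0) <= now:
        return False  # token has expired
    return required_scope in claims.get("scope", "").split()
```

The same pattern - decode, check validity, check entitlement - applies whether the API serves Directory Services, Identity Governance, or IoT endpoints; only the policies differ.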
API security is developing as a market space in its own right, and it is recommended that enterprises that are moving towards the Identity Fabric model of IAM be up to date on API security management. The recent Leadership Compass on API Management and Security has the most up-to-date information on the API market, critical to addressing the new era of identity.
As usual, Amazon Web Services (AWS) is making a slew of announcements at its re:Invent conference in Las Vegas, and as expected, the key ones relate to making it easier for organizations to move workloads to the cloud, keep data secure, and get more value out of their data with services supported by Machine Learning.
However, one of the most interesting points made in the keynote by CEO Andy Jassy was not about the power of the cloud to transform business, revolutionize industry sectors, or the latest AWS server processor chips and services, but about the common, non-technical barriers organizations have to overcome to move to the cloud, which every organization thinking about Digital Transformation should bear in mind.
Achieve business leadership alignment to drive cloud migration
The top observation is that leadership is essential. Digital Transformation of the business and the customer experience (which commonly involves moving workloads to the cloud) is most successful where there is strong support from the business leaders.
Leadership must be aligned on how and why the business needs to be transformed and must set aggressive goals for moving to the cloud. This means that one of the first and most important challenges for organizations to tackle is figuring out how to get the executive team aligned.
Set specific, aggressive targets to build momentum
AWS experience shows that setting specific goals forces everyone in the organization to commit to change. That in turn builds momentum within the organization with everyone driving towards achieving the goals that have been set and coming up with ideas for what can be done in the cloud.
Conversely, where organizations start by “dipping their toes” into the cloud with experimentation, they tend to get stuck in this phase for an extended period of time without making any real progress. Only when Digital Transformation is driven from the top down is real progress made quickly.
Cloud is not difficult, but training is essential
After leadership, the next challenge is that typically most people in an organization do not have any experience or understanding of the cloud. Education and training are therefore an important first step, so that everyone in the organization understands how and why doing things in the cloud is different and how that can benefit the business. While using the cloud is not difficult, it does require training.
It is important that organizations not attempt to move everything into the cloud at the same time. Instead, they should prioritize projects and draw up a methodical plan for moving workloads into the cloud, starting with the simplest and easiest first.
This approach avoids organizations getting paralyzed into inaction by trying to do too much at once. It also enables the organization to learn from the easiest transitions, which in turn makes it easier to tackle the more challenging workloads as people in the organization gain experience and confidence.
AWS Outposts: removing another obstacle to cloud migration
This approach is more likely to result in completing a greater number of cloud projects in a relatively short time and in building momentum for moving all remaining workloads into the cloud. And where there are things that simply cannot move, or not right away, AWS has just announced general availability of AWS Outposts: fully managed racks that allow organizations to run compute and storage on-premises while connecting to AWS services in the cloud.
This was just one of many announcements on the first day of re:Invent 2019, but the opening message was all about taking care of the non-technical aspects of cloud adoption and the transformation goals of your business before considering the cloud services that will deliver the desired outcomes.
In short, get everyone in the leadership team to agree and get behind the why, then focus on the how and building momentum by training everyone to enable them to get up to speed.
Cloud migration similar to other complex IT projects
For all complex, heterogenous projects in IT with multiple stakeholders in the organization, including cloud migration projects, KuppingerCole Analysts recommend:
- Knowing your stakeholders and getting their buy-in;
- Understanding problem areas;
- Defining the policies and processes;
- Setting the right expectations about what you want to achieve;
- Outlining a clear roadmap with defined goals;
- Highlighting the quick wins; and
- Ensuring you have the right resources on hand.
It is always important that the business understands the benefits of the project, as that will make it easier to get the buy-in and support of all the stakeholders. For the same reason, it is important to make the purpose of the project clear, so that all stakeholders are aware not only of the benefits but also of what needs to be done, what is expected, and when. And while it is important not to try to do too much in the initial stages, it is equally important to identify quick wins from the outset and prioritize them to demonstrate value to the business early in the project.
Part of identifying quick wins is defining the goals and processes at the start - including responsibilities and accountabilities - to support the desired outcome of the project. This is also where the education piece, also mentioned by AWS, comes in, so that all stakeholders understand the processes and goals and have the tools and skills they need.
Understanding the problem areas and processes of the business is also key to the success of any IT project as this will be valuable in getting stakeholders on board as well as in setting the right goals and ensuring that you have the right resources and skill sets on hand for the project.
Continually measure progress and keep an eye on the future
Once the project is underway, KuppingerCole Analysts recommend continually measuring benefit/progress against the set of defined goals to demonstrate tangible success at every stage of the project.
Finally, keep an eye on emerging IT and business trends relevant to the project. Take them into account when planning your project, and update your planning on a regular basis as new trends emerge.
Find out more on how to make your project a success, in this case applied to Identity and Access Management (IAM) projects, in this webinar podcast: How to Make Your IAM Program a Success and this point of view paper: One Identity - The Journey to IAM Success - 70226
Social logins are extremely popular. Instead of going through a process of creating a new account on another website, you just click on the “Continue with Facebook” or “Sign in with Google” button and you’re in. The website in question can automatically pull the needed information like your name or photo from either service to complete your new profile. It can even ask for additional permissions like seeing your friend list or posting new content on your behalf.
When implemented correctly, following all the security and compliance checks, this enables multiple convenient functions for users. However, some applications are known to abuse user consent, asking for excessively broad permissions to illegally collect personal information, track users across websites or post spam messages. The apparent inability (or unwillingness) of companies like Facebook to put an end to this has been a major source of criticism by privacy advocates for years.
Social logins for enterprise environments? A CISO’s nightmare
When it comes to enterprise cloud service providers, however, the issue can go far beyond user privacy. As one security researcher demonstrated just a few days ago, using a similar “Sign in with Microsoft” button can lead to much bigger security and compliance problems for any company that uses Office 365 or Azure AD to manage their employees’ identities.
Even though user authentication itself can be implemented with multiple security features like multi-factor authentication, Conditional Access, and Identity Protection to ensure that a malicious actor is not impersonating your employee, the default settings for user consent in Azure Active Directory are so permissive that a Microsoft account can be used for social logins as well.
Any third-party application can easily request a user’s consent to access their mail and contacts, read any of their documents, send e-mails on their behalf, and so on. An access token issued by Microsoft to such an application is not subjected to any of the security validations mentioned above, nor does it expire automatically. If a user has access to corporate intellectual property or deals with sensitive customer information, this creates a massive, unchecked, and easily exploitable backdoor for malicious access, or at the very least a huge compliance violation.
Even in the cloud, it’s still your responsibility
Of course, Microsoft’s own security guidance recommends disabling this feature under Azure Active Directory – Enterprise applications – User settings, but it is nevertheless enabled by default. It is also worth noting that under no circumstances is Microsoft liable for any data breaches which may occur this way: as the data owner, you’re still fully responsible for securing your information, under GDPR or any other compliance regulation.
In a way, this is exactly the same kind of problem as numerous data breaches caused by unprotected Amazon S3 buckets – even though AWS did not initially provide an on-by-default setting for data protection in their storage service, which eventually led to many large-scale data leaks, it was always the owners of this data that were held responsible for the consequences.
So, to be on the safe side, disabling the “Users can consent to apps accessing company data on their behalf” option seems to be a very sensible idea. It is still possible to give your users a choice of consent, but only after a mandatory review by an administrator.
Unfortunately, this alone isn’t enough. You still have to check every user for potentially unsafe applications that already have access to their data. Unless your Office 365 subscription includes access to the Microsoft Cloud App Security portal, this may take a while…
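Existing grants can be audited via the Microsoft Graph oauth2PermissionGrants endpoint. Below is a hedged sketch, assuming the requests library and a Graph access token with sufficient directory read permissions; the list of “risky” scopes is an illustrative choice, not a Microsoft standard.

```python
# Sketch: flag delegated OAuth2 permission grants in Azure AD whose
# scopes look overly broad. The RISKY_SCOPES set is an illustrative
# starting point, not an official classification.
RISKY_SCOPES = {"Mail.Read", "Mail.Send", "Files.Read.All", "Contacts.Read"}

def risky_grants(grants, risky=RISKY_SCOPES):
    """Return (clientId, matched-scopes) pairs for grants that include
    any of the scopes considered risky."""
    flagged = []
    for grant in grants:
        scopes = set(grant.get("scope", "").split())
        hit = scopes & risky
        if hit:
            flagged.append((grant["clientId"], sorted(hit)))
    return flagged

def fetch_grants(access_token):
    """Live call to Microsoft Graph (requires the requests library and a
    suitably privileged token); shown but not invoked."""
    import requests
    r = requests.get(
        "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
        headers={"Authorization": f"Bearer {access_token}"})
    r.raise_for_status()
    return r.json()["value"]
```

A periodic sweep like this gives a rough picture of consent sprawl even without access to the Microsoft Cloud App Security portal, though it is no substitute for it.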
Addressing cybersecurity within a company often occurs in response to an incident that impacts the business’ operations. A cyber incident could be a data breach or the malicious disclosure of internal information to the public. Ideally, a company starts thinking about cybersecurity before it is forced to act by an incident. Preparations for a cyber incident can be made through internal or external benchmarking of the cybersecurity landscape.
What to expect from a benchmarking exercise
To ensure a benchmarking exercise offers real value to the company, the expectations and outcomes should be clearly defined. An initial step should be to establish a standardized process that the company can repeat and use to measure improvements. Benchmarking should provide an indication of whether the current environment is ready for a future cyber incident. Being ready means having an open architecture that uses standards and is extensible. But it is not sufficient for a company to look only at technological aspects; the benchmarking exercise should also provide deeper insight into organizational topics. Every assessment should show whether there are organizational gaps and help to create a roadmap for closing them promptly.
Benchmarking should focus on technology and organization
From our experience, discussions between KuppingerCole representatives and the many relevant stakeholders within an organization improve the quality of the resulting benchmarking tool. Stakeholders range from architects, managers, developers, and (internal) customers up to the C-level, because they all have different perspectives on cybersecurity and different requirements that need to be united. Bringing the varied stakeholders together means discussing the various areas of the company. Usually we use our 12 categories for that - 6 organizational aspects and 6 technological aspects.
Focusing on these areas ensures that cybersecurity is seen from the beginning to the end and gaps within a single or multiple areas can be discovered.
Collect information, compare, and define concrete measures
Once the areas that are decisive for benchmarking are known, the next step is to collect the information. There are various documents and guidelines to be evaluated, but many interviews with teams and stakeholders must also be carried out. The best result can be achieved with a set of good questions covering the various areas, with answers from different people, which can be rated by category.
A graphical visualization with a spider graph allows an easy and fast overview of strengths and weaknesses. One goal of the benchmarking exercise is to create comparable results. This could be done with peers, between maturity levels, or with old benchmarking results. Quality comparative data is quite difficult to generate internally, and it is recommended to have external support.
Understand the result and define a roadmap
The spider graph and the documented benchmarking give good insight into the weaknesses of a company. If, for example, a company has weaknesses in Network Security, Application Security, and Risk Management, the next step should be to prioritize the open topics in those areas. Such a company should take a deeper look into what is missing and what needs to be improved, while also focusing on future requirements. Doing this allows a company to create both a general and a detailed roadmap for planning the next steps to improve its cybersecurity.
Benchmarking the cybersecurity landscape is a complex process, and it is difficult to define a metric internally against which you can compare yourself. If you want to benefit from the experience and knowledge of KuppingerCole, our methodology, and our comparable data, feel free to ask for assistance. We can support you!
The Information Protection Life Cycle (IPLC) and Framework describes the phases, methods, and controls associated with the protection of information. Though other IT and cybersecurity frameworks exist, none focus specifically on the protection of information across its use life. The IPLC documents 3 stages in the life of information and 6 categories of controls that can be applied to secure it.
Stages in the life of information
Information is created, used, and (sometimes) disposed of when it is no longer needed or valid. Information can be actively created, such as when you start a new document, add records to a database, take photos, post blogs, etc. Information is also passively created when users and devices digitally interact with one another and with applications. Passively generated information often takes the form of log files, telemetry, or records added to databases without the explicit action of users. During its use life, information can be analyzed and modified in various ways by users, devices, and applications. After a certain point, information may cease to be useful, perhaps due to inaccuracies or inconsistencies, migrations to new platforms, incompatibility with new systems, or because the regulatory mandate to store it has passed. When information is no longer useful, it needs to be disposed of by archival or deletion, depending on the case.
The types of controls applicable to information protection at each phase are briefly described below.
Discovery and classification
To properly protect information, it must first be discovered and classified. The company picnic announcement is not as sensitive and valuable as the secret sauce in your company’s flagship product. Information can be discovered and classified at the time of creation and as a result of data inventories. Thanks to GDPR’s Data Protection Impact Assessments (DPIAs), such inventories are now more commonly conducted.
Classification schemes depend on the industry, regulatory regimes, types of information, and a host of other factors. Classification mechanisms depend on the format. For structured data in databases, tools may add rows/columns/tables for tracking cell-level sensitivity. For unstructured data such as documents in file systems, metadata can be applied (“tagged”) to individual data objects.
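As one illustration of tagging unstructured data, classification metadata can be attached to cloud objects as object tags, for example in Amazon S3. Below is a minimal sketch assuming boto3; the tag keys and the classification levels are illustrative conventions, not an AWS or IPLC standard.

```python
# Sketch: attach classification metadata to S3 objects as tags.
# The tag keys ("classification", "data-owner") and the level names are
# illustrative conventions chosen for this example.

def classification_tagset(level, owner):
    """Build an S3 TagSet carrying classification metadata, validating
    the level against an example classification scheme."""
    allowed = {"public", "internal", "confidential", "secret"}
    if level not in allowed:
        raise ValueError(f"unknown classification: {level}")
    return {"TagSet": [{"Key": "classification", "Value": level},
                       {"Key": "data-owner", "Value": owner}]}

def tag_object(bucket, key, level, owner):
    """Live call (requires boto3 and credentials); shown but not invoked."""
    import boto3
    boto3.client("s3").put_object_tagging(
        Bucket=bucket, Key=key,
        Tagging=classification_tagset(level, owner))
```

Downstream controls (access policies, DLP rules, retention jobs) can then key off the classification tag rather than trying to re-inspect content on every access.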
Access to information must be granular, meaning only authorized users on trusted devices should be able to read, modify, or delete it. Access control systems can evaluate attributes of users, devices, and resources in accordance with pre-defined policies. Several access control standards, tools, and token formats exist. Access control can be difficult to implement across an enterprise due to the disparate kinds of systems involved, from on-premises to mobile to IaaS to SaaS apps. It is still on the frontier of identity management and cybersecurity.
Encryption, Masking, and Tokenization
These are controls that can protect confidentiality and integrity of information in-transit and at-rest. Encryption tools are widely available but can be hard to deploy and manage. Interoperability is often a problem.
Masking means irreversible substitution or redaction in many cases. For personally identifiable information (PII), pseudonymization is often employed to allow access to underlying information while preserving privacy. In the financial space, vaulted and vaultless tokenization are techniques that essentially issue privacy-respecting tokens in place of personal data. This enables one party to the transaction to assume and manage the risk while allowing other parties to not have to store and process PII or payment instrument information.
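To illustrate the difference between the techniques, the sketch below implements a deterministic, keyed pseudonymization function (the same input always yields the same token, preserving linkability while hiding the underlying value) alongside a simple irreversible masking function. Both are simplified illustrations, not production-grade controls.

```python
# Sketch: pseudonymization vs. masking of personal data.
# Pseudonymization here is an HMAC over the value: deterministic, so
# records remain linkable, but not reversible without the secret key.
import hashlib
import hmac

def pseudonymize(value, secret):
    """Return a short, keyed, deterministic pseudonym for a value."""
    digest = hmac.new(secret, value.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def mask_email(email):
    """Irreversible masking for display: keep the domain, redact the
    local part down to its first character."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain
```

A vaulted tokenization scheme would go one step further and store the token-to-value mapping in a separate, tightly controlled system, so that only the party holding the vault can ever resolve a token back to the original data.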
Sometimes attackers get past other security controls. It is therefore necessary to put tools in place that can detect signs of nefarious activities at the endpoint, server, and network layers. At the endpoint level, all users should be running current Endpoint Protection (EPP, or anti-malware) products. Some organizations may benefit from EDR (Endpoint Detection & Response) agents. Servers should be outfitted similarly, as well as configured to send event logs to SIEMs (Security Information and Event Management systems). For networks, some organizations have used Intrusion Detection Systems (IDS), which are primarily rule-based and prone to false positives. Next-generation Network Threat Detection & Response (NTDR) tools have the advantage of utilizing machine learning (ML) algorithms to baseline network activities and so better alert on anomalous behavior. Each type of solution has pros and cons, and they all require knowledgeable and experienced analysts to run them effectively.
Deception is a newer approach to information protection, derived from the old notion of honeypots. Distributed Deception Platforms (DDPs) deploy virtual resources designed to look attractive to attackers, to lure them away from your valuable assets and into the deception environment for the purposes of containment, faster detection, and examination of attacker TTPs (Tactics, Techniques, and Procedures). DDPs help reduce MTTR (Mean Time To Respond) and provide an advantage to defenders. DDPs are also increasingly needed in enterprises with IoT and medical devices, as these face more attacks and the devices in such environments usually cannot run other security tools.
When information is no longer valid and does not need to be retained for legal purposes, it should be removed from active systems. This may include archival or deletion, depending on the circumstances. The principle of data minimization is a good business practice to limit liability.
KuppingerCole will further develop the IPLC concept and publish additional research on the subject in the months ahead. Stay tuned! In the meantime, we have a wealth of research on EPP and EDR, access control systems, and data classification tools at KC PLUS.
At VMworld Europe 2019, VMware CEO Pat Gelsinger said security is fundamentally broken and that the overabundance of vendors is making the problem worse. I'm not sure this is true. Gelsinger had some good lines: that applications needing to be updated and patched on a regular basis should be outlawed, and that security is too threat-based.
Making security less threat-focused is a good thing
The solution, according to VMware, is simple: build more security into the platform, with the ultimate goal of a single security agent running across the entire enterprise. Security, therefore, should be built in, unified, and focused on the applications rather than the threat. Part of that is true: security should be less threat-focused, but I believe that an organization's security should also be grounded in risk-based identity management.
When large platform vendors start talking about simplifying security, it inevitably revolves around their platform – in this case a widely used and trusted one. So, what is VMware’s solution? Not surprisingly, it consists of putting apps and data at the center of access points, endpoint, identity, workload, cloud and the network, all protected by the “intrinsic security” layer, also known as Carbon Black, which VMware has now fully acquired. VMware believes this will succeed because it will use big data analytics with a single agent that monitors all endpoints, and because IAM lifecycle management will be built into the infrastructure.
“The Carbon Black platform will deliver a highly differentiated intrinsic security platform across network, endpoint, workload, identity, cloud and analytics. We believe this will bring a fundamentally new paradigm to the security industry,” said Gelsinger.
It ain’t what you do, it's the way that you do it
It’s obviously a compelling prospect, but is it realistic? VMware is right to suggest that two major blocks to security are bolted-on solutions and siloed platforms. But it would be more accurate to say that badly chosen bolt-on solutions are a problem, and that solutions running within silos are the result of little or no risk assessment and bad planning. There are indeed thousands of security vendors out there, which VMware illustrated with a couple of slides featuring hundreds of logos (pity the poor guy who had to put those together).
The fundamental reason that so many solutions exist is that so many security and identity challenges exist, and these vary with the type and size of the organization. Digital transformation has now added extra challenges. The demands of securing data, identity and authentication are fluid and require innovation in the market, which is why we cover it. Gelsinger was right to say that consolidation must come within organizations and in the vendor space – that is normal, and VMware’s acquisition is a good example of it. But consolidation is often followed by market innovation from startups that serve the new demands the process of consolidation leaves behind.
Super solutions are not a new idea
Which brings us to the crux of this so-called intrinsic security proposition. In simple terms, chucking a semi-intelligent big data analytics engine around your cloud and virtualised infrastructures sounds great. The real-time analysis engine keeps all the bad stuff out without relying solely on old-fashioned AV and signature-based protection. Except I don’t think that is possible. It will not solve all the granular problems around IAM, such as privileged accounts and credentials embedded in code. Intrinsic security sounds very much like a super firewall for VMware – useful to have, but it won’t stop organizations that run on VMware from eventually going back to that slide with so many other vendor logos...
For more on Infrastructure as a Service, please see our Leadership Compass report.
Demand forecasting is one of the most crucial factors that determine the success of every business, online or offline, retail or wholesale. Being able to predict future customer behavior is essential for optimal purchase planning, supply chain management, reducing potential risks and improving profit margins. In some form, demand prediction has existed since the dawn of civilization, for as long as commerce itself.
Yet, even nowadays, when businesses have much more historical data available for analysis and a broad range of statistical methods to crunch it, demand forecasting is still not a hard science, often relying on expert decisions based on intuition alone. With all the hype surrounding artificial intelligence’s potential applications in just about any line of business, it’s no wonder that many experts believe it will have the biggest impact on demand planning as well.
Benefits of AI applications in demand forecasting
But what exactly are the potential benefits of this new approach compared to traditional methods? The most obvious one is efficiency due to the elimination of the human factor. Instead of relying on intuition, machine learning-based methods operate on quantifiable data, both from the business's own operational history and from various sources of market intelligence that may influence demand fluctuations (such as competitor activities, price changes or even the weather).
On the other hand, most traditional statistical demand prediction methods were designed to better approximate specific use cases: quick vs. slow fluctuations, large vs. small businesses and so on. Selecting the right combination of those methods requires you to be able to deal with a lot of questions you currently might not even anticipate, not to mention know the right answers. Machine learning-based business analytics solutions are known for helping companies to discover previously unknown patterns in their historical data and thus for removing a substantial part of guesswork from predictions.
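As a minimal sketch of the underlying idea, here is a pure-Python least-squares trend fit on hypothetical weekly sales figures. Real demand forecasting solutions use far richer models and external signals, but the principle of learning parameters from historical data rather than guessing them is the same.

```python
# Minimal illustration: fit a linear trend to weekly demand with
# ordinary least squares. The sales history below is hypothetical;
# real tools add seasonality, promotions, prices, weather, and more.
weeks = list(range(10))
sales = [100, 104, 109, 113, 118, 122, 127, 131, 136, 140]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(sales) / n

# Least-squares estimates for slope and intercept of the trend line.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, sales)) \
        / sum((x - mean_x) ** 2 for x in weeks)
intercept = mean_y - slope * mean_x

def forecast(week: int) -> float:
    """Project demand for a future week from the fitted trend."""
    return intercept + slope * week
```

With this toy history the model learns a weekly growth of roughly 4.5 units and projects around 145 units for week 10; an ML-based platform automates exactly this kind of pattern extraction, just over many more variables at once.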
Last but not least, the market already has quite a few ready-made solutions to offer, either as standalone platforms or as part of bigger business intelligence suites. You no longer need to reinvent the wheel: just connect one of those solutions to your historical data, and the rest, including multiple sources of external market intelligence, will be at your fingertips.
What about challenges and limitations?
Of course, one has to consider the potential challenges of this approach as well. The biggest one has nothing to do with AI at all: it’s the availability and quality of your own data. Machine learning models require lots of input to deliver quality results, and far from every company has this information in a form ready for sharing yet. For many, the journey towards an AI-powered future has to start with breaking down the silos and making historical data unified and consistent.
This does not apply just to sales operations, by the way. Efficient demand prediction can only work when data across all business units, including logistics, marketing, and others, can be correlated. If your (or your suppliers’, for that matter) primary analytics tool is still Excel, thinking about artificial intelligence is probably a bit premature.
A major inherent problem of many AI applications is explainability. For many users, not being able to understand exactly how a particular prediction was reached can be a major cause of distrust. Of course, this is primarily an organizational and cultural challenge, but a major challenge nevertheless.
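One widely used, model-agnostic way to shed at least some light on black-box predictions is permutation importance: shuffle one input column and measure how much the predictions degrade. The toy model and data below are entirely hypothetical, intended only to illustrate the mechanics.

```python
import random

# Toy setup with a known ground truth: demand = 2*discount + 0.5*temp.
# Permutation importance asks: how much worse do predictions get when
# one input column is randomly shuffled? A bigger error increase means
# the model leans more heavily on that feature.
random.seed(0)
rows = [(random.random(), random.random()) for _ in range(200)]
actual = [2 * d + 0.5 * t for d, t in rows]

def model(discount: float, temp: float) -> float:
    return 2 * discount + 0.5 * temp

def mse(preds: list[float]) -> float:
    return sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(actual)

def importance(col: int) -> float:
    shuffled = [row[col] for row in rows]
    random.shuffle(shuffled)
    preds = [
        model(s if col == 0 else d, s if col == 1 else t)
        for s, (d, t) in zip(shuffled, rows)
    ]
    return mse(preds)  # prediction error after destroying this feature
```

Shuffling the discount column hurts accuracy far more than shuffling temperature, which tells a business user that discounts drive this model's forecasts; such explanations do not fully open the black box, but they go some way towards building trust.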
However, these challenges should not be seen as an excuse to ignore new AI-based solutions completely. Artificial intelligence for demand forecasting is no longer just a theory: businesses across various verticals are already using it, with varying but undeniably positive results. Researchers claim that machine learning methods can achieve up to 50% better accuracy than purely statistical approaches, to say nothing of human intuition.
If your company is not yet ready to embrace AI, make sure you start addressing your shortcomings before your competitors do. In the age of digital transformation, having business processes and business data agile and ready for new technologies is a matter of survival, after all. More efficient demand forecasting is just one of the benefits you’ll be able to reap.
Feel free to browse our Focus Area: AI for the Future of your Business for more related content.
The market shift to cloud-based security services was highlighted at the Ignite Europe 2019 held by Palo Alto Networks in Barcelona, where the company announced a few product enhancements in an effort to round out its offerings to meet what it expects will be growing market demand.
A key element of its go-to-market strategy, in response to market demand to reduce the complexity of security and the number of suppliers, is the addition of cloud-delivered Software-Defined Wide Area Network (SD-WAN) and data loss prevention (DLP) capabilities to its Prisma Access product.
The move not only removes the need for separate SD-WAN suppliers but also rounds out Prisma Access to deliver converged networking and security capabilities in the cloud to address the limitations of traditional architectures.
Palo Alto Networks is not alone in going after this market, but it is the latest player in the so-called Secure Access Service Edge (SASE) market to add SD-WAN capabilities to combine edge computing, security and wide-area networking (WAN) into a single cloud-managed platform.
The move builds on core company strengths and is therefore logical. Failure to have done so would have missed a market opportunity and would have been surprising.
Data drives better threat detection
In line with the company’s belief that threat intelligence should be shared, that security operations centers (SOCs) need to be more data-driven to support current and future threats, and the company’s drive to enable greater interoperability between security products, the second key announcement at Ignite Europe 2019 is the extension of its Cortex XDR detection and response application to third-party data sources.
In addition to data from Palo Alto Networks and other recognized industry sources, Cortex XDR 2.0 (which automates threat hunting and investigations, and consolidates alerts) is designed to consume logs from third-party security products for analytics and investigations, starting with logs from Check Point, soon to be followed by Cisco, Fortinet and Forcepoint.
SD-WAN market shakeup
The expansion by Palo Alto Networks into the SD-WAN market makes commercial sense: it rounds out the Prisma Access offering to meet market demands for simpler, more consistent security with fewer suppliers, and supports the growing number of Managed Security Service Providers.
However, it also signals a change in the relationship between Palo Alto Networks and its SD-WAN integration partners and in the market overall, although executives are downplaying the negative impact on SD-WAN partners, saying relationships are continually evolving and will inevitably change as market needs change.
The big takeaway from Ignite Europe 2019 is that Palo Alto Networks is broadening its brands and capabilities as a cloud-based security services company, and that in the future, selling through partners such as Managed Service Providers will be a critical part of its go-to-market strategy.
It will be interesting to see whether the bet by Palo Alto Networks pays off to steal a march on competitors also rushing to the market with similar offerings such as SD-WAN firm Open Systems, networking firm Cato Networks, IT automation and security firm Infoblox, and virtualization firm VMware.