As usual, Amazon Web Services (AWS) is making a slew of announcements at its re:Invent conference in Las Vegas, and as expected, the key ones relate to making it easier for organizations to move workloads to the cloud, keep data secure, and get more value out of their data with services supported by machine learning.
However, one of the most interesting points made in the keynote by CEO Andy Jassy was not the power of the cloud to transform businesses, revolutionize industry sectors, or the latest AWS server processor chips and services, but the common, non-technical barriers organizations have to overcome to move to the cloud, which every organization thinking about Digital Transformation should bear in mind.
Achieve business leadership alignment to drive cloud migration
The top observation is that leadership is essential. Digital Transformation of the business and the customer experience (which commonly involves moving workloads to the cloud) is most successful where there is strong support from the business leaders.
Leadership must be aligned on how and why the business needs to be transformed and must set aggressive goals for moving to the cloud. This means that one of the first and most important challenges for organizations to tackle is figuring out how to get the executive team aligned.
Set specific, aggressive targets to build momentum
AWS's experience shows that setting specific goals forces everyone in the organization to commit to change. That in turn builds momentum within the organization, with everyone driving towards the goals that have been set and coming up with ideas for what can be done in the cloud.
Conversely, where organizations start by “dipping their toes” into cloud with experimentation, they tend to get stuck in this phase for an extended period of time without making any real progress. Only when Digital Transformation is driven from the top down, is real progress made quickly.
Cloud is not difficult, but training is essential
After leadership, the next challenge is that typically most people in an organization do not have any experience or understanding of the cloud. Education and training are therefore an important first step, so that everyone in the organization understands how and why doing things in the cloud is different, and how that can benefit the business. While using the cloud is not difficult, it does require training.
It is important that organizations not attempt to move everything into the cloud at the same time. Instead, they should prioritize projects and draw up a methodical plan for moving workloads into the cloud, starting with the simplest and easiest first.
This approach prevents organizations from being paralysed into inaction by trying to do too much at once. It also enables the organization to learn from the easiest transitions, which in turn makes it easier to tackle the more challenging workloads as people in the organization gain experience and confidence.
AWS Outposts: removing another obstacle to cloud migration
This approach is more likely to result in completing a greater number of cloud projects in a relatively short time and to build momentum for moving all remaining workloads into the cloud. And where there are things that simply cannot move, or not right away, AWS has just announced general availability of AWS Outposts: fully managed racks that allow organizations to run compute and storage on-premises while connecting to AWS services in the cloud.
This was just one of many announcements on the first day of re:Invent 2019, but the opening message was all about taking care of the non-technical aspects of cloud adoption and the transformation goals of your business before considering the cloud services that will deliver the desired outcomes.
In short, get everyone in the leadership team to agree on and get behind the why, then focus on the how, building momentum by training everyone so they can get up to speed.
Cloud migration similar to other complex IT projects
For all complex, heterogeneous IT projects with multiple stakeholders in the organization, including cloud migration projects, KuppingerCole Analysts recommend:
- Knowing your stakeholders and getting their buy-in;
- Understanding problem areas;
- Defining the policies and processes;
- Setting the right expectations about what you want to achieve;
- Outlining a clear roadmap with defined goals;
- Highlighting the quick wins; and
- Ensuring you have the right resources on hand.
It is always important that the business understands the benefits of the project, as that makes it easier to get the buy-in and support of all the stakeholders. For the same reason, it is important to make the purpose of the project clear, so that all stakeholders are aware not only of the benefits, but also of what needs to be done, what is expected, and when. And while it is important not to try to do too much in the initial stages, it is equally important to identify quick wins from the outset and prioritize those to demonstrate value to the business in the early stages of the project.
Part of identifying quick wins is defining the goals and processes at the start - including responsibilities and accountabilities - to support the desired outcome of the project. This is also where the education piece, also mentioned by AWS, comes in, so that all stakeholders understand the processes and goals and have the tools and skills they need.
Understanding the problem areas and processes of the business is also key to the success of any IT project as this will be valuable in getting stakeholders on board as well as in setting the right goals and ensuring that you have the right resources and skill sets on hand for the project.
Continually measure progress and keep an eye on the future
Once the project is underway, KuppingerCole Analysts recommend continually measuring benefit/progress against the set of defined goals to demonstrate tangible success at every stage of the project.
Finally, keep an eye on emerging IT and business trends relevant to the project. Take them into account when planning your project, and update your planning on a regular basis as new trends emerge.
Find out more on how to make your project a success, in this case applied to Identity and Access Management (IAM) projects, in this webinar podcast: How to Make Your IAM Program a Success and this point of view paper: One Identity - The Journey to IAM Success - 70226
Social logins are extremely popular. Instead of going through a process of creating a new account on another website, you just click on the “Continue with Facebook” or “Sign in with Google” button and you’re in. The website in question can automatically pull the needed information like your name or photo from either service to complete your new profile. It can even ask for additional permissions like seeing your friend list or posting new content on your behalf.
When implemented correctly, following all the security and compliance checks, this enables multiple convenient functions for users. However, some applications are known to abuse user consent, asking for excessively broad permissions to illegally collect personal information, track users across websites or post spam messages. The apparent inability (or unwillingness) of companies like Facebook to put an end to this has been a major source of criticism by privacy advocates for years.
Social logins for enterprise environments? A CISO’s nightmare
When it comes to enterprise cloud service providers, however, the issue can go far beyond user privacy. As one security researcher demonstrated just a few days ago, using a similar “Sign in with Microsoft” button can lead to much bigger security and compliance problems for any company that uses Office 365 or Azure AD to manage their employees’ identities.
Even though user authentication itself can be implemented with multiple security features like multi-factor authentication, Conditional Access, and Identity Protection to ensure that a malicious actor is not impersonating your employee, the default settings for user consent in Azure Active Directory are so permissive that a Microsoft account can be used for social logins as well.
Any third-party application can easily request a user's consent to access their mail and contacts, read any of their documents, send e-mails on their behalf, and so on. An access token issued by Microsoft to such an application is not subject to any of the security validations mentioned above, and it does not expire automatically. If a user has access to corporate intellectual property or deals with sensitive customer information, this creates a massive, unchecked and easily exploitable backdoor for malicious access, or at the very least a huge compliance violation.
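To make the consent flow concrete, the sketch below builds the kind of authorization request a third-party app can present to a user via the Microsoft identity platform's v2.0 authorize endpoint. The client_id and redirect_uri are placeholders; the delegated Microsoft Graph scopes shown (Mail.Read, Contacts.Read, Mail.Send, Files.Read.All) are the kind of broad permissions discussed above. This is an illustration of the request shape, not a working registration.

```python
from urllib.parse import urlencode

# Public authorize endpoint of the Microsoft identity platform (v2.0).
AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

def build_consent_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Assemble the URL that triggers the user-consent prompt."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

# Broad delegated permissions: read mail, contacts and files, and send
# mail on the user's behalf -- exactly the kind of access a permissive
# default consent policy lets any app request.
url = build_consent_url(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder app id
    redirect_uri="https://example.com/callback",       # placeholder
    scopes=["Mail.Read", "Contacts.Read", "Mail.Send", "Files.Read.All"],
)
print(url)
```

Once the user clicks through this prompt, the app exchanges the returned code for an access token carrying all of those scopes, with none of the conditional-access checks applied to the original sign-in.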
Even in the cloud, it’s still your responsibility
Of course, Microsoft’s own security guidance recommends disabling this feature under Azure Active Directory – Enterprise applications – User settings, but it is nevertheless enabled by default. It is also worth noting that under no circumstances is Microsoft liable for any data breaches which may occur this way: as the data owner, you’re still fully responsible for securing your information, under GDPR or any other compliance regulation.
In a way, this is exactly the same kind of problem as the numerous data breaches caused by unprotected Amazon S3 buckets: even though AWS did not initially provide an on-by-default setting for data protection in its storage service, which eventually led to many large-scale data leaks, it was always the owners of the data who were held responsible for the consequences.
So, to be on the safe side, disabling the “Users can consent to apps accessing company data on their behalf” option seems to be a very sensible idea. It is also possible to still give your users a choice of consent, but only after a mandatory review by an administrator.
Unfortunately, this alone isn’t enough. You still have to check every user for potentially unsafe applications that already have access to their data. Unless your Office 365 subscription includes access to the Microsoft Cloud App Security portal, this may take a while…
Addressing cybersecurity within a company often occurs in response to an incident that impacts the business's operations. A cyber incident could be a data breach or a malicious disclosure of internal information to the public. Ideally, a company starts thinking about cybersecurity before it is forced to act by an incident. Preparations for a cyber incident can be made through an internal or external benchmarking of the cybersecurity landscape.
What to expect from a benchmarking exercise
To ensure a benchmarking exercise offers real value to the company, the expectations and outcome of the exercise should be clearly defined. An initial step should be to establish a standardized process that allows the company to repeat the exercise and to measure improvements. Benchmarking should provide an indication of whether the current environment is ready for a future cyber incident or not. Being ready means having an open architecture that uses standards and is extensible. But it is not sufficient for a company to only look at technological aspects; the benchmarking exercise should provide a deeper insight into organizational topics. Every assessment should show whether there are organizational gaps and help to create a roadmap to fix them promptly.
Benchmarking should focus on technology and organization
From our experience, discussions between KuppingerCole representatives and the many relevant stakeholders within an organization improve the quality of the resulting benchmarking. Stakeholders range from architects, managers, and developers to (internal) customers and the C-level, because they all have different perspectives on cybersecurity and different requirements that need to be reconciled. Bringing these varied stakeholders together means discussing various areas of the company. Usually we use our 12 categories for that: 6 organizational aspects and 6 technological aspects.
Focusing on these areas ensures that cybersecurity is seen from the beginning to the end and gaps within a single or multiple areas can be discovered.
Collect information, compare, and define concrete measures
Once the areas that are decisive for benchmarking are known, the next step is to collect the information. There are various documents and guidelines to be evaluated, but many interviews with teams and stakeholders must also be carried out. The best result is achieved with a set of good questions covering the various areas, answered by different people, which can then be rated by category.
A graphical visualization with a spider graph allows an easy and fast overview of strengths and weaknesses. One goal of the benchmarking exercise is to create comparable results. This could be done with peers, between maturity levels, or with old benchmarking results. Quality comparative data is quite difficult to generate internally, and it is recommended to have external support.
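The rating-and-comparison step can be sketched in a few lines. This is an illustration of the idea, not KuppingerCole's actual methodology: per-question ratings are averaged into a category score, the weakest category is identified, and the scores are compared against an earlier benchmarking run. All category names and numbers are invented for the example.

```python
from statistics import mean

def score_by_category(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the per-question interview ratings (e.g. on a 1-5 scale)."""
    return {cat: round(mean(vals), 2) for cat, vals in ratings.items()}

# Current assessment: each category collects ratings from several answers.
current = score_by_category({
    "Network Security":      [2, 3, 2],
    "Application Security":  [3, 2, 3],
    "Risk Management":       [2, 2, 2],
    "Identity & Access":     [4, 4, 5],
})

# Scores from a previous benchmarking run, used as the comparison baseline.
previous = {"Network Security": 2.0, "Application Security": 2.1,
            "Risk Management": 2.2, "Identity & Access": 4.0}

# The delta shows where the organization improved and where it stagnated;
# the weakest category is a natural candidate for the roadmap.
delta = {cat: round(current[cat] - previous[cat], 2) for cat in current}
weakest = min(current, key=current.get)
print(weakest, delta)
```

Feeding the same scores into a radar ("spider") chart then gives the at-a-glance view of strengths and weaknesses described above.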
Understand the result and define a roadmap
The spider graph and the documented benchmarking give a good insight into the weaknesses of a company. If, for example, the graph shows weaknesses in Network Security, Application Security and Risk Management, the next step should be to prioritize the open topics in those areas. Such a company should take a deeper look into what is missing and what needs to be improved, while also focusing on future requirements. Doing this allows a company to create both a general and a detailed roadmap for planning the next steps to improve its cybersecurity.
Benchmarking the cybersecurity landscape is a complex process, and it is difficult to define a metric internally against which you can compare yourself. If you want to benefit from the experience and knowledge of KuppingerCole, our methodology, and our comparative data, feel free to ask for assistance. We can support you!
The Information Protection Life Cycle (IPLC) and Framework describes the phases, methods, and controls associated with the protection of information. Though other IT and cybersecurity frameworks exist, none focus specifically on the protection of information across its use life. The IPLC documents 3 stages in the life of information and 6 categories of controls that can be applied to secure it.
Stages in the life of information
Information is created, used, and (sometimes) disposed of when it is no longer needed or valid. Information can be actively created, such as when you start a new document, add records to a database, take photos, post blogs, etc. Information is also passively created when users and devices digitally interact with one another and with applications. Passively generated information often takes the form of log files, telemetry, or records added to databases without the explicit action of users. During its use life, information can be analyzed and modified in various ways by users, devices, and applications. After a certain point, information may cease to be useful, perhaps due to inaccuracies, inconsistencies, migrations to new platforms, or incompatibility with new systems, or because the regulatory mandate to store it has passed. When information is no longer useful, it needs to be disposed of, by archival or deletion depending on the case.
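The three stages can be pictured as a small state machine. The sketch below is our own illustration of the idea, not part of the IPLC framework itself: information moves from creation through use to disposition, and disposition is terminal.

```python
from enum import Enum, auto

class Stage(Enum):
    CREATED = auto()
    IN_USE = auto()
    DISPOSED = auto()

# Allowed transitions between stages. Staying IN_USE covers repeated
# analysis and modification; DISPOSED (archival or deletion) is terminal.
TRANSITIONS = {
    Stage.CREATED: {Stage.IN_USE},
    Stage.IN_USE: {Stage.IN_USE, Stage.DISPOSED},
    Stage.DISPOSED: set(),
}

class InformationAsset:
    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.CREATED  # both active and passive creation start here

    def advance(self, target: Stage) -> None:
        if target not in TRANSITIONS[self.stage]:
            raise ValueError(f"cannot move from {self.stage.name} to {target.name}")
        self.stage = target

doc = InformationAsset("customer_records.db")
doc.advance(Stage.IN_USE)
doc.advance(Stage.DISPOSED)
print(doc.stage)
```

Modeling the life cycle this explicitly is useful because each control category below attaches to particular stages and transitions.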
The types of controls applicable to information protection at each phase are briefly described below.
Discovery and classification
To properly protect information, it must be discovered and classified. The company picnic announcement is not as sensitive and valuable as the secret sauce in your company's flagship product. Information can be discovered and classified at the time of creation and as a result of data inventories. Thanks to GDPR's Data Protection Impact Assessments (DPIAs), such inventories are more commonly being conducted.
Classification schemes depend on the industry, regulatory regimes, types of information, and a host of other factors. Classification mechanisms depend on the format. For structured data in databases, tools may add rows/columns/tables for tracking cell-level sensitivity. For unstructured data such as documents in file systems, metadata can be applied (“tagged”) to individual data objects.
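As a toy illustration of tagging unstructured data, the sketch below attaches a classification label to a document's metadata rather than altering the document itself. The labels and keyword rules are invented for the example and stand in for a real classification tool.

```python
# Invented classification levels; real schemes depend on industry and regulation.
LEVELS = ["public", "internal", "confidential", "secret"]

def classify(text: str) -> str:
    """Toy keyword-based classifier standing in for a real tagging engine."""
    lowered = text.lower()
    if "secret sauce" in lowered:
        return "secret"
    if "customer" in lowered:
        return "confidential"
    if "picnic" in lowered:
        return "public"
    return "internal"

def tag(document: dict) -> dict:
    """Attach a classification label as metadata on the document record."""
    document["metadata"] = {"classification": classify(document["body"])}
    return document

picnic = tag({"name": "picnic.txt", "body": "Company picnic on Friday!"})
recipe = tag({"name": "recipe.docx", "body": "Our secret sauce formula..."})
print(picnic["metadata"], recipe["metadata"])
```

Downstream controls (access policies, encryption, retention) can then key off the metadata label instead of re-inspecting the content each time.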
Access control
Access to information must be granular, meaning only authorized users on trusted devices should be able to read, modify, or delete it. Access control systems can evaluate attributes about users, devices, and resources in accordance with pre-defined policies. Several access control standards, tools, and token formats exist. Access control can be difficult to implement across an enterprise due to the disparate kinds of systems involved, from on-premises to mobile to IaaS to SaaS apps. It is still on the frontier of identity management and cybersecurity.
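A minimal attribute-based sketch shows the evaluation idea: a policy function inspects attributes of the user, the device, and the resource before granting an action. The attribute names and rules are illustrative only, not taken from any standard.

```python
def is_access_granted(user: dict, device: dict, resource: dict, action: str) -> bool:
    """Evaluate a hypothetical pre-defined policy over attributes."""
    # Untrusted devices are rejected outright.
    if not device.get("trusted", False):
        return False
    # Confidential resources are restricted to the owning department.
    if resource["classification"] == "confidential" and \
            user["department"] != resource["owner_department"]:
        return False
    # Writes additionally require an explicit write entitlement.
    if action in ("modify", "delete") and not user.get("write_access", False):
        return False
    return True

alice = {"department": "finance", "write_access": True}
laptop = {"trusted": True}
phone = {"trusted": False}
report = {"classification": "confidential", "owner_department": "finance"}

print(is_access_granted(alice, laptop, report, "modify"))  # trusted device, same department
print(is_access_granted(alice, phone, report, "read"))     # untrusted device is rejected
```

Real deployments express such rules in policy languages and enforce them via tokens and policy decision points, but the attribute-evaluation core is the same.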
Encryption, Masking, and Tokenization
These are controls that can protect confidentiality and integrity of information in-transit and at-rest. Encryption tools are widely available but can be hard to deploy and manage. Interoperability is often a problem.
Masking means irreversible substitution or redaction in many cases. For personally identifiable information (PII), pseudonymization is often employed to allow access to underlying information while preserving privacy. In the financial space, vaulted and vaultless tokenization are techniques that essentially issue privacy-respecting tokens in place of personal data. This enables one party to the transaction to assume and manage the risk while allowing other parties to not have to store and process PII or payment instrument information.
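The difference between irreversible masking and vaulted tokenization can be shown in a short sketch. This is a simplification under obvious assumptions: a real token vault would be encrypted, access-controlled, and audited, and the token format here is invented.

```python
import secrets

class TokenVault:
    """Vaulted tokenization: the risk-assuming party keeps the mapping
    from random tokens back to the real values; everyone else stores
    only the token."""

    def __init__(self):
        self._vault: dict[str, str] = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, no relation to the input
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

def mask_pan(pan: str) -> str:
    """Irreversible masking: keep only the last four digits of a card number."""
    return "*" * (len(pan) - 4) + pan[-4:]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token, mask_pan("4111111111111111"))
```

The masked value can never be recovered; the token can, but only by the party holding the vault, which is exactly the risk allocation described above.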
Detection
Sometimes attackers get past other security controls. It is necessary to put tools in place that can detect signs of nefarious activities at the endpoint, server, and network layers. On the endpoint level, all users should be running current Endpoint Protection (EPP, or anti-malware) products. Some organizations may benefit from EDR (Endpoint Detection & Response) agents. Servers should be outfitted similarly and should also send event logs to SIEM (Security Information and Event Management) systems. For networks, some organizations have used Intrusion Detection Systems (IDS), which are primarily rule-based and prone to false positives. Next-generation Network Threat Detection & Response (NTDR) tools have the advantage that they utilize machine learning (ML) algorithms to baseline network activities and can therefore better alert on anomalous behavior. Each type of solution has pros and cons, and they all require knowledgeable and experienced analysts to run them effectively.
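The baselining idea behind NTDR tools can be illustrated with a deliberately tiny model: learn the normal traffic volume for a host, then flag observations that deviate too far from it. Real products use far richer ML models over many features; the z-score rule and sample numbers here are only a sketch.

```python
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Learn the 'normal' behavior as a mean and standard deviation."""
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations off baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Bytes transferred per minute (in KB) by one host during normal operation.
normal_traffic = [98.0, 102.0, 100.0, 97.0, 103.0, 101.0, 99.0]
baseline = build_baseline(normal_traffic)

print(is_anomalous(100.0, baseline))   # ordinary volume
print(is_anomalous(500.0, baseline))   # exfiltration-sized spike
```

The same trade-off the text mentions shows up even here: a tighter threshold catches more attacks but raises more false positives, which is why analysts remain essential.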
Deception
This is a newer approach to information protection, derived from the old notion of honeypots. Distributed Deception Platforms (DDPs) deploy virtual resources designed to look attractive to attackers, to lure them away from your valuable assets and into the deception environment for the purposes of containment, faster detection, and examination of attacker TTPs (Tactics, Techniques, and Procedures). DDPs help reduce MTTR (Mean Time To Respond) and provide an advantage to defenders. DDPs are also increasingly needed in enterprises with IoT and medical devices, as these face more attacks and the devices in those environments usually cannot run other security tools.
Disposition
When information is no longer valid and does not need to be retained for legal purposes, it should be removed from active systems. This may mean archival or deletion, depending on the circumstances. The principle of data minimization is a good business practice for limiting liability.
KuppingerCole will further develop the IPLC concept and publish additional research on the subject in the months ahead. Stay tuned! In the meantime, we have a wealth of research on EPP and EDR, access control systems, and data classification tools at KC PLUS.
At VMworld Europe 2019, Pat Gelsinger, CEO of VMware, said that security is fundamentally broken and that the overabundance of vendors is making the problem worse. I'm not sure this is true. Gelsinger had some good lines: that applications which need to be updated and patched on a regular basis should be outlawed by legislation, and that security is too threat-based.
Making security less threat-focused is a good thing
The solution, according to VMware, is simple: build more security into the platform, with the supreme goal of a single security agent running across the entire enterprise. Security therefore should be built in, unified, and focused on the applications, not the threat. That part is true: security should be less threat-focused, but I believe that the security of an organization should also build on risk-based identity management.
When large platform vendors start talking about simplifying security, it inevitably revolves around their platform - in this case a widely used and trusted one. So, what is VMware's solution? Not surprisingly, it consists of putting apps and data at the center of access points, endpoints, identity, workloads, cloud and the network - all protected by the “intrinsic security” layer, also known as Carbon Black, which VMware has now fully acquired. VMware believes this will succeed because it will use big data analytics with a single agent that monitors all endpoints, with IAM lifecycle management built into the infrastructure.
“The Carbon Black platform will deliver a highly differentiated intrinsic security platform across network, endpoint, workload, identity, cloud and analytics. We believe this will bring a fundamentally new paradigm to the security industry,” said Gelsinger.
It ain’t what you do, it's the way that you do it
It’s obviously a compelling prospect, but is it realistic? VMware is right to suggest that two major blocks to security are bolted-on solutions and siloed platforms. But it would be more accurate to say that badly chosen bolt-on solutions are a problem, and that solutions running within silos are the result of little or no risk assessment and bad planning. There are indeed thousands of security vendors out there, which VMware illustrated with a couple of slides featuring hundreds of logos (pity the poor guy who had to put that together).
The fundamental reason that so many solutions exist is that so many security and identity challenges exist, and these vary with the type and size of the organization. Digital transformation has now added extra challenges. The demands of securing data, identity and authentication are fluid and require innovation in the market, which is why we cover it. Gelsinger was right to say that consolidation must come within organizations and in the vendor space - that is normal, and VMware's acquisition is a good example of it. But consolidation is often followed by market innovation from startups that serve the new demands that the process of consolidation leaves behind.
Super solutions are not a new idea
Which brings us to the crux of this so-called intrinsic security proposition. In simple terms, chucking a semi-intelligent big data analytics engine around your cloud and virtualised infrastructures sounds great: the real-time analysis engine keeps all the bad stuff out without relying solely on old-fashioned AV and signature-based protection. Except I don't think that is possible. It will not solve all the granular problems around IAM, such as privileged accounts and credentials embedded in code. Intrinsic Security sounds very much like a super firewall for VMware - useful to have, but it won't stop organizations that run on VMware from eventually going back to that slide with so many other vendor logos...
For more on Infrastructure as Service please see our Leadership Compass report.
Demand forecasting is one of the most crucial factors that determine the success of every business, online or offline, retail or wholesale. Being able to predict future customer behavior is essential for optimal purchase planning, supply chain management, reducing potential risks and improving profit margins. In some form, demand prediction has existed since the dawn of civilization, just as long as commerce itself.
Yet, even nowadays, when businesses have much more historical data available for analysis and a broad range of statistical methods to crunch it, demand forecasting is still not hard science, often relying on expert decisions based on intuition alone. With all the hype surrounding artificial intelligence’s potential applications in just about any line of business, it’s no wonder then that many experts believe it will have the biggest impact on demand planning as well.
Benefits of AI applications in demand forecasting
But what exactly are the potential benefits of this new approach as opposed to traditional methods? The most obvious one is efficiency due to the elimination of the human factor. Instead of relying on intuition, machine learning-based methods operate on quantifiable data, both from the business's own operational history and from various market intelligence that may influence demand fluctuations (like competitor activities, price changes or even the weather).
On the other hand, most traditional statistical demand prediction methods were designed to approximate specific use cases: quick vs. slow fluctuations, large vs. small businesses, and so on. Selecting the right combination of those methods requires you to deal with a lot of questions you currently might not even anticipate, let alone know the right answers to. Machine learning-based business analytics solutions are known for helping companies discover previously unknown patterns in their historical data, and thus for removing a substantial part of the guesswork from predictions.
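The contrast between a fixed statistical method and a model fitted to the data can be shown on a toy example. Below, a naive moving average is compared with a simple least-squares trend fit on made-up monthly demand figures; real ML demand-forecasting models would also ingest external signals such as prices, competitor activity, or weather.

```python
def moving_average_forecast(history: list[float], window: int = 3) -> float:
    """Classic statistical baseline: average of the last `window` periods."""
    return sum(history[-window:]) / window

def trend_forecast(history: list[float]) -> float:
    """Ordinary least-squares fit of demand against time, extrapolated one step."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # forecast for the next period

# Steadily growing monthly demand: a fitted model follows the growth,
# while the moving average lags behind it.
demand = [100.0, 110.0, 120.0, 130.0, 140.0, 150.0]
print(moving_average_forecast(demand))  # 140.0
print(trend_forecast(demand))           # 160.0
```

On this trending series the moving average systematically under-predicts, which is precisely the kind of use-case mismatch that fitted, data-driven models avoid.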
Last but not least, the market already has quite a few ready-made solutions to offer, either as standalone platforms or as part of bigger business intelligence suites. You don't need to reinvent the wheel anymore. Just connect one of those solutions to your historical data, and the rest, including multiple sources of external market intelligence, will be at your fingertips instantly.
What about challenges and limitations?
Of course, one has to consider the potential challenges of this approach as well. The biggest one has nothing to do with AI: it's all about the availability and quality of your own data. Machine learning models require lots of input to deliver quality results, and far from every company has all this information in a form ready for sharing yet. For many, the journey towards an AI-powered future has to start with breaking down the silos and making historical data unified and consistent.
This does not apply just to sales operations, by the way. Efficient demand prediction can only work when data across all business units can be correlated: including logistics, marketing, and others. If your (or your suppliers’, for that matter) primary analytics tool is still Excel, thinking about artificial intelligence is probably a bit premature.
A major inherent problem of many AI applications is explainability. For many, not being able to understand how exactly a particular prediction has been reached might be a major cause of distrust. Of course, this is primarily an organizational and cultural challenge, but a major challenge, nevertheless.
However, these challenges should not be seen as an excuse to ignore the new AI-based solutions completely. Artificial intelligence for demand forecasting is no longer just a theory. Businesses across various verticals are already using it, with varying but undeniably positive results. Researchers claim that machine learning methods can achieve up to 50% better accuracy than purely statistical approaches, to say nothing of human intuition.
If your company is not yet ready to embrace AI, make sure you start addressing your shortcomings before your competitors do. In the age of digital transformation, having business processes and business data agile and available for new technologies is a matter of survival, after all. More efficient demand forecasting is just one of the benefits you'll be able to reap afterward.
Feel free to browse our Focus Area: AI for the Future of your Business for more related content.
The market shift to cloud-based security services was highlighted at Ignite Europe 2019, held by Palo Alto Networks in Barcelona, where the company announced several product enhancements in an effort to round out its offerings to meet what it expects will be growing market demand.
A key element of its go-to-market strategy is a response to market demand to reduce the complexity of security and the number of suppliers, by adding cloud-delivered Software-Defined Wide Area Network (SD-WAN) and data loss prevention (DLP) capabilities to its Prisma Access product.
The move not only removes the need for separate SD-WAN suppliers but also rounds out Prisma Access to deliver converged networking and security capabilities in the cloud to address the limitations of traditional architectures.
Palo Alto Networks is not alone in going after this market, but it is the latest player in the so-called Secure Access Service Edge (SASE) market to add SD-WAN capabilities to combine edge computing, security and wide-area networking (WAN) into a single cloud-managed platform.
The move builds on core company strengths and is therefore logical. Failure to have done so would have missed a market opportunity and would have been surprising.
Data drives better threat detection
In line with the company’s belief that threat intelligence should be shared, that security operations centers (SOCs) need to be more data-driven to support current and future threats, and the company’s drive to enable greater interoperability between security products, the second key announcement at Ignite Europe 2019 is the extension of its Cortex XDR detection and response application to third-party data sources.
In addition to data from Palo Alto Networks and other recognized industry sources, Cortex XDR 2.0 (which automates threat hunting and investigations, and consolidates alerts) is designed to consume logs from third-party security products for analytics and investigations, starting with logs from Check Point, soon to be followed by Cisco, Fortinet and Forcepoint.
SD-WAN market shakeup
The expansion by Palo Alto Networks into the SD-WAN market makes commercial sense: it rounds out the Prisma Access offering to meet market demands for simpler, easier, consistent security with fewer suppliers, and supports the growing number of Managed Security Service Providers.
However, it also signals a change in the relationship between Palo Alto Networks and its SD-WAN integration partners and in the market overall, although executives are downplaying the negative impact on SD-WAN partners, saying relationships are continually evolving and will inevitably change as market needs change.
The big takeaway from Ignite Europe 2019 is that Palo Alto Networks is broadening its brands and capabilities as a cloud-based security services company, and that in future, selling through partners such as Managed Service Providers will be a critical part of its go-to-market strategy.
It will be interesting to see whether the bet by Palo Alto Networks pays off to steal a march on competitors also rushing to the market with similar offerings such as SD-WAN firm Open Systems, networking firm Cato Networks, IT automation and security firm Infoblox, and virtualization firm VMware.
Do you belong to the group of people who would like to completely retire all obsolete solutions and replace existing solutions with new ones in a Big Bang? Do you do the same with company infrastructures? Then you don't need to read any further here. Please tell us later how things worked out for you.
Or, at the other extreme, do you belong to those companies in which infrastructures are developed further only in response to current challenges, audit findings, or particularly prestigious projects that come with a budget?
You, however, should read on, because we want to give you argumentative backing for a more comprehensive approach.
Identity infrastructure is the basis of enterprise security
In previous articles we have introduced the Identity Fabric, a concept that serves as a viable foundation for enterprise architectures for Identity and Access Management (IAM) in the digital age.
This concept, which KuppingerCole places at the center of its definition of an IAM blueprint, expressly starts from a central assumption: practically every company today operates an identity infrastructure. This infrastructure virtually always forms the central basis of enterprise security, ensures basic compliance and governance, and helps with requesting authorizations, and perhaps even with withdrawing them when they are no longer needed.
As a result, existing infrastructures already meet basic requirements today, but those requirements were often defined in earlier phases, for the company as it existed then.
The demand for new ways of managing identities
Yes, we too cannot avoid the buzzword "digitalization" here, because it is precisely this context that gives rise to requirements that traditional systems cannot adequately cover. And just adding some additional components (a little CIAM here, some MFA there, or the Azure AD that came with the Office 365 installation anyway) won't help. The way we communicate has changed, and companies are advancing their operations with entirely new business models and processes. New ways of working together and communicating demand new ways of managing identities, satisfying regulatory requirements and delivering secure processes, not least to protect customers and indeed your very own business.
What should you do if your own IAM (only) follows a classic enterprise focus, i.e. fulfills the following tasks very well?
- Traditional Lifecycles
- Traditional Provisioning
- Traditional Access Governance
- Traditional Authentication (Username/Password, Hardware Tokens, VPNs)
- Traditional Authorization (Roles, Roles, Roles)
- Consumer Identities (somehow)
And what should you do if the business wants you, as the system owner of that IAM, to meet the following requirements?
- High Flexibility
- High Delivery Speed for New Digital Services
- Software as a Service
- Container and Orchestration
- Identity API and API Security
- Security and Zero Trust
Building parallel infrastructures alongside the existing ones is now widely recognized as the wrong approach.
Convert existing architectures during operation
It is therefore necessary to convert existing architectures gently, so that ongoing operation remains guaranteed at all times. Ideally, this process also optimizes the architecture in terms of efficiency and cost, while adding the missing functionality in a scalable and comprehensive manner.
Figuratively speaking, you have to renovate your house while you continue to live in it. Nobody who has fully understood digitalization will deny that the proper management of all relevant identities, from customers to employees and partners to devices, is one of the central enabling technologies for this, if not the central one. But on the way there, everything already achieved must continue to be maintained, quick wins must provide proof that the company is on the right track, and an understanding of the big picture (the "blueprint") must not be lost.
If you want to find out more, read the "Leadership Brief: Identity Fabrics - Connecting Anyone to Every Service - 80204" as a first introduction to this comprehensive and promising concept. The KuppingerCole "Architecture Blueprint Identity and Access Management - 72550" has just been published and aims to provide you with the conceptual foundation for sustainably transforming existing IAM infrastructures into a future-proof basic technology for the 2020s and beyond.
In addition, whitepapers currently in preparation and soon to be published (watch this space; we will provide links in one of the upcoming issues of our “Analysts’ view IAM”) will provide essential recommendations for initializing and implementing such a comprehensive transformation program.
Over the past months, KuppingerCole has supported successful projects in which existing, powerful but functionally insufficient IAM architectures were set on the path to a sustained transformation into a future-proof infrastructure. The underlying concepts can be found in the documents above; if you would like us to guide you along this path, please feel free to talk to us about possible support.
Feel free to browse our Focus Area: The Future of Identity & Access Management for more related content.
There is more to the cloud than AWS, Azure, IBM and Google, according to OVHCloud - the new name for OVH as it celebrates its 20th anniversary. While the big four have carved up the public cloud between them, the French cloud specialist believes that business needs are changing, giving it an opportunity in the enterprise market it is now targeting. In short, OVHCloud believes there is a small but discernible shift back to the private cloud, driven by security and compliance imperatives.
That does not mean that OVHCloud is abandoning the public cloud to the Americans. At October’s OVHCloud Summit in Paris, CEO Michel Paulin spoke forcefully of the need for Europe (for that, read France) to compete in this space. “We believe we can take on the US and Chinese hegemony. Europe has all the talents needed to build a digital channel that can rival all the other continents,” he said.
OVHCloud needs to shift focus and mature
The company is growing, with 2,200 employees and revenue estimated at around $500m. For comparison, AWS posted revenue of $9bn for its third quarter of 2019 – spot the difference. OVHCloud is doubling down on the Software as a Service (SaaS) market, with 100 SaaS products announced for a new dedicated marketplace. The company says the focus will be on collaboration and productivity tools, web presence and cloud communication. On the security front, OVHCloud is promising the following soon: Managed Private Registry, Public Cloud Object Storage Encryption and K8s private network.
If OVHCloud is to take even a chunk of the Big Four’s market, it needs to shift focus and mature. It believes it can, by moving from what it terms the “startup world” of digital-native companies into the traditional enterprise sector (without neglecting its cloud-native customers). Customers gained so far include insurance, aviation and big IT services companies, along with some finance and retail customers. OVHCloud believes the enterprise market is lagging behind its traditional customers in digital innovation and transformation.
Security reasons and better data oversight bring customers back to private clouds
Crucially, the company thinks that enterprise customers are coming back to private clouds for security reasons and better oversight of data in the age of big compliance. At the same time, it predicts that the future of cloud should remain open and multi-cloud, something I and others would agree with.
In terms of business strategy, OVHCloud is moving from a product approach to a solution approach along with the shift towards enterprise customers – this makes sense. OVHCloud makes much of its ability to build its own servers and cooling systems, and sees this as a USP, claiming the industry’s lowest TCO for energy usage. Such an advantage depends on scale, however, and in an open multi-cloud, multi-vendor market, the cost savings may make little difference to enterprise customers. But the green message may play well in today’s climate-conscious market for some buyers in the startup crowd, and potentially in the more digital parts of larger enterprises.
For more insight into the enterprise cloud market please read our reports or contact one of our analysts.
From what used to be a purely technical concept created to make developers’ lives easier, Application Programming Interfaces (APIs) have evolved into one of the foundations of modern digital business. Today, APIs can be found everywhere – in homes and mobile devices, in corporate networks and in the cloud, even in industrial environments, to say nothing of the Internet of Things.
When dealing with APIs, security should not be an afterthought
In a world where digital information is one of the “crown jewels” of many modern businesses (and even the primary source of revenue for some), APIs are now powering the logistics of delivering digital products to partners and customers. Almost every software product or cloud service now comes with a set of APIs for management, integration, monitoring or a multitude of other purposes.
As often happens in such scenarios, security quickly becomes an afterthought at best or, even worse, is seen as a nuisance and an obstacle on the road to success. The success of an API is measured by its adoption, and security mechanisms are seen as friction that limits this adoption. There are also several common misconceptions around the very notion of API security, notably the idea that existing security products like web application firewalls are perfectly capable of addressing API-related risks.
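To illustrate why that last assumption is risky, consider the difference between signature matching and contract enforcement. A WAF typically scans traffic for known attack patterns, while API-specific protection can enforce a positive security model, rejecting any request that deviates from the API's declared contract. The sketch below is a minimal, hypothetical illustration in Python; the schema, field names and payloads are invented for the example and do not describe any specific product:

```python
# Illustrative positive-security check for an API endpoint: only fields
# declared in the contract are accepted, and types must match.
# The schema and payloads are hypothetical examples.

EXPECTED_SCHEMA = {
    "username": str,
    "email": str,
    "age": int,
}

def validate_request(payload: dict) -> list:
    """Return a list of contract violations; an empty list means the request conforms."""
    violations = []
    for field in payload:
        if field not in EXPECTED_SCHEMA:
            # A generic WAF signature would not flag this: the request is
            # syntactically harmless, but it violates the API contract
            # (e.g. a mass-assignment attempt on an "is_admin" flag).
            violations.append(f"unexpected field: {field}")
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(f"wrong type for field: {field}")
    return violations

good = {"username": "alice", "email": "a@example.com", "age": 30}
bad = {"username": "mallory", "email": "m@example.com", "age": 30, "is_admin": True}

print(validate_request(good))  # []
print(validate_request(bad))   # ['unexpected field: is_admin']
```

A request smuggling an extra `is_admin` flag contains no attack signature a WAF would recognize, yet it is exactly the kind of contract violation that API-layer validation is designed to catch.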
An integrated API security strategy is indispensable
For an organization, creating a well-planned strategy and reliable infrastructure to expose its business functionality securely for consumption by partners, customers, and developers is a significant challenge, and one that has to be addressed not just at the gateway level, but along the whole information chain from backend systems to endpoint applications. It is therefore obvious that point solutions addressing only specific links in this chain are not viable in the long term.
Only by combining proactive application security measures for developers, continuous activity monitoring and deep API-specific threat analysis for operations teams, and smart, risk-based, actionable automation for security analysts can one ensure consistent management, governance and security of corporate APIs, and thus the continuity of the business processes that depend on them.
Security challenges often remain underestimated
We have long recognized the API Economy as one of the most important current IT trends. Rapidly growing demand for exposing and consuming APIs, which enables organizations to create new business models and connect with partners and customers, has tipped the industry towards the lightweight RESTful APIs commonly used today.
Unfortunately, many organizations tend to underestimate potential security challenges of opening up their APIs without a security strategy and infrastructure in place. Such popular emerging technologies as the Internet of Things or Software Defined Computing Infrastructure (SDCI), which rely significantly on API ecosystems, are also bringing new security challenges with them. New distributed application architectures like those based on microservices, are introducing their own share of technical and business problems as well.
KuppingerCole’s analysis looks primarily at integrated API management platforms, with a strong focus on security features either embedded directly into these solutions or provided by specialized third-party tools closely integrated with them.
The API market has changed dramatically within just a few years
When we started following the API security market over five years ago, the industry was still at a rather early, emerging stage: most large vendors focused primarily on operational capabilities, threat protection functions built into API management platforms were rudimentary, and dedicated API security solutions were almost non-existent. In just a few years, the market has changed dramatically.
On one hand, core API management capabilities are quickly becoming a commodity: every cloud service provider, for example, now offers at least some basic API gateway functionality built into its cloud platform, utilizing its native identity management, monitoring, and analytics capabilities. Enterprise-focused API management vendors are therefore looking to expand the coverage of their solutions to address new business, security or compliance challenges. Some more forward-looking vendors no longer even consider API management a separate discipline within IT, and offer their existing tools as part of larger enterprise integration platforms.
On the other hand, growing public awareness of API security challenges has dramatically increased the demand for specialized tools for securing existing APIs. This has led to the emergence of numerous security-focused startups offering innovative solutions, usually within a single area of the API security discipline.
Despite consolidation, there is no “one stop shop” for API security yet
Unfortunately, the field of API security is very broad and complicated, and very few (if any) vendors are currently capable of delivering a comprehensive security solution that covers all required functional areas. Although the market is already showing signs of consolidation, with larger vendors acquiring these startups and incorporating their technologies into existing products, expecting to find a “one stop shop” for API security is still a bit premature.
Although the current state of the API management and security market is radically different from the situation just a few years ago, and the overall developments are extremely positive, indicating growing demand for more universal and convenient tools and increasing quality of available solutions, the market has yet to reach anything resembling maturity. It is thus even more important for companies developing their API strategies to be aware of current developments and to look for solutions that implement the required capabilities and integrate well with their other existing tools and processes.
Hybrid deployment model is the only flexible and future-proof security option
Since most API management solutions are expected to provide management and protection for APIs regardless of where they are deployed – on-premises, in any cloud or within containerized or serverless environments – the very notion of the delivery model becomes complicated.
Most API management platforms are designed to be loosely coupled, flexible, scalable and environment-agnostic, with a goal to provide consistent functional coverage for all types of APIs and other services. While the gateway-based deployment model remains the most widespread, with API gateways deployed either closer to existing backends or to API consumers, modern application architectures may require alternative deployment scenarios like service meshes for microservices.
Dedicated API security solutions that rely on real-time monitoring and analytics may either be deployed in-line, intercepting API traffic, or rely on out-of-band communications with API management platforms. Management consoles, developer portals, analytics platforms and many other components, however, are usually deployed in the cloud, to enable a single-pane-of-glass view across heterogeneous deployments. A growing number of additional capabilities are now being offered as Software-as-a-Service with consumption-based licensing.
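As a rough illustration of what such out-of-band analytics over collected API traffic can look like, the sketch below scans access-log records for clients whose call volume stands out. The log format, client names and fixed threshold are assumptions made for this example; real products would use per-client baselines, time windows and behavioral models rather than a static cutoff:

```python
from collections import Counter

# Hypothetical API access-log records: (client_id, endpoint) pairs,
# as they might be exported out-of-band by an API management platform.
LOG = [
    ("app-1", "/orders"), ("app-1", "/orders"),
    ("app-2", "/users"),
] + [("scraper", "/users")] * 50  # one client hammering an endpoint

def flag_anomalous_clients(records, threshold=10):
    """Flag clients whose request count exceeds a simple static threshold.

    A fixed threshold keeps the sketch short; production analytics would
    compare each client against its own historical behavior instead.
    """
    counts = Counter(client for client, _ in records)
    return [client for client, n in counts.items() if n > threshold]

print(flag_anomalous_clients(LOG))  # ['scraper']
```

Because this analysis runs on exported logs rather than in the request path, it adds no latency to API calls, which is precisely the trade-off between out-of-band analytics and in-line interception.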
In short, for a comprehensive API management and security architecture, a hybrid deployment model is the only flexible and future-proof option. Still, for highly sensitive or regulated environments, customers may opt for a fully on-premises deployment.
In our upcoming Leadership Compass on API Management and Security, we evaluate products according to multiple key functional areas of API management and security solutions. These include API Lifecycle Management core capabilities, flexibility of Deployment and Integration, developer engagement with Developer Portal and Tools, strength and flexibility of Identity and Access Control, API Vulnerability Management for proactive hardening of APIs, Real-time Security Intelligence for detecting ongoing attacks, Integrity and Threat Protection means for securing the data processed by APIs, and, last but not least, each solution’s Scalability and Performance.
The preliminary results of our comparison will be presented at our Cybersecurity Leadership Summit, which will take place next week in Berlin.
AI for the Future of your Business: Effective, Safe, Secure & Ethical
Everything we admire, love, need to survive, and that brings us further in creating a better future with a human face is and will be a result of intelligence. Synthesizing and amplifying our human intelligence therefore has the potential of leading us into a new era of prosperity like we have not seen before, if we succeed in keeping AI Safe, Secure and Ethical. Since the very beginning of industrialization, and even before, we have been striving to structure our work in a way that it becomes accessible for [...]