Just two weeks after One Identity acquired Balabit, news spread about the next acquisition in this market segment: Bomgar is acquiring Lieberman Software. Both vendors have been active in this market for a while. Bomgar entered it a couple of years ago, building on a long history in Remote Control solutions, while Lieberman Software is one of the Privilege Management veterans.
Looking at their portfolios, there is some functional overlap. However, while Bomgar’s strength comes from Session Management related to its Remote Control features, Lieberman Software is stronger in Shared Account Password Management and related capabilities. By joining forces, the two companies will be able to deliver strong capabilities across most areas of Privilege Management.
With the second such merger in a row, the dynamics of the Privilege Management market are changing. Aside from the established leaders, there are now two vendors about to bring strong combined offerings to the market. This will foster competition among the leaders, but also increase pressure on smaller vendors, which need to rethink their positioning and strategy to find their sweet spots in the market. From a customer perspective, however, more competition and more choice are always a good thing.
Yesterday, One Identity announced that they have acquired Balabit, a company specializing in Privilege Management, headquartered in Luxembourg but with their main team located in Hungary. One Identity, a Quest Software business, counts amongst the leading vendors in the Identity Management market. Aside from their flagship product One Identity Manager, they deliver a number of other products, including Safeguard as their Privilege Management offering. Balabit, on the other hand, is a pure-play Privilege Management vendor, offering several products with particular strengths in Session Management and Privileged Behavior Analytics.
One Identity already has a technical integration with Balabit’s Session Management product as part of their Safeguard offering. With the acquisition, One Identity gains direct access not only to one of the leading Session Management technologies, but also to Balabit’s Privileged Behavior Analytics capabilities. Combined with the One Identity Safeguard capabilities, this results in a comprehensive Privilege Management offering, from Shared Account Password Management to Session Management and Privileged Behavior Analytics. Given that some integration already exists, we expect One Identity to progress fast on creating a fully integrated solution. Another advantage might arise from the fact that a significant portion of the One Identity Manager team is still based in Germany, geographically relatively close to Hungary.
The acquisition strengthens the position of One Identity in both the Privilege Management market and the overall Identity Management market. For Privilege Management, the combined portfolio and the expected close integration move One Identity into the group of market leaders, with respect to both the number of customers and technical capabilities. One Identity becomes a clear pick for every shortlist when evaluating vendors in this market segment.
Looking at the overall Identity Management market, One Identity improves its position as one of the vendors that cover all major areas of that market, with particular strengths in IGA (Identity Governance and Administration, i.e. Identity Provisioning and Access Governance) and Privilege Management, but also in Identity Federation and Cloud SSO, plus other capabilities such as cloud-based MFA (Multi-Factor Authentication). For companies that focus on single sourcing for Identity Management, or at least on one core supplier, One Identity becomes an even more interesting choice now.
The acquisition underpins the strategy that One Identity had announced after the split of Quest Software from Dell and the creation of One Identity as a separate business of Quest Software: playing a leading role in the overall Identity Management market as a vendor that covers all major areas of this market segment.
It looks like we IT people have received more New Year presents than expected for 2018! The year has barely started, but we already have two massive security problems on our hands: vulnerabilities that dwarf anything discovered previously, even the notorious Heartbleed bug or the KRACK weakness in WiFi protocols. Discovered back in early 2017 by several independent groups of researchers, these vulnerabilities were understandably kept from the general public to give hardware and operating system vendors time to analyze their effects and develop countermeasures, and to prevent hackers from creating zero-day exploits.
Unfortunately, the sheer number of patches recently made to the Linux kernel alone was enough to raise the suspicions of many security experts. This led to a wave of speculation about the possible reasons behind them: does it have something to do with the NSA? Will it make all computers in the world run 30% slower? Why is Intel’s CEO selling his stock? In the end, the researchers were forced to release their findings a week early just to put an end to the wild rumors. So, what is this all about, after all?
Technically speaking, neither Meltdown nor Spectre is caused by a bug or vulnerability in the traditional sense. Rather, both exploit unforeseen side effects of speculative execution, a core feature present in most modern processors that is used to significantly improve calculation performance. The idea behind speculative execution is actually quite simple: every time a processor must check a condition to decide which part of the code to run, instead of waiting until some data is loaded from memory (which may take hundreds of CPU cycles), it makes an educated guess and starts executing the next instructions immediately. If the guess later proves wrong, the processor simply discards those instructions and reverts its state to a previously saved checkpoint; if it was correct, the resulting performance gain can be significant. Processors have been designed this way for over 20 years, and the potential security implications of incorrect speculative execution were never considered important.
Well, not anymore. Researchers have discovered multiple methods of exploiting the side effects of speculative execution that allow malicious programs to steal sensitive data they normally should not have access to. And since the root cause of the problem lies in the fundamental design of a wide range of modern Intel, AMD and ARM processors, nearly every system using those chips is affected, including desktops, laptops, servers, virtual machines and cloud services. There is also no way to detect or block attacks using these exploits with an antivirus or any other software.
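The information flow behind these exploits can be illustrated with a deliberately simplified sketch. This toy Python simulation is not a working exploit: a plain set stands in for CPU cache lines, and membership stands in for "this access was fast". The point it makes is that the architecturally discarded speculative access still changes cache state, and that state can then be read back by timing memory accesses.

```python
# Toy model of the cache side channel behind Meltdown/Spectre.
# Not an exploit -- a simulation of the information flow only.

SECRET = 42        # hypothetical secret byte the attacker wants to learn
cache = set()      # which "cache lines" are currently warm

def victim_speculative_access(secret_byte):
    """Under (simulated) misprediction, the victim touches a cache line
    whose index depends on the secret. The architectural result is
    discarded, but the microarchitectural cache state survives."""
    cache.add(secret_byte)

def attacker_probe():
    """The attacker 'times' an access to each of the 256 candidate
    lines; the warm (i.e. fast) one reveals the secret byte."""
    for candidate in range(256):
        if candidate in cache:   # stands in for a measurably fast access
            return candidate
    return None

victim_speculative_access(SECRET)
recovered = attacker_probe()
print(recovered)  # the attacker learns the secret without ever reading it directly
```

In real hardware the "probe" is a timing measurement against a large probe array (the Flush+Reload technique), but the principle is exactly this: the secret leaks through which memory location became cached.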
The only way to fully mitigate all variants of the Spectre exploit is to modify every program explicitly to disable speculative execution in sensitive places. There is some consolation in the fact that exploiting this vulnerability is quite complicated and there is no way to affect the operating system kernel this way. This cannot be said about the Meltdown vulnerability, however.
Apparently, Intel processors take so many liberties when applying performance optimizations to executed code that the same root cause gives hackers access to arbitrary system memory locations, rendering (“melting”) all memory isolation features of modern operating systems completely useless. When running on an Intel processor, malicious code can leak sensitive data from any process or the OS kernel. In a virtualized environment, a guest process can leak data from the host operating system. Needless to say, this scenario is especially catastrophic for cloud service providers, where data sovereignty is not just a technical requirement, but a key legal and compliance foundation for their business model.
Luckily, there is a method of mitigating the Meltdown vulnerability completely at the operating system level, and that is exactly what Microsoft, Apple and the Linux kernel developers have been working on in recent months. Unfortunately, enforcing separation between kernel and user-space memory also means undoing performance optimizations that processors and OS kernels rely on to make switching between different execution modes quicker. According to independent tests, depending on the application these losses may be anywhere between 5 and 30%. Again, this may be unnoticeable to average office users, but it can be dramatic for cloud environments, where computing resources are billed by execution time. How would you like your monthly bill suddenly increased by 30% for… nothing, really?
Unfortunately, there is no other way to deal with this problem. The first and most important recommendation is as usual: keep your systems up-to-date with the latest patches. Update your browsers. Update your development tools. Check the advisories published by your cloud service provider. Plan your mitigation measures strategically.
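On Linux systems, recent kernels report the mitigation status for these vulnerabilities under sysfs, so part of "keeping your systems up-to-date" is simply checking what the kernel says. A small sketch (the path assumes a kernel new enough to expose this interface; on other systems the function just returns an empty result):

```python
from pathlib import Path

def read_cpu_vulnerabilities(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return a {vulnerability: kernel-reported status} mapping, or an
    empty dict when the interface is absent (non-Linux or older kernel)."""
    root = Path(base)
    if not root.is_dir():
        return {}
    return {p.name: p.read_text().strip()
            for p in sorted(root.iterdir()) if p.is_file()}

for name, status in read_cpu_vulnerabilities().items():
    print(f"{name}: {status}")   # e.g. "meltdown: Mitigation: PTI"
```

A status line starting with "Mitigation:" means the kernel has a countermeasure active; "Vulnerable" means you still have patching to do.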
And keep a cool head – conspiracy theories are fun, but not productive in any way. And by the way: Intel officially states that their CEO selling stocks in October has nothing to do with this vulnerability.
Looking at the current investor craze around blockchain’s primary use case, Bitcoin, it sometimes gets a bit difficult to think beyond the bubble and track those blockchain projects that are indeed on their way to becoming useful: changing the way we sell and buy things, digitally move value, immutably store any kind of documents and data, consume information, create and manage digital IDs, and otherwise influencing and changing most aspects of our social, political and economic interactions. What we see happening in the crypto world is an explosion of creativity and innovation, well funded through initial coin offerings (ICOs). Most of the blockchain projects we are observing show a high potential for disrupting whole industries.
Blockchain in Cybersecurity
Based on decades of research in cryptography and resilience, cybersecurity and blockchain technology share the same roots and look like natural allies. By offering a totally new way of securing information integrity, performing transactions and creating trust relationships between parties that don’t know each other, blockchains are secure by design and well suited to use cases with high security requirements. It is therefore easy to understand why DARPA (the US Defense Advanced Research Projects Agency) has been funding a number of interesting blockchain startups experimenting with secure, private and failsafe communication infrastructures. DARPA’s program manager behind the blockchain effort, Timothy Booher, describes the paradigm shift blockchain implies for cybersecurity well in an analogy: “Instead of trying to make the walls of a castle as tall as possible to prevent an intruder from getting in, it’s more important to know if anyone has been inside the castle, and what they’re doing there.”
Blockchain Identity & Privacy: It all Depends on the Governance Model
Managing digital identities, and linking them to real humans (identification), is becoming a primary playground for blockchain technology: it is fundamental to any blockchain use case, and it seems not only to reduce the vulnerabilities of traditional infrastructures, but finally to offer a solution that gives control over personal information back to the user it belongs to (“Self-Sovereign Identity” – SSI). However, the assumption that blockchain is the only way to repair the missing internet identity layer would be as wrong as the opposite assumption. There is no doubt that blockchain will change the way we deal with identity and privacy, but some vital challenges need to be solved first, with Blockchain Governance being the one that matters most, as all other problems being discussed depend on selecting the right governance model:
- How do we deal with change? We have been in the IT space long enough to know that the only constant is permanent change. Who would decide on “updating” the blockchain? How much of the pure-play blockchain do we need to give up to avoid messing with hard forks?
- Scalability: The proof-of-work based Bitcoin blockchain has its limits. Is proof-of-stake the only viable alternative or will we soon see massive parallel blockchain infrastructures?
- Private vs. Public, "permissioned vs. unpermissioned": Are we facing a future of walled blockchain gardens?
- Off-Chain vs. On-Chain Governance: What are the risks of on-chain Governance? Will self-amending ledgers be the ones that rule the identity field?
- Future Governance Models based on prediction markets
Shaping the Future of Blockchain ID, Privacy & Security: Be part of it!
The Blockchain discussion will continue to be a core element in KuppingerCole’s Upcoming Events
For the first time ever, we’ll offer a “Blockchain ID Innovation Night” at #EIC18, where you will meet developers, evangelists and experts from most or all of the blockchain ID projects out there.
McAfee, founded in 1987, has a long history in the world of cyber-security. Acquired by Intel in 2010, it was spun back out as McAfee LLC in April 2017. According to the announcement on April 23rd, 2017 by CEO Christopher D. Young, the new company will be “One that promises customers cybersecurity outcomes, not fragmented products.” So it is interesting to consider what the acquisition of Skyhigh Networks, announced by McAfee on November 27th, will mean.
Currently, McAfee solutions cover areas that include: antimalware, endpoint protection, network security, cloud security, database security, endpoint detection and response, as well as data protection. Skyhigh Networks are well known for their CASB (Cloud Access Security Broker) product. So how does this acquisition fit into the McAfee portfolio?
Well, the nature of the cyber-risks that organizations are facing has changed. Organizations are increasingly using cloud services because of the benefits that they can bring in terms of speed to deployment, flexibility and price. However, the governance over the use of these services is not well integrated into the normal organizational IT processes and technologies; CASBs address these challenges. They provide security controls that are not available through existing security devices such as Enterprise Network Firewalls, Web Application Firewalls and other forms of web access gateways. They provide a point of control over access to cloud services by any user and from any device. They help to demonstrate that the organizational use of cloud services meets with regulatory compliance needs.
In KuppingerCole’s opinion, the functionality to manage access to cloud services and to control the data they hold should be integrated with the normal access governance and cyber-security tools used by organizations. However, the vendors of these tools were slow to develop the required capabilities, and the CASB market evolved to plug this gap. The McAfee acquisition of Skyhigh Networks is the latest of several recent examples of CASBs being acquired by major security software and hardware vendors.
The diagram illustrates how the functions that CASBs provide fit into the overall cloud governance process. These basic functionalities are:
- Discovery of what cloud services are being used, by whom and for what data.
- Control over who can use which services and what data can be transferred.
- Protection of data in the cloud against unauthorized access and leakage.
- Regulatory compliance and protection against cyber threats through the above controls.
So, in this analysis, CASBs are closer to Access Governance solutions than to traditional cyber-security tools. They recognize that identity and access management is the new cyber frontier, and that cyber-defense needs to operate at this level. By providing these functions, Skyhigh Networks offers a solution that is complementary to those already offered by McAfee and extends McAfee’s capabilities in the direction needed to meet the needs of the cloud-enabled, agile enterprise.
The Skyhigh Networks CASB provides comprehensive functionality that strongly matches the requirements described above. It is also featured in the leadership segment of KuppingerCole’s Leadership Compass: Cloud Access Security Brokers - 72534. This acquisition is consistent with KuppingerCole’s view on how cyber-security vendors need to evolve to meet the challenges of cloud usage. Going forward, organizations need a way to provide consistent access governance for both on-premises and cloud-based services. This requires functions such as segregation of duties, attestation of access rights and other compliance-related governance aspects. Therefore, in the longer term, CASBs need to evolve in this direction. It will be interesting to watch how McAfee integrates the Skyhigh product and how the McAfee offering evolves in the future.
You have heard it all before: May 25th, 2018, enormous fines, “you have to act now”, the “right to be forgotten”, DPOs and breach notifications. Every manufacturer whose marketing database contains your data will send you information, whitepapers, webinars, product information and reminders about GDPR. And of course they can “help” you get to compliance. So you have set up a filter in your mail client that sorts GDPR messages directly into spam, and #gdpr is muted in your Twitter client.
Because you have started your journey towards compliance to GDPR early? Compliance activities have long been established and your employees are informed? Consent management is not just theory? Data Leakage is prevented, monitored, detected and if it does occur, communicated appropriately?
But there might still be a good reason to read on: unlike other regulations, there is no regular inspection of compliance with the requirements. Rather, individuals (including customers, employees or other relevant data subjects) and the competent supervisory authorities can make enquiries if alleged or actual omissions or offences are to be investigated. However, there is as yet no regular, permanent seal of quality that proves GDPR compliance.
It is difficult to identify sufficient indicators of good preparation. Yes, vendors and integrators provide some basic questionnaires… but you might still need a neutral set of criteria for determining the maturity level of your organization's readiness in areas of compliance with regulatory or industry-specific regulations or frameworks. To support such reviews, KuppingerCole provides Maturity Level Matrixes specifically targeted at distinct areas of the IT market, in this case GDPR readiness.
Assessing the quality and maturity of the controls, systems and processes implemented by your organization is essential. Given the level of agility required from business and market requirements this assessment needs to be executed on a regular basis. Continuous improvements are essential to achieve an adequate level of compliance in all key areas of the GDPR.
To achieve the highest level 5 of GDPR maturity it is essential to continuously measure GDPR readiness to enable an organization to understand their status quo, document it and, if possible, realize the potential benefits of investing in improving data protection. Then you might happily ignore further GDPR-related blogposts.
The KuppingerCole Maturity Level Matrix for GDPR readiness provides neutral criteria exactly for that purpose. Find it here for download.
And get in touch with us if you feel that an independent assessment (along the lines of exactly the same maturity levels) might be even more meaningful.
Today, companies increasingly operate on the basis of IT systems and are thus dependent on them. Cyber risks must therefore be understood as business risks. Detecting and preventing cyber security threats, and responding to them appropriately, are among the most important activities for protecting the core business from risk.
In practice, however, many challenges arise here. The requirement to arrive at a uniform, and thus informed, view of all types of business risks often fails due to a multitude of organisational, technical and communication challenges:
Technical risk monitoring systems in the enterprise (e.g. systems for monitoring compliance with SoD rules, or systems for monitoring network threats at the outer edge of an enterprise network) are often extremely powerful in their specific area of application. Interoperability across these system boundaries usually fails due to the lack of a common language (protocols) or shared semantics for the information to be exchanged (uniform risk concepts and metrics).
The same is happening within large organizations: a trend we have observed for only a few years has led to independently operating IT operations teams, IT security teams and (cyber) governance teams, each focusing on individual tasks and on solutions to individual, but very similar, problems. They typically act without adequate integration into a corporate security strategy or a consolidated communication approach for the joint, coordinated management of risks, and without correlating their results to determine a comprehensive IT security maturity level and thus the overall risk situation of the company.
Management boards and executives must act and react on the basis of incomplete and mostly very technical data, which can only lead to inadequate and incomplete results. The implicit link between cyber risks and business risks is lost when only individual aspects of cyber security are considered. Management decisions made on the basis of this information are usually far from adequate and efficient.
The only way to solve this problem is to move from tactical to strategic approaches. Recently the term “Cyber Risk Governance” has been coined to describe holistic solutions to this problem, covering organization, processes and technologies. More and more companies and organizations are realizing that cyber risk governance is a challenge that needs to be addressed at management level. Cyber security and regulatory compliance are strong drivers for rethinking and redesigning a mature approach to improve cyber resilience.
This requires an adequate strategic approach instead of tactical, more or less unplanned ad hoc measures. A strong risk governance organisation, a strategic framework for a comprehensive cyber risk governance process and related technological components must underpin it. This can only be achieved by bundling corporate expertise, taking a holistic view of the overall risk situation and understanding the sum of all risk mitigation measures implemented.
If the situation described above sounds familiar, read more about “Cyber Risk Governance” as a strategic architecture and management topic in the free KuppingerCole "White Paper: TechDemocracy: Moving towards a holistic Cyber Risk Governance approach".
At KuppingerCole, cybersecurity and identity management product/service analysis are two of our specialties. As one might assume, one of the main functional areas in vendor products we examine in the course of our research is administrative security. There are many components that make up admin security, but here I want to address weak authentication for management utilities.
Most on-premises and IaaS/PaaS/SaaS security and identity tools allow username and password for administrative authentication. Forget an admin password? Recover it with KBA (Knowledge-based authentication).
Many programs accept other stronger forms of authentication, and this should be the default. Here are some better alternatives:
- Web console protected by existing Web Access Management solution utilizing strong authentication methods
- SAML for SaaS
- Mobile apps (if keys are secured in Secure Enclave, Secure Element, and app runs as Trusted App in Trusted Execution Environment [TEE])
- FIDO UAF Mobile apps
- USB Tokens
- FIDO U2F devices
- Smart Cards
Even OATH TOTP and Mobile Push apps, while having some security issues, are still better than username/passwords.
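As an aside, the OATH TOTP scheme mentioned above is simple enough to sketch with nothing but the Python standard library. This follows RFC 4226 (HOTP) and RFC 6238 (TOTP); any production deployment should of course rely on a maintained library and properly protected keys, not a hand-rolled version.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian 64-bit counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 test vector: SHA-1, 8 digits, T = 59 seconds
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

The printed value matches the published RFC 6238 test vector, which makes the sketch easy to verify against the spec. The weakness hinted at above is visible here too: the shared secret sits on the server, so anyone who can read it can mint valid codes.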
Why? Let’s do some threat modeling.
Scenario #1: Suppose you’re an admin for Acme Corporation, and Acme just uses a SaaS CIAM solution to host consumer data. Your CIAM solution is collecting names, email addresses, physical addresses for shipping, purchase history, search history, etc. Your CIAM service is adding value by turning this consumer data into targeted marketing, yielding higher revenues. Until one day a competitor comes along, guesses your admin password, and steals all that business intelligence. Corporate espionage is real - the “Outsider Threat” still exists.
Scenario #2: Same CIAM SaaS background as #1, but let’s say you have many EU customers. You’ve implemented a top-of-the-line CIAM solution to collect informed consent to comply with GDPR. If a hacker steals customer information and publishes it without user consent, will Acme be subject to GDPR fines? Can deploying username/password authentication be considered due diligence?
Scenario #3: Acme uses a cloud-based management console for endpoint security. This SaaS platform doesn’t support 2FA, only username/password authentication. A malicious actor uses KBA to reset your admin password. Now he or she is able to turn off software updates, edit application whitelists, remove entries from URL blacklists, or uninstall/de-provision endpoint agents from your company’s machines. To cover their tracks, they edit the logs. This would make targeted attacks so much easier.
Upgrading to MFA or risk-adaptive authentication would decrease the likelihood of these attacks succeeding, though better authentication is not a panacea. There is more to cybersecurity than authentication. However, the problem lies in the fact that many security vendors allow password-based authentication to their management consoles. In some cases, it is not only the default but also the only method available. Products or services purporting to enhance security or manage identities should require strong authentication.
Recently, I attended Oracle OpenWorld in San Francisco. For five days, the company spared no expense to inform, educate and (last but not least) entertain its customers and partners, as well as developers, journalists, industry analysts and other visitors – in total, a crowd of over 50 thousand. As a person somewhat involved in organizing IT conferences (on a much smaller scale, of course), I could not but stand in awe thinking about all the challenges the organizers of such an event had to overcome to make it successful and safe.
More important, however, was the almost unexpected thematic twist that dominated the whole conference. As I was preparing for the event, browsing the agenda and the list of exhibitors, I found way too many topics and products quite outside of my area of coverage. Although I do have some database administrator (DBA) experience, my current interests lie squarely within the realm of cybersecurity and I wasn’t expecting to hear a lot about it. Well, I could not be more wrong! In the end, cybersecurity was definitely one of the most prominent topics, starting right with Larry Ellison’s opening keynote.
The Autonomous Database, the world’s first database, according to Oracle, that comes with fully automated management, was the first and the biggest announcement. Built upon the latest Oracle Database 18c, this solution promises to completely eliminate human labor and hence human error thanks to complete automation powered by machine learning. This includes automated upgrades and patches, disaster recovery, performance tuning and more. In fact, an autonomous database does not have any controls available for a human administrator – it just works™. Of course, it does not replace all the functions of a DBA: a database specialist can now focus on more interesting, business-related aspects of his job and leave the plumbing maintenance to a machine.
The offer comes with a quite unique SLA that guarantees 99.995% availability without any exceptions. And, thanks to more elastic scalability and optimized performance, “it’s cheaper than AWS” as we were told at least a dozen times during the keynote. For me however, the security implications of this offer are extremely important. Since the database is no longer directly accessible to administrators, this not only dramatically improves its stability and resilience against human errors, but also substantially reduces the potential cyberattack surface and simplifies compliance with data protection regulations. This does not fully eliminate the need for database security solutions, but at least simplifies the task quite a bit without any additional costs.
Needless to say, this announcement has caused quite a stir among database professionals: does it mean that a DBA is now completely replaced by an AI? Should thousands of IT specialists around the world fear for their jobs? Well, the reality is a bit more complicated: the Autonomous Database is not really a product, but a managed service combining the newest improvements in the latest Oracle Database release with the decade-long evolution of various automation technologies, running on the next generation of Oracle Exadata hardware platform supported by the expertise of Oracle’s leading engineers. In short, you can only get all the benefits of this new solution when you become an Oracle Cloud customer.
This is, of course, a logical continuation of Oracle’s ongoing struggle to position itself as a Cloud company. Although the company already has an impressive portfolio of cloud-based enterprise applications and it continues to invest a lot in expanding their SaaS footprint, when it comes to PaaS and IaaS, Oracle still cannot really compete with its rivals that started in this business years earlier. So, instead of trying to beat competitors on their traditional playing fields, Oracle is now focusing on offering unique and innovative solutions that other cloud service providers simply do not have (and in the database market probably never will).
Another security-related announcement was the unveiling of the Oracle Security Monitoring and Analytics – a cloud-based solution that enables detection, investigation and remediation of various security threats across on-premises and cloud assets. Built upon the Oracle Management Cloud platform, this new service is also focusing on solving the skills gap problem in cybersecurity by reducing administration burden and improving efficiency of cybersecurity analysts.
Among other notable announcements are various services based on applied AI technologies like intelligent conversation bots and the newly launched enterprise-focused Blockchain Cloud Service based on the popular Hyperledger Fabric project. These offerings, combined with the latest rapid application development tools unveiled during the event as well, will certainly make the Oracle Cloud Platform more attractive not just for existing Oracle customers, but for newcomers of all sizes – from small startups with innovative ideas to large enterprises struggling to make their transition to the cloud as smooth as possible.
Guest post by Christian Goy, Co-founder and Managing Director of Behavioral Science Lab
In the future, marketing will be driven not by demographics, online or offline behavioral identifiers, or psychographics, but by understanding and fulfilling the individual utility expectations of the consumer.
Mitch Joel captures this view of future marketing by concluding, “If the past decade was about developing content and engagement strategies in social channels (in order to provide value, humanize the brand, be present in search engines and more), the next decade will be about the brands that can actually create a level of utility for the consumer.”
No one disputes that a persuasive marketing message or social media campaign drives web traffic. However, if your brand does not deliver utility, it will not be purchased. Consumers do not love brands because of their brilliant ad campaigns or funny videos on Facebook. Consumers love brands that create utility or true value for themselves; this is what creates affinity between the consumer and the brand, not just the brand attributes. Utility is what consumers believe they cannot live without.
Utility is the heart of behavioral economics. The utility of each product or service is determined by a very specific set of psychological and economic elements, which shape how the consumer arrives at the expected value (utility) of each brand. Relative differences in the expected utility associated with each choice option determine how much consumers will pay, what they purchase and how loyal they expect to be. Interestingly, we are learning that the economic and psychological factors that determine utility and purchase have little or nothing to do with the buyer’s demographics or psychographics.
In none of our studies did demographic or psychographic segmentation explain why consumers switch or remain loyal to a brand. However, when consumers were typed by their utility expectation for individual brands, our clients were able to predict with extreme accuracy whether a consumer would stay loyal or switch away from their brand, and more importantly, why.
Knowing the expectation of utility explains why Instacart — an app that lets shoppers buy all their groceries online from any grocery store and have them delivered to their doorstep — became an instant hit for a small, but important, percentage of US shoppers.
Old-line marketers assumed that a certain percentage of US shoppers with relatively high household income and education, who were environmentally savvy and attracted to organic produce, would remain loyal Whole Foods or Trader Joe’s shoppers. What they didn’t understand was that the utility for those shoppers was driven not by their demographics or psychographics, but by what they were looking for: convenience, ease of shopping and minimal shopping time, which could not be fulfilled by either Whole Foods or Trader Joe’s.
What is next for marketing?
To remain effective, marketing must move beyond traditional segmentation, psychographics, and message development strategies. Marketers should first understand what drives true utility for their consumers — what consumers value, what they can or cannot live without. As some are already doing, marketers will create personalized messages that maximize individual utility expectations by:
Deeply understanding what drives the expectation of the utility of their products — These are the psychological and economic decision elements used by the buyer to define utility.
Defining buyers by their utility expectation — Group buyers on the basis of a similar utility expectation. This allows marketers to be more cost-efficient and effective in their messaging and product offerings because the specific needs of their customers will be met.
Creating products and services that address consumers’ psychological and economic needs — Do not just focus on the product. Understand how the consumer defines utility, and then deliver on it. In our studies, we have found that by only addressing and fulfilling the primary driver in consumers’ utility “equation,” the likelihood of purchase is very high. Just imagine how much greater the likelihood of purchase could be if the second and third drivers were addressed as well.
Product and service utility are the future of effective marketing. Start today with an understanding of how your consumers arrive at their utility expectation to stay ahead of the game.
Learn more in my session at the Consumer Identity World, November 27-29, 2017 in Paris.