KuppingerCole Blog

FIAM – Fake Identity and Access Management

Just when you thought we had enough variations of IAM, along comes FIAM. Fake digital identities are not new, but they are getting a lot of attention in the press these days. Some fake accounts are very sophisticated and are difficult for automated methods to recognize. Some are built using real photos and stolen identifiers, such as Social Security Numbers or driver’s license numbers. Many of these accounts look like they belong to real people, making it difficult for social media security analysts to flag them for investigation and remove them. With millions of user credentials, passwords, and other PII available on the dark web as a result of the hundreds of publicly acknowledged data breaches, it’s easy for bad actors to create new email addresses, digital identities, and social media profiles.

As we might guess, fake identities are commonly used for fraud and other types of cybercrime. There are many different fraudulent use cases, ranging from building impostor identities and attaching them to legitimate user assets, to impersonating users to spread disinformation, and to defamation, extortion, catfishing, stalking, trolling, and more. Fake social media accounts were used by the St. Petersburg-based Internet Research Agency to disseminate election-influencing propaganda. Individuals associated with these events have been indicted by the US, but won’t face extradition.

Are there legitimate uses for fake accounts? In many cases, social network sites and digital identity providers have policies and terms of service that prohibit the creation of fake accounts. In the US, violating a website’s terms of service can also violate the 1986 Computer Fraud and Abuse Act. Technically then, in certain jurisdictions, creating and using fake accounts is illegal. This is hard to enforce, and sometimes gets in the way of legitimate activities, such as academic research.

However, it is well known that law enforcement authorities routinely and extensively use fake digital identities to look for criminals. Police have great success with these methods, but they also scoop up data on innocent online bystanders. National security and intelligence operatives also employ fake accounts to monitor the activities of individuals and groups they suspect might do something illegal and/or harmful. It’s unlikely that cops and spies have to worry much about being prosecuted for using fake accounts.

A common approach, documented in the 1971 novel “The Day of the Jackal” by Frederick Forsyth, is to use the names and details of dead children. This creates a persona that is very difficult to identify as a fraud. It is still reported as being in use and, when discovered, causes immense distress to the relatives.

In the private sector, employees of asset repossession companies also use fake accounts to get close to their targets, making it easier for them to repossess cars and other possessions. Wells Fargo has had an ongoing fake account creation scandal, in which up to 3.5 million fake accounts were created so that the bank could charge customers more fees. The former case is sneaky and technically illegal, while the latter case is clearly illegal. What are the consequences for Wells Fargo? They may have suffered a temporary stock price setback and credit downgrade, but their CEO got a raise.

FIAM may sound like a joke, but it is a real thing, complete with technical solutions (using above-board IDaaS and social networks), as well as laws and regulations sort of prohibiting the use of fake accounts. FIAM is at once a regular means of doing business, a means for spying, and an essential technique for executing fraud and other illegal activities. It is a growing concern for those who suffer loss, particularly in the financial sector. It is also now a serious threat to social networks, whose analysts must remove fake accounts as quickly as they pop up, lest they be used to promote disinformation.

Pseudo What and GDPR?

GDPR comes into force on May 25th this year; its obligations are stringent, the penalties for non-compliance are severe, and yet many organizations are not fully prepared. There has been much discussion in the press around the penalties under GDPR for data breaches. KuppingerCole’s advice is that preparation based on six key activities is the best way to avoid these penalties. The first two activities are to find the personal data and to control access to this data.

While most organizations will be aware of where personal data is used as part of their normal business operations, many use this data indirectly, for example as part of test and development activities. Because of the wide definition of processing given in GDPR, this use is also covered by the regulation. The Data Controller is responsible for demonstrating that this use of personal data is fair and lawful. If this can be shown, then the Data Controller will also need to be able to show that this processing complies with all the other data protection requirements.

While the costs and complexities of compliance with GDPR may be justified by the benefits of using personal data for normal business processes, this is unlikely to be the case for its non-production use. However, the GDPR provides a way to legitimately avoid the need for compliance. According to GDPR (Recital 26), the principles of data protection should not apply to anonymous information, that is, information which does not relate to an identified or identifiable natural person, or to personal data rendered anonymous in such a manner that the data subject is not identifiable.

One approach is known as pseudonymisation, and GDPR accepts pseudonymisation as an approach to data protection by design and by default (Recital 78). Pseudonymisation is defined in Article 4 as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information...”, with the proviso that this additional information is kept separate and well protected.

In addition, under Article 6 (4)(e), the Data Controller can take account of the existence of appropriate safeguards, which may include encryption or pseudonymisation, when considering whether processing for another purpose is compatible with the purpose for which the personal data were initially collected. However, the provisos introduce an element of risk for the Data Controller relating to the reversibility of the process and the protection of any additional information that could be used to identify individuals from the pseudonymized data.
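To make this concrete, here is a minimal sketch (in Python, with hypothetical field names and deliberately simplified key handling) of how direct identifiers might be replaced with keyed tokens before personal data is handed to test and development environments; the secret key is the “additional information” that must be kept separate and well protected.

```python
import hmac
import hashlib

# Illustrative only: in practice this key would live in a key vault or HSM,
# kept strictly separate from the pseudonymized data set.
PSEUDONYMIZATION_KEY = b"load-this-from-a-separate-key-store"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed token.

    Without the key, the token cannot be attributed to a specific data
    subject; with the key, the same input always yields the same token,
    so referential integrity is preserved for testing purposes.
    """
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical example record for a non-production data set
customer = {"customer_id": "DE-4711", "email": "jane.doe@example.com"}
pseudonymized = {field: pseudonymize(value) for field, value in customer.items()}
print(pseudonymized)
```

Note that keyed tokens of this kind still allow records belonging to the same individual to be linked, which is one reason why pseudonymisation is treated as a safeguard rather than as full anonymization.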

However, not all approaches to anonymization and pseudonymisation are equal. In 2014, the EU Article 29 Working Party produced a report giving its opinion on anonymization techniques as applied to EU privacy. Although it is written with reference to the previous Directive 95/46/EC, it is still very relevant. It identifies three tests which should be used to judge an anonymization technique:

  1. is it still possible to single out an individual?
  2. is it still possible to link records relating to an individual?
  3. can information be inferred concerning an individual?

It also provides examples of where anonymization techniques have failed. For example, in 2006, AOL publicly released a database containing twenty million search keywords for over 650,000 users over a three-month period. The only privacy-preserving measure consisted of replacing the AOL user ID with a numerical attribute. This led to the public identification and location of some of the users by the NY Times and other researchers.

Pseudonymization provides a useful control over the privacy of personal data and is recognized by GDPR as a component of privacy by design. However, it is vital that you choose the appropriate pseudonymization techniques for your use case and use them correctly. For more information on this subject, attend KuppingerCole’s webinar “Acing the Upcoming GDPR Exam”. There will also be a stream of sessions on GDPR at KuppingerCole’s European Identity & Cloud Conference in Munich, May 15-18, 2018.

Azure Advanced Threat Protection: Securing Your Identities Right From the Cloud

Recently, Microsoft has announced general availability for another addition to their cybersecurity portfolio: Azure Advanced Threat Protection (Azure ATP for short) – a cloud-based service for monitoring and protecting hybrid IT infrastructures against targeted cyberattacks and malicious insider activities.

The technology behind this service is actually not new. Microsoft acquired it back in 2014 with the purchase of Aorato, an Israel-based startup company specializing in hybrid cloud security solutions. Aorato’s behavior detection methodology, named Organizational Security Graph, enables non-intrusive collection of network traffic, event logs and other data sources in an enterprise network and then, using behavior analysis and machine learning algorithms, detects suspicious activities, security issues and cyber-attacks against corporate Active Directory servers.

Although this may sound like an overly specialized tool, in reality solutions like this can be a very useful addition to any company’s security infrastructure. After all, according to statistics, the vast majority of security breaches leverage compromised credentials, and close monitoring of the heart of nearly every company’s identity management – the Active Directory servers – allows for quicker identification of both known malicious attacks and traces of unknown but suspicious activities. And since practically every cyberattack involves manipulating stolen credentials at some stage of the kill chain, identifying them early allows security experts to discover these attacks much sooner than the typical dwell time of 99+ days.

Back in 2016, we reviewed Microsoft Advanced Threat Analytics (ATA), the first product Microsoft released with the Security Graph technology. KuppingerCole’s verdict at the time was that the product was easy to deploy, transparent and non-intrusive, with an innovative and intuitive user interface, yet powerful enough to identify a wide range of security issues, malicious attacks and suspicious activities in corporate networks. However, the product was only intended for on-premises deployment and provided very limited forensic and mitigation capabilities due to a lack of integration with other security tools.

Well, with the new solution, Microsoft has successfully addressed both of these challenges. Azure ATP, as evident from its name, is a cloud-based service. Although you obviously still need to deploy sensors within your network to capture the network traffic and other security events, these are sent directly to the Azure cloud, and all the correlation magic happens there. This makes the product substantially more scalable and suitable even for the largest corporate networks. In addition, it can directly consume the latest threat intelligence data collected by Microsoft across its cloud infrastructure.

On top of that, Azure ATP integrates with Windows Defender ATP – Microsoft’s endpoint protection platform. If you’re using both platforms, you can seamlessly switch between them for additional forensic information or direct remediation of malware threats on managed endpoints. In fact, the company’s Advanced Threat Protection brand now also includes Office 365 ATP, which provides protection against malicious emails and URLs, as well as secures files in Office 365 applications.

With all three platforms combined, Microsoft can now offer seamless protection against malicious attacks across the most critical attack surfaces as a fully managed cloud-based solution.

CyberArk Acquires Vaultive to Strengthen Its Privilege Management Capabilities in Cloud

CyberArk, an overall leader in privilege management according to the KuppingerCole Leadership Compass on Privilege Management, announced yesterday that it has acquired certain assets of Vaultive, a privately held, US-based Israeli cloud security provider.

Concerns around data encryption have emerged as a key inhibitor for organizations seeking to adopt cloud services. Most cloud providers today offer their own encryption to ensure that data in transit and at rest remains unreadable if a breach occurs. However, as organizations adopt multiple SaaS applications, the varied encryption standards and inconsistent key management practices of cloud providers can quickly lead to a complex environment lacking visibility and control of keys.

While most privilege management products today can help with credential vaulting and monitoring of shared administrative access to cloud platforms (including SaaS, IaaS and PaaS), they are largely ineffective against the risk of privileged credentials being compromised directly at the cloud provider’s end. Some cloud access security brokers (CASBs) can mitigate such risks by offering data encryption capabilities that separate the encryption of data at rest and key management from the cloud providers. However, CASBs lack privileged account management capabilities and usually do not support on-premises systems. Therefore, organizations requiring complete control of privileged access across cloud platforms have no option but to integrate a CASB’s capabilities with their privilege management solution. CyberArk’s acquisition of Vaultive is primarily aimed at solving this challenge for its customers.

Vaultive is a cloud data encryption platform that helps organizations retain control of their encryption keys, providing end-to-end encryption of data across cloud platforms. CyberArk, with its existing capabilities to manage privileged access to cloud platforms, can benefit from Vaultive’s data encryption capabilities to:

  1. assure its customers of exclusive administrative access to the cloud while retaining control over the entire data lifecycle
  2. extend its privilege management capabilities beyond administrative access to privileged business users of SaaS applications
  3. build finer-grained privileged access control for cloud environments using context-aware access policies from Vaultive

While only time will tell how well CyberArk is able to integrate and promote Vaultive’s Cloud Data Security platform within its privileged account and session management capabilities for the cloud, this acquisition reflects a conscious and well-thought-out decision to offer a one-stop cloud security solution for its customers.

Make Things Happen Rather Than Watch Things Happen With Vendor-Provided Compliance Solutions

In May 2017, my fellow KuppingerCole analyst Mike Small published the Executive Brief research document entitled “Six Key Actions to Prepare for GDPR” (then and now free to download). This was published almost exactly one year before the GDPR takes full effect and outlines six simple steps needed to adequately prepare for this regulation. “Simple” here means “simple to describe”, but not necessarily “simple to implement”. However, while time has passed since then, and further regulations and laws are gradually gaining additional importance, properly ensuring consumers’ privacy remains a key challenge today.

An even briefer summary of the recommendations provided by Mike is: (1) find personal data in your organization, (2) control access to it, (3) store and process it legally and fairly, e.g. by obtaining and managing consent, (4) do all of this in the cloud as well, (5) work to prevent a data breach, but be properly prepared for what to do should one occur, and finally (6) implement privacy engineering so that IT systems are designed and built from the ground up to ensure data privacy.

While tool support for these steps was not overwhelming back then, things have changed in the meantime. Vendors inside and outside the EU have understood the key role they can play in supporting and guiding their customers on their path to compliance by providing built-in and additional controls in their systems and platforms. Compliance and governance are no longer just ex-post reports and dashboards (although these are still essential for providing adequate evidence). Applications and platforms in daily use now provide actionable tools and services to support privacy, data classification, access control, consent management, and data leakage prevention.

One example: Microsoft’s Office and software platforms continue to be an essential set of applications for almost all organizations, especially in their highly collaborative and cloud-based incarnations with the suffix 365. Just recently, Microsoft announced the availability of a set of additional tools to help organizations implement an information protection strategy with a focus on regulatory and legal requirements (including EU GDPR, ISO 27001, ISO 27018, NIST 800-53, NIST 800-171, and HIPAA) across the Microsoft 365 platforms.

For data, processes and applications running within their ecosystems, these tools support the implementation of many of the steps described above. By automatically or semi-automatically detecting and classifying personal data relevant to GDPR, the process of identifying where this kind of data is stored and processed can be simplified. Data protection across established client platforms as well as on-premises is supported through labeling and access control. This labeling mechanism, together with Azure Information Protection and Microsoft Cloud App Security, extends the reach of stronger data protection into the cloud.

An important component on an enterprise level is Compliance Manager, which is available for Azure, Dynamics 365, as well as Office 365 Business and Enterprise customers in public clouds. It enables continuous risk assessment processes across these platforms, deriving individual and specific compliance scores from weighted risk scores and implemented controls and measures.

In your organization’s ongoing journey to achieve and maintain compliance with GDPR as well as other regulations, you need your suppliers to become your partners. In this respect, other vendors have announced tools and strategies for several other applications, as well as virtualization and infrastructure platforms, ranging from VMware to Oracle and from SAP to Amazon. Leveraging their efforts and tools can greatly improve your strategy towards implementing continuous controls for privacy and security.

So, if you are using platforms that provide such tools and services, you should evaluate their use and benefit to you and your organization. Where appropriate, embed them into your processes and workflows as fundamental building blocks as part of your individual strategy for compliance. There is not a single day to waste, as the clock is ticking.

Not a Surprise: German Government Under (Cyber) Attack

Yesterday, reports that the German government had become the victim of a cyber-attack made the news. According to these reports, the attack affected the Ministry of Defense and the Department of Foreign Affairs. There is an assumption that the attack was carried out by APT28, a group of Russian hackers. However, only very few details are available to the public.

When reading the news, there are various points that made me raise my eyebrows. These include:

  • it has been a group of Russian hackers
  • the attack is under control/isolated
  • the German government network is well secured
  • there has been only one attack

Let’s be realistic and start with the last one. Assuming anything other than continuous attacks against the German government network would be unrealistic. There must be permanent automated attacks, but also manual ones on a regular basis. Most of them will just bounce off the perimeter, but others will get through undetected. This one large attack has been detected, obviously after already running for quite a while. It might be under control or not. That is in the nature of APTs (Advanced Persistent Threats), which involve various attack vectors and span multiple systems. Isolating them is not easy at all.

The fact that this attack took place and went undetected for quite a while raises the question of whether there are other, undetected attacks still running (or lying dormant to further evade detection). The probability is high. Notably, the source of the attack remains unclear. Even while there might be hints pointing to a certain group of attackers, it might also turn out that other attackers camouflaged themselves as that group. Contrary to some sources quickly jumping to conclusions, cyberattack attribution is a very difficult and unreliable process.

So, this leads to the question: Is the German government network as super-secure as they claim?

Obviously not. Its security might be good, it might even be above average. But it is, like every network, vulnerable to attacks. When looking at the IT security spending of the German government, I have massive doubts that it can be secure enough. Security costs money, and the cost of security increases exponentially when approaching 100% security. Notably, the limit is infinite here; in other words, there is no absolute security.

This all should be kept in mind when commenting on the recent attack:

  • we can’t be sure about who the attackers were or whether they’re associated with any state actors;
  • even if this particular attack has been isolated (which isn’t necessarily so), there might be other attacks still running, and new attacks will continue on a daily basis;
  • the network might be well-secured, but there is no 100% security, and its safety should not be a blind assumption.

The essence is: prevention alone is not enough anymore. It is about understanding the weaknesses and potential attack vectors. Modern IT security combines well-designed, multi-layered protection/prevention with advanced detection, response and recovery and is all about continuous improvement. That needs people and costs a lot of money. Time for the German government to review their cyber security spending.

GDPR and Financial Services – Imperatives and Conflicts

Over the past months, two major pieces of financial services legislation have come into force: the Fourth Anti-Money Laundering Directive (4AMLD) and the Second Payment Services Directive (PSD II). In May this year, the EU General Data Protection Regulation will be added. Organizations within the scope of these need to undertake a considerable amount of work to identify obligations, manage conflicts, implement controls and reduce overlap.

The EU GDPR (General Data Protection Regulation), which becomes effective on May 25th, 2018, will affect organizations worldwide that hold or process personal data relating to people resident in the European Union. The definitions of both personal data and processing under GDPR are very broad, and processing is only considered lawful if it meets a set of strict criteria. GDPR also gives data subjects extended rights to access, correct and erase their personal data, as well as to withdraw consent to its use. The sanctions for non-compliance are very severe, with penalties of up to 4% of annual worldwide turnover. Critically, the organization that collects the personal data, called the Data Controller, is responsible for both implementing and demonstrating compliance.

GDPR emphasizes transparency and the rights of data subjects and this may lead to conflicts with the other directives.

4AMLD - EU Directive 2015/849 of 20 May 2015 is often referred to as the Fourth EU Anti-Money Laundering Directive (4AMLD). The purpose of the Directive is to remove any ambiguities in the previous legislation and to improve the consistency of anti-money laundering (AML) and counter-terrorist financing (CTF) rules across all EU Member States. This directive applies to a wide range of organizations, not just to banks. These include: credit institutions, financial institutions, auditors, external accountants and tax advisors, estate agents, anyone trading in cash over EUR 10,000, and providers of gambling services.

In the UK this directive has been implemented through the “Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017”, which came into force on 26th June 2017. In this, the 44 pages of the EU Directive have become 120 pages of regulation.

Clearly, countering money laundering and terrorist financing involves understanding the identities of the individuals performing transactions and exactly who owns the assets being held and transferred. This makes it necessary to obtain, use and store personal data. So, is there any conflict with GDPR?

One area where there may be some concern is in relation to Politically Exposed Persons (PEPs) and their known close associates. Under the UK regulatory instrument (regulation 35(15)), to decide whether a person is a known close associate, an organization need only have regard to information which is in its possession, or to credible information which is publicly available.

The UK Information Commissioner made several comments on this area in the drafts of the regulations:

  • Political party registers are a source of publicly available information on PEPs, but it is not clear that party members are informed or understand that their information in these registers could be used in this way.
  • A person could be denied access to financial products due to inaccurate publicly sourced data or misattributed publicly sourced data. Under GDPR a data subject has the right to know where information has been sourced from and to challenge its accuracy. A clearer definition of “credible information” is needed.

The regulation requires the creation and maintenance of various registers. Specifically, a register of the beneficial owners of trusts must, under regulation 45(6), include personal data. The unauthorized exposure of this data could potentially be very damaging to the individuals concerned, and it is subject to GDPR.

PSD II - EU Directive 2015/2366 of 25 November 2015 is often referred to as Payment Services Directive II (PSD II). This directive amends and consolidates several existing directives and has as a key purpose the opening of the market for electronic payment services. Member States, including the UK, had to implement the Directive into national law by 13 January 2018, and this has been achieved through the Payment Services Regulations 2017. Some aspects have been delegated to the European Banking Authority (EBA) and will not be effective until Q3 2019.

PSD II introduces third parties into financial transactions, and this can add to the privacy challenges, as recognized by comments from the UK ICO on the UK Regulations mentioned above. Where an individual is scammed into making a transfer, or makes a payment using incorrect details for the payee, banks often cite data privacy as a reason to refuse to provide the payer with the details of the actual recipient. Under Open Banking there is now an additional party involved in the transaction, and this may make things even more difficult for the payer under these circumstances. However, in the UK, Regulation 90:

  • obliges the payment service provider to make reasonable efforts to recover the funds involved in the payment transaction; and
  • if it is unable to recover the funds, it must, on receipt of a written request, provide to the payer all available relevant information in order for the payer to claim repayment of the funds.

This leaves an element of uncertainty: does “relevant information” include the personal details of the recipient? Clearly, if it does, then under GDPR the payment service providers must make sure that they have obtained consent from their customers for the use of their data under these circumstances.

In conclusion – EU directives and regulations usually state how they relate to each other. In the case of directives, their national implementation can add an extra degree of complexity. Furthermore, these regulations exist within legal frameworks and local case law. In principle, there should be no conflicts; however, organizations have often been ready to cite “privacy” as a reason for providing poor service.

EBA Rules out Secure Open Banking?

On January 30th in London, I attended a joint workshop between OpenID and the UK Open Banking community, facilitated by Don Thibeau of OIX. This workshop included an update from Mike Jones on the work being done by OpenID and from Chris Michael, Head of Technology at OBIE, on UK Open Banking.

Firstly, some background to set the context. On January 13th, 2018, a new set of rules for banking came into force that stems from EU Directive 2015/2366 of 25 November 2015, commonly known as the Second Payment Services Directive (PSD2). While PSD2 prevents the UK regulators from mandating a particular method of access, the UK’s Competition and Markets Authority set up the Open Banking Implementation Entity (OBIE) to create software standards and industry guidelines that drive competition and innovation in UK retail banking. As one might expect, providing authorized access to payment services requires identifying and properly authenticating users – see KuppingerCole’s Advisory Note: Consumer Identity and Access Management for “Know Your Customer”.

One of the key players in this area is the OpenID Foundation. This is a non-profit, international standards organization, founded in 2007, that is committed to enabling, promoting and protecting OpenID technologies. While OpenID is relevant to many industries one area of particular interest is financial services. OpenID has a Financial API Working Group (FAPI) led by Nat Sakimura that is working to define APIs that enable applications to utilize the data stored in financial accounts, interact with those accounts, and to enable users to control their security and privacy settings.

Previously it was common for financial services, such as those providing account aggregation, to use screen scraping and to store user passwords. Screen scraping is inherently insecure (see GDPR vs. PSD2: Why the European Commission Must Eliminate Screen Scraping). The current approach utilizes a token model such as OAuth [RFC6749, RFC6750], with the aim of developing a REST/JSON model protected by OAuth. However, OAuth needs to be profiled for the financial use cases.
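For readers unfamiliar with the token model, the following sketch shows the OAuth 2.0 authorization code exchange defined in RFC 6749; the endpoint URLs and client identifiers are hypothetical, and client authentication is deliberately simplified compared with what a FAPI/Open Banking profile would require (e.g. mutual TLS or private_key_jwt). The point is that the account holder authenticates at their bank, and the third party only ever receives a token, never the banking credentials.

```python
import requests

# Hypothetical endpoints and client credentials, for illustration only.
TOKEN_ENDPOINT = "https://bank.example.com/oauth2/token"
CLIENT_ID = "example-tpp-client"
CLIENT_SECRET = "example-client-secret"
REDIRECT_URI = "https://tpp.example.com/callback"

def exchange_code_for_token(authorization_code: str) -> dict:
    """Swap the short-lived authorization code (issued after the user
    authenticated at their bank) for an access token."""
    response = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": authorization_code,
            "redirect_uri": REDIRECT_URI,
        },
        auth=(CLIENT_ID, CLIENT_SECRET),  # simplified client authentication
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # contains access_token, token_type, expires_in, ...
```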

In the UK, the APIs being specified by OBIE include an Open Banking OIDC Security Profile, which is based upon the work of OpenID. This has some differences from the FAPI Read+Write profile, considered necessary to reduce delivery risk for ASPSPs (Account Servicing Payment Service Providers).

In July 2017 it seemed that the EBA (European Banking Authority) had made a wise decision and rejected the Commission’s amendments on screen scraping. However, in November 2017 the draft supplement to the EU technical regulations (the RTS) was published. In this, Article 32(3) sets out the obligations for a dedicated interface. In summary, these oblige account servicing payment service providers to ensure that such an interface does not create obstacles. Obstacles specified in the RTS include:

  • Preventing the use by payment service providers of the credentials issued by account servicing payment service providers to their customers;
  • Imposing redirection to the account servicing payment service provider's authentication or other functions;
  • Requiring additional authorisations and registrations, or requiring additional checks of the consent given by payment service users to providers of payment initiation and account information services.

These obligations appear to fly in the face of what has become accepted security good practice: that one application should never directly share actual credentials with another application. Identity federation technologies such as OAuth and SAML have been reliably providing more secure means for cross-domain authentication for over a decade.

Ralph Bragg, Head of Architecture at OBIE, described three possible approaches that were being considered in the context of these obligations. These approaches can be summarized as:

  • Redirect – the OAuth model, where the end user is redirected to the ASPSP to authenticate and the third-party provider receives a token. This appears to be non-compliant.
  • Embedded – where the PISP obtains the first and second factors from the end user and transmits these to the bank. This appears to be insecure.
  • Decoupled – where the end user completes the authorization on a separate device or application. This introduces further complexities.

This was discussed in a panel session involving many of the leading thinkers in this area, including: Mike Jones, Microsoft; John Bradley, Yubico; Dave Tonge, Momentum FT; and Joseph Heenan, Fintech Labs.

There was a wide-ranging discussion which resulted in a general agreement that:

  • The embedded model involves the third party (PISP) in holding and transmitting credentials. This is very poor security practice and increases the attack surface. Attacks on the PISP could result in theft of the credentials to access the bank (ASPSP).
  • The redirection model is overall the best from a security point of view. Customers are generally happiest with redirect because they feel confident in their own bank. However, the bank may be the competitor of the PISP and so could make the process unfriendly.
  • PSD2 should be considered from an end-to-end perspective.

It seems perverse that technical regulations associated with the opening of electronic payment services appear to inhibit the use of the most up-to-date cybersecurity measures. The direct sharing of passwords or other forms of authentication credentials between services increases risks. It is generally better for regulations to oblige the use of widely accepted best practices rather than prohibiting them. OAuth is a well-understood and ubiquitously employed protocol that can help financial service providers achieve cross-domain authorization. It is my hope that the current wording of the regulations will not lead to a retrograde step in banking security.

Successful IAM Projects Are Not Rocket Science – if You Do It Right

While we still regularly see and hear about IAM (Identity & Access Management) projects that don’t deliver on expectations or are in trouble, we also see and hear about many projects that run well. There are some reasons for IAM projects being more complex than many other IT projects, first and foremost the fact that they are cross-system and cross-organization. IAM integrates a variety of source systems such as HR and target systems, from the mainframe to ERP applications, cloud services, directory services, and many others. IAM also must connect business and IT, with the business people requesting access, defining business roles, and running recertifications.

In a new whitepaper by One Identity, we compiled both the experience of a number of experts from companies out of different regions and industries, and our own knowledge and experience, to provide concrete, focused recommendations on how to make your IAM project a success. Amongst the top recommendations, we find the need for setting the expectations of stakeholders and sponsors right. Don’t promise what you can’t deliver. Another major recommendation is splitting the IAM initiative/program into smaller chunks, which can be run successfully as targeted projects. Also, it is essential not to run IAM as a technology project only. IAM needs technology, but it needs more – the interaction with the business, well-defined processes, and well-thought-out models for roles and entitlements.

Don’t miss this new whitepaper if you are already working on your IAM program or will have to do so in the future.

Free Tools That Can Save Millions? We Need More of These

When IT visionaries give presentations about the Digital Transformation, they usually talk about large enterprises with teams of experts working on exciting stuff like heterogeneous multi-cloud application architectures with blockchain-based identity assurance and real-time behavior analytics powered by deep learning (and many other marketing buzzwords). Of course, these companies can also afford investing substantial money into building in-depth security infrastructures to protect their sensitive data.

Unfortunately, for every such company there are probably thousands of smaller ones, which have neither the budgets nor the expertise of their larger counterparts. This means that these companies not only cannot afford “enterprise-grade” security products, they are often not even aware that such products exist or, for that matter, what problems they are facing without them. And yet, from the compliance perspective, these companies are just as responsible for protecting their customers’ personal information (or other kinds of regulated digital data) as the big ones, and they face the same harsh punishments for GDPR violations.

One area where this is especially evident is database security. Databases are still the most widespread technology for storing business information across companies of all sizes. Modern enterprise relational databases are extremely sophisticated and complex products, requiring trained specialists for their setup and daily maintenance. The number of security risks a business-critical database is open to is surprisingly large, ranging from the sensitivity of the data stored in it all the way down to the application stack, storage, network and hardware. This is especially true for popular database vendors like Oracle, whose products can be found in every market vertical.

Of course, Oracle itself can readily provide a full range of database security solutions for its databases, but needless to say, not every customer can afford to spend that much, not to mention having the necessary expertise to deploy and operate these tools. The recently announced Autonomous Database can solve many of those problems by completely taking management tasks away from DBAs, but it should be obvious that, at least in the short term, this service isn’t a solution for every possible use case, so on-premises Oracle databases are not going anywhere anytime soon.

And it is exactly for these that the company has recently (and without much publicity) released its Database Security Assessment Tool (DBSAT) – a freeware tool for assessing the security configuration of Oracle databases and for identifying sensitive data in them. The tool is a completely standalone command-line program that does not have any external dependencies and can be installed and run on any DB server in minutes to generate two types of reports.

The Database Security Assessment report provides a comprehensive overview of configuration parameters, identifying weaknesses, missing updates, improperly configured security technologies, excessive privileges and so on. For each discovered problem, the tool provides a short summary and risk score, as well as remediation suggestions and links to the appropriate documentation. I had a chance to see a sample report, and even with my quite limited DBA skills I was able to quickly identify the biggest risks and understand which concrete actions I’d need to perform to mitigate them.

The Sensitive Data Assessment report provides a different view on the database instance, showing the schemas, tables and columns that contain various types of sensitive information. The tool supports over 50 types of such data out of the box (including PII, financial and healthcare for several languages), but users can define their own search patterns using regular expressions. Personally, I find this report somewhat less informative, although it does its job as expected. If only for executive reporting, it would be useful not just to show how many occurrences of sensitive data were found, but to provide an overview of the overall company posture to give the CEO a few meaningful numbers as KPIs.

Of course, being a standalone tool, DBSAT does not support any integrations with other security assessment tools from Oracle, nor does it provide any means for mass deployment across hundreds of databases. What it does provide is the option to export the reports into formats like CSV or JSON, which can then be imported into third-party tools for further processing. Still, even in this rather simple form, the program helps a DBA to quickly identify and mitigate the biggest security risks in their databases, potentially saving the company from a breach or a major compliance violation. And as we all know, these are going to become very expensive soon.
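As a simple illustration of that export path, here is a short sketch of how an exported DBSAT report in JSON could be aggregated for further processing in a third-party tool. The field names used below (“findings”, “severity”) are assumptions for the sake of the example, not the documented DBSAT schema, and would need to be adjusted to the actual structure of the export you generate.

```python
import json
from collections import Counter

def summarize_findings(report_path: str) -> Counter:
    """Count findings per severity so results from many databases can be
    fed into a dashboard or ticketing system instead of being read
    report by report."""
    with open(report_path, "r", encoding="utf-8") as handle:
        report = json.load(handle)

    severities = Counter()
    for finding in report.get("findings", []):   # assumed structure
        severities[finding.get("severity", "unknown")] += 1
    return severities

if __name__ == "__main__":
    summary = summarize_findings("dbsat_report.json")
    for severity, count in summary.most_common():
        print(f"{severity}: {count}")
```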

Perhaps my biggest disappointment with the tool, however, has nothing to do with its functionality. Just like other companies before, Oracle seems not very keen on letting the world know about tools like this. And what use is even the best security tool or feature if people do not know of its existence? Have a look at AWS, for example, where misconfigured permissions for S3 buckets have been the reason behind a large number of embarrassing data leaks. And even though AWS now offers a number of measures to prevent them, we still keep reading about new personal data leaks every week.

Spreading the word and raising awareness about the security risks and free tools to mitigate them is, in my opinion, just as important as releasing those tools. So, I’m doing my part!
