Wednesday, June 05, 2024 12:00—13:00
This session will look at how analytics can drive decisions about IGA implementation, replacing hundreds of assumption-based requirements with evidence.
The glitzy, very convincing diagrams of IGA processes and architectures demonstrate what happens during the provisioning, deprovisioning, access request, and attestation stages under one key assumption: that everything will go according to plan. But what happens if something goes wrong? As an example: what happens if an access request gets stuck because a dynamically calculated approver is on long-term leave or has left the organization? Ask IAM operations professionals, and they will tell you many stories of what went wrong and how tough it is to be in emergency mode fixing issues. IAM departments spend almost 80% of their time handling the fallout of something going wrong. So, it’s not a question of “if” something goes wrong, but “when”, and how to deal with it.
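The stuck-approver scenario above can be handled mechanically. The following is a minimal sketch, with invented names and thresholds rather than any specific IGA product's API, of detecting requests whose approver cannot act and routing them to a fallback:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative sketch only: detect access requests stuck on an unavailable
# approver and escalate them. Classes, fields, and the 3-day threshold are
# hypothetical, not taken from any real IGA system.

@dataclass
class Approver:
    name: str
    on_leave: bool = False
    active: bool = True  # False once the person has left the organization

@dataclass
class AccessRequest:
    requester: str
    resource: str
    approver: Approver
    submitted: datetime
    fallback: Optional[Approver] = None

def resolve_stuck_requests(requests, max_age=timedelta(days=3), now=None):
    """Reassign requests whose approver cannot act (on leave or gone),
    or which have simply aged past the threshold."""
    now = now or datetime.now()
    escalated = []
    for req in requests:
        blocked = req.approver.on_leave or not req.approver.active
        overdue = now - req.submitted > max_age
        if blocked or overdue:
            if req.fallback and req.fallback.active and not req.fallback.on_leave:
                req.approver = req.fallback   # route to the fallback approver
            escalated.append(req)             # flag for operations follow-up
    return escalated
```

The point of the sketch is that the deviation is predictable: availability of the calculated approver can be checked at routing time instead of being discovered days later in emergency mode.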
In the session, we will discuss practical steps to predict and handle situations when something doesn’t go according to plan, drawing on real-life examples from day-to-day IGA operations. We will walk through several use cases, showing for each the cause, the issue, the resolution, and a prediction methodology for reducing the probability of recurrence. We will cover the business, technological, and human-factor aspects of handling deviations from standard processes and propose a model for predicting and mitigating such issues. This topic matters not just to IAM/IGA professionals but also to COOs and CFOs, because it directly affects operational efficiency and cost.
The Ramones took the boredom out of Rock & Roll. They played it faster and with more fun, and they invented a new genre. Three fresh ingredients will do for IAM what the Ramones did for Rock & Roll: Graph DBs, Large Language Models (LLMs), and Temporal Graph Networks will reshape IGA, addressing its complexity and lack of temporal insight. This session will provide a practical understanding of how these three ingredients will change the way we think about IGA and make our current nightmares (high costs, business dissatisfaction, and poor UX) a thing of the past.
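Why graphs help with IGA complexity can be shown in a few lines. Below is a toy sketch, with invented node names and no real graph database, of the kind of traversal query ("what can this identity ultimately reach, through nested roles?") that graph stores make cheap at enterprise scale:

```python
from collections import defaultdict

# Illustrative toy graph: directed edges identity -> role -> entitlement.
# All node names are invented; a real deployment would use a graph DB
# query language rather than hand-rolled traversal.

edges = defaultdict(list)

def grant(src, dst):
    edges[src].append(dst)

def reachable(identity):
    """Depth-first traversal: every role and entitlement reachable
    from the identity, however deeply nested."""
    seen, stack = set(), [identity]
    while stack:
        node = stack.pop()
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

grant("user:jo", "role:finance-analyst")
grant("role:finance-analyst", "entitlement:read-ledger")
grant("role:finance-analyst", "role:report-viewer")
grant("role:report-viewer", "entitlement:view-dashboards")
```

In a relational model the same question requires recursive joins; in a graph model it is a single path traversal, and adding timestamps to the edges (the temporal dimension the session highlights) lets the same query answer "what could this identity reach last quarter?".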
In the rapidly evolving landscape of identity and access management (IAM), the integration of Artificial Intelligence (AI) brings forth both unprecedented opportunities and significant challenges. This talk examines the critical importance of trust and transparency in AI systems, particularly in the context of IAM. We explore how AI decisions are made and communicated, emphasizing the need for Explainable AI (XAI) to demystify complex processes and foster user confidence.
We begin by exploring diverse real-world scenarios where AI's pivotal role in access management is demonstrated. This includes examining how AI systems are employed to make automated access decisions, while highlighting the challenges and successes in building trust through transparency. Detailed attention is given to XAI models and techniques such as LIME and SHAP, which are used to make AI's decision-making process accessible and understandable to all users.
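The intuition behind local-explanation techniques like LIME can be sketched without the library itself. The following is a drastically simplified illustration, with an invented scoring model and invented feature names: perturb one feature of a concrete access request at a time and observe how the black-box score shifts (real LIME instead fits a weighted linear surrogate over many random perturbations):

```python
# Simplified sketch of local, per-feature explanation for an access
# decision. The scorer and features are hypothetical stand-ins, not a
# real model and not the actual LIME or SHAP algorithms.

def access_score(features):
    """Stand-in black-box model scoring an access request (0..1)."""
    score = 0.2
    if features["is_manager"]:
        score += 0.3
    if features["same_department"]:
        score += 0.3
    if features["failed_logins"] > 3:
        score -= 0.4
    return max(0.0, min(1.0, score))

def explain(features):
    """Per-feature contribution: how much the score changes when the
    feature is neutralized (booleans flipped off, counters zeroed)."""
    base = access_score(features)
    contributions = {}
    for name, value in features.items():
        perturbed = dict(features)
        perturbed[name] = False if isinstance(value, bool) else 0
        contributions[name] = base - access_score(perturbed)
    return contributions
```

The resulting contributions are exactly the kind of artifact the talk argues for: a requester denied access can be told "recent failed logins pushed the score down", rather than being handed an opaque yes/no.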
The talk then transitions to innovative ways XAI can enhance user experience in IAM systems. We discuss how XAI not only clarifies AI decisions for users but also contributes to creating more intuitive, personalized, and user-friendly interfaces. Through examples and case studies, we showcase how XAI has been instrumental in reducing user frustration, increasing system adoption, and improving overall satisfaction.
Attendees will gain insights into the practical applications and benefits of transparent AI systems in identity governance, the significance of XAI in bridging the gap between complex AI algorithms and user-centric experiences, and how these elements collectively contribute to more secure, efficient, and trustworthy IAM solutions.