

Where the Wild Bots Are: Stopping Bots while Minimizing Friction With AI


Combined Session
Thursday, June 06, 2024 17:50—18:10
Location: B 05

Bots are a scourge on customer identity and access management systems before, during, and after the login process. How can we use the power of AI to find malicious bots and protect users while minimizing friction? This session presents real-life examples of bot attacks and AI-based bot mitigation techniques, quantifies their efficacy, and explores what's next for AI in cybersecurity.

AI chatbots can do your homework, but can they safeguard your users while ensuring a seamless user experience? In this session, we share our firsthand experience creating AI controls that strike a balance between robust user protection and minimal user friction.

First, you must know your enemy. Our session provides a comprehensive perspective from the vantage point of a Customer Identity and Access Management (CIAM) provider: the geographical origins of bots, the networks they exploit, their targets (based on factors such as enterprise size and industry), and the tactics they use to achieve their primary objectives, which often revolve around financial gain.

With the backdrop of these threat actors' attempts to exploit CIAM systems, we compare three distinct approaches: a no-control baseline, rule-based controls, and AI-based controls.
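To make the comparison concrete, here is a minimal sketch contrasting a rule-based control with a score-based AI control. This is illustrative only, not the production system described in the session: the features, weights, and thresholds are all invented, and the "model" is a toy linear scorer standing in for a trained classifier.

```python
import math
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    requests_per_minute: float   # request rate from the source IP
    ip_reputation: float         # 0.0 (clean) .. 1.0 (known-bad); hypothetical feed
    headless_browser: bool       # browser-fingerprinting signal

def rule_based_is_bot(a: LoginAttempt) -> bool:
    # Static thresholds: easy to reason about, but easy for bots to probe and evade.
    return a.requests_per_minute > 60 or a.ip_reputation > 0.9

def ai_based_bot_score(a: LoginAttempt) -> float:
    # Toy linear model with made-up weights; a real system would learn these.
    z = (0.04 * a.requests_per_minute
         + 2.5 * a.ip_reputation
         + 1.5 * a.headless_browser
         - 4.0)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability-like score

def ai_based_is_bot(a: LoginAttempt, threshold: float = 0.5) -> bool:
    # The threshold is the operator's risk/friction dial: lower it to catch
    # more bots at the cost of challenging more genuine humans.
    return ai_based_bot_score(a) >= threshold
```

The key difference is that the rule-based check returns a hard yes/no, while the AI control produces a graded score whose decision threshold can be tuned, which is what makes the risk/friction trade-off adjustable at all.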

AI-driven defenses demonstrate significant improvement over rule-based baseline systems. Furthermore, an unexpected byproduct of a well-performing adaptive system like this one is its quantifiable deterrent effect on bots: like a lock on a bike, it pushes bots toward easier targets.

But can AI-based bot mitigation techniques maintain a seamless user experience? The reduction in bot traffic has measurable advantages (more precise modeling of 'normal' traffic, the ability to see higher-quality, lower-volume attacks), but AI systems are probabilistic and cannot be perfect. False positive rates (how many humans are challenged by the security control) are a concern for many customers. We discuss tools and recommendations for customer-driven adjustment of the risk/friction ratio, and wrap up with lessons learned from our journey implementing AI technology to protect users.
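As an illustration of that risk/friction dial, the sketch below computes the false positive rate over labeled traffic and picks the most aggressive score threshold that stays within a friction budget. The function names and sample data are hypothetical, not from the session's system; FPR here is the standard definition, the fraction of genuine humans (label 0) whose score meets the challenge threshold.

```python
def false_positive_rate(scores, labels, threshold):
    # FPR = fraction of genuine humans (label 0) challenged at this threshold.
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    return fp / (fp + tn)

def threshold_for_max_fpr(scores, labels, max_fpr):
    # Walk candidate thresholds from strict-on-bots (low) to lenient (high)
    # and return the lowest one that keeps human friction within budget.
    # FPR is non-increasing in the threshold, so the first hit is optimal.
    for t in sorted(set(scores)):
        if false_positive_rate(scores, labels, t) <= max_fpr:
            return t
    return max(scores)  # fall back to challenging almost no one
```

Exposing `max_fpr` as a customer-facing knob is one simple way to let each customer choose their own point on the security-vs-friction curve.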

We believe that AI controls will continue to improve user security while simultaneously minimizing friction for users. What, then, is the next challenge for AI systems in CIAM? Our position is that upcoming challenges will demand attention to explainability, privacy, and compliance. We are at the dawn of legislation on data privacy and the ethics of AI systems, and AI systems will need to be adequately explainable (deep neural networks aren't, nor are LLMs) to meet compliance standards. Next year's security systems will have to balance a third weight in the equation: security vs. friction vs. explainability.

Dr. Béatrice Moissinac
Staff Threat Intelligence & AI Researcher
Béatrice Moissinac is a Staff Data Scientist at Okta. She holds two Masters, and a PhD in Computer Science, and joined Okta in 2021. Her main focus is applied AI research to Identity...