
Agenda

The Dark Side of AI

Combined Session
Thursday, June 06, 2024 17:30—18:30
Location: B 05

AI's Shadow Realm: Navigating the Ethical Abyss
17:30—17:50

Let's look at the complex and often overlooked underbelly of Artificial Intelligence (AI), exploring the multifaceted ethical, social, and technical challenges that emerge as AI systems become increasingly integral to our daily lives. We will examine real-world scenarios where AI's unintended consequences have sparked significant debates on privacy invasion and the amplification of biases, underscoring the urgent need for a paradigm shift in how these technologies are developed, deployed, and governed.

Imagine the near future of online deceit and manipulation, marked by phenomena such as artificial intimacy, challenged cognitive liberty, and advanced data brokerage.

This presentation aims to foster a more informed discourse on AI, advocating for a future where technology aligns with societal values and contributes to the betterment of humanity rather than exacerbating existing inequalities. Join us as we navigate the murky waters of AI's darker side, emphasizing the importance of responsible innovation and the collaborative effort required to steer the digital future towards ethical horizons.

Emilie van der Lande
Cyber Risk Consultant
Independent
Emilie combines expertise in Digital Identity, Data Privacy, and Cyber Strategy, offering a multifaceted perspective at the crossroads of technology, law, and business. Emilie is equipped with a...
Where the Wild Bots Are: Stopping Bots while Minimizing Friction With AI
17:50—18:10

Bots are a scourge on customer identity and access management systems before, during, and after the login process. How can we use the power of AI to find malicious bots and protect users while minimizing friction? This session presents real-life examples of bot attacks and AI-based bot mitigation techniques, quantifies their efficacy, and explores what's next for AI in cybersecurity.

AI chatbots can do your homework, but can they safeguard your users while ensuring a seamless experience? During this session, we share our firsthand experience in creating AI controls that strike a balance between robust user protection and minimal user friction.

First, you must know your enemy. Our session provides a comprehensive perspective from the vantage point of a Customer Identity and Access Management (CIAM) provider: the geographical origins of bots, the networks they exploit, their targets (based on factors such as enterprise size and industry), and the tactics they use to achieve their primary objectives, which often revolve around financial gain.

With the backdrop of these threat actors' attempts to exploit CIAM systems, we compare three distinct approaches: a no-control baseline, rule-based controls, and AI-based controls.

AI-driven defenses demonstrate significant improvement over rule-based baseline systems. Furthermore, an unexpected byproduct of a well-performing adaptive system like this one is its quantifiable deterrent effect on bots: like a lock on a bike, it deters attackers toward easier targets.

But can AI-based bot mitigation techniques maintain a seamless user experience? The reduction in bot traffic has measurable advantages (more precise modeling of 'normal' traffic, and the ability to see higher-quality, lower-volume attacks), but AI systems are probabilistic and cannot be perfect. False positive rates (how many legitimate humans are challenged by the security control) are a concern for many customers. We discuss tools and recommendations for customer-driven adjustment of the risk/friction ratio. We wrap up this section with lessons learned from our journey implementing AI technology to protect users.
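The risk/friction trade-off described above can be sketched in a few lines of code. This is a hypothetical illustration only (the score distributions and the threshold mechanism are assumptions, not Okta's actual system): a probabilistic bot score is assigned to each login attempt, and a customer-tunable threshold decides who gets challenged. Raising the threshold lowers the false positive rate (fewer humans challenged) at the cost of the bot detection rate.

```python
# Hypothetical sketch of the risk/friction trade-off in bot mitigation.
# Each login attempt gets a probabilistic bot score in [0, 1]; attempts
# scoring at or above a tunable threshold are challenged.

def rates(human_scores, bot_scores, threshold):
    """Return (false_positive_rate, detection_rate) at a given threshold."""
    challenged_humans = sum(1 for s in human_scores if s >= threshold)
    challenged_bots = sum(1 for s in bot_scores if s >= threshold)
    return (challenged_humans / len(human_scores),
            challenged_bots / len(bot_scores))

# Toy score distributions, invented for illustration only.
humans = [0.05, 0.10, 0.15, 0.20, 0.30, 0.35, 0.40, 0.60]
bots = [0.55, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 0.99]

for threshold in (0.3, 0.5, 0.7):
    fpr, det = rates(humans, bots, threshold)
    print(f"threshold={threshold:.1f}  FPR={fpr:.0%}  bot detection={det:.0%}")
```

A stricter threshold (0.7) challenges no humans in this toy data but lets one bot through; a looser one (0.3) catches every bot while challenging half the humans, which is exactly the ratio a customer-driven control would let each deployment tune.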

We believe that AI controls will continue to improve user security while simultaneously minimizing friction for users. So what is the next challenge for AI systems in CIAM? Our position is that upcoming challenges will demand attention to explainability, privacy, and compliance. We are at the dawn of legislation on data privacy and the ethics of AI systems, and AI systems will need to be adequately explainable (deep neural networks aren't, nor are LLMs) to meet compliance standards. Next year's security systems will have to balance a third weight in the equation: security vs. friction vs. explainability.

Dr. Beatrice Moissinac
Staff Threat Intelligence & AI Researcher
Okta
Béatrice Moissinac is a Staff Data Scientist at Okta. She holds two Masters, and a PhD in Computer Science, and joined Okta in 2021. Her main focus is applied AI research to Identity...
The Dark Side of Innovation: Identity Theft, Fraud and the Rise of Generative AI
18:10—18:30

Generative AI offers remarkable potential for innovation, but we must be vigilant about its dark side. As technology evolves, so do the tactics of cybercriminals; importantly, though, we are dealing with a recognizable playbook, and we have the tools to meet the challenge. By adopting proactive fraud prevention and strong authentication measures, and by fostering a culture of awareness, we can harness the full potential of generative AI while protecting ourselves from its misuse. Join us for a thought-provoking session on what this means practically and how to combine the available tools to effectively combat the risks of AI-generated identity theft.

Frances Zelazny
Co-Founder & CEO
Anonybit
Frances is a seasoned marketing strategist and business development professional with over 25 years of experience with start-up and scale-up companies, primarily focused on biometrics and digital...