AI & ML Enhanced Attacks
- TYPE: Combined Session
- DATE: Wednesday, May 15, 2019
- TIME: 12:00-13:00
- LOCATION: AMMERSEE II
All the information that is needed to find and stop bad actors from entering our financial system already exists and is available to you today; it’s just buried in terabytes of messy, unstructured data all over the internet. For those performing investigations and evaluating risk, this needle-in-a-stack-of-needles problem is huge and growing: unstructured data already dominates the web, growing exponentially year over year, and the traditional technology these departments use cannot keep up.
Recent developments in natural language processing (NLP), the field of AI that focuses on human language, have for the first time made it possible for automated systems to find and deliver identity-relevant intelligence hidden in unstructured textual data. These innovations unlock a new world of actionable insight, providing much-needed ammunition in the fight against fraud, money laundering, financial crime, and terrorism.
- All the information financial institutions need to sort criminals from customers exists; it's just buried in unstructured text and spread throughout public and private data
- The traditional identity verification technology financial institutions use can only leverage structured text, allowing many criminals to freely use the world's financial system to support criminal enterprise
- Innovations in natural language processing (NLP), the field of AI that focuses on human language, have finally made it possible for automated systems to find and deliver identity-relevant intelligence hidden in unstructured textual data
- These innovations unlock a new world of actionable insight, providing much-needed ammunition in the fight against fraud, money laundering, financial crime, and terrorism
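As a toy illustration of the kind of matching such systems automate (not any vendor's actual NLP pipeline; the watchlist name and payment note below are invented), the sketch uses only Python's standard library to fuzzily screen free text against a watchlist. Real systems rely on trained named-entity recognition and entity-resolution models rather than string similarity:

```python
import difflib

def screen_text(text, watchlist, threshold=0.85):
    """Fuzzily match watchlist names against word windows in free text.

    Toy stand-in for NLP-based screening: production systems use trained
    NER and entity-resolution models, not character-level string ratios.
    """
    words = text.split()
    hits = []
    for name in watchlist:
        n = len(name.split())  # compare windows of the same word length
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            score = difflib.SequenceMatcher(
                None, window.lower(), name.lower()).ratio()
            if score >= threshold:
                hits.append((name, window, round(score, 2)))
    return hits

# Hypothetical example: an unstructured payment note mentioning a listed name.
hits = screen_text(
    "Payment routed via Ivan Petrov's shell company in Cyprus",
    ["Ivan Petrov"],
)
```

The sliding word window lets the match survive surface noise such as the possessive "Petrov's", which an exact-string lookup over structured fields would miss.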
There are many arguments for why machine learning (ML) is a game changer in the field of cybersecurity, and many ML applications already exist today that protect our systems from attacks in more intelligent ways than before. However, have you considered that your own ML model can be the direct target of an attack? This talk elaborates on adversarial attacks in ML: how they work and how you can defend against them.
- Know how your ML model/application works
- Human interaction is crucial for AI-first autonomous systems
- Defence against adversarial attacks should be built into your ML application design
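To make the idea of an adversarial attack concrete, here is a minimal, self-contained sketch of a gradient-sign perturbation in the spirit of FGSM. The two-feature logistic model and its weights are made up for illustration; a real attack would target a trained production model:

```python
import math

# Toy logistic score: sigmoid(w . x + b). Weights are invented
# for illustration, not taken from any real system.
w = [2.0, -1.5]
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [1.0, 0.5]   # original input, classified positive (score > 0.5)
eps = 0.6        # per-feature perturbation budget

# FGSM-style step: for a linear score the input gradient is just w,
# so moving each feature against sign(w_i) lowers the score fastest.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
```

A perturbation of at most 0.6 per feature is enough to push the score across the decision boundary, flipping the classification while the input still looks plausible.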
The huge promise offered by AI, and particularly machine learning, has led to an explosion in uptake of AI methods applied to a wide variety of business problems. However, machine learning and the data to which it is applied can be extremely complex, resulting in opaque systems that are not understood by their creators and not trusted by their users. In this talk, I will discuss approaches that can improve trust in AI systems, focussing in particular on explainability (how and why did the system produce this output?) and robustness (how can the system be exploited, and can we defend against such attacks?).
- Deploying AI comes with its own unique set of risks and challenges
- Your AI system will be of no use if the relevant stakeholders do not trust it
- AI systems can be gamed and exploited in surprising and subtle ways
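As a minimal illustration of the explainability side, an additive model's output can be decomposed into per-feature contributions, answering "why did the system produce this score?". The toy linear risk score and its weights below are hypothetical:

```python
# Toy additive risk score with invented weights, for illustration only.
WEIGHTS = {"txn_amount": 0.8, "new_device": 1.2, "country_risk": 0.5}
BIAS = -1.0

def score(features):
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Per-feature contributions to the score (additive attribution).

    For a linear model this decomposition is exact; for non-linear
    models one would need SHAP-style or surrogate explanations.
    """
    return {k: WEIGHTS[k] * v for k, v in features.items()}

features = {"txn_amount": 2.0, "new_device": 1.0, "country_risk": 0.0}
contributions = explain(features)
```

The contributions sum (with the bias) exactly to the model output, so a stakeholder can see which feature drove the decision, which is one route to the trust the talk argues for.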
- Registration fee:
- Contact person:
Mr. Levent Kara
+49 211 23707710
- May 14 - 17, 2019 Munich, Germany