This month we launched our Cybersecurity Leadership Summit in Berlin. A pre-conference workshop entitled “Focusing Your Cybersecurity Investments: What Do You Really Need for Mitigating Your Cyber-risks?” was held on Monday. The workshop was both business-oriented and technical in nature. Contemporary CISOs and CIOs must apply risk management strategies, and it can be difficult to determine which cybersecurity projects should be prioritized. Leaders in attendance heard the latest applied research from Martin Kuppinger, Matthias Reinwarth, and Paul Simmonds.

Tuesday’s opening keynote was delivered by Martin Kuppinger on the topic of User Behavioral Analytics (UBA). UBA has become both a successor and an adjunct to SIEMs, and as such serves as a link between traditional network-centric cybersecurity and identity management. Torsten George of Centrify made the case for zero-trust concepts. Zero-trust can be seen as improving security by requiring risk-adaptive and continuous authentication. But trust is also a key component of things like federation architecture, so it won’t be going away altogether.
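To make the risk-adaptive idea concrete, here is a minimal sketch of how a zero-trust access decision might combine per-request signals into a risk score that triggers step-up authentication. The signals, weights, and thresholds are illustrative assumptions on my part, not any vendor's actual policy.

```python
from dataclasses import dataclass

@dataclass
class AuthContext:
    """Signals evaluated on every request, not just at login (hypothetical set)."""
    device_managed: bool      # is the device enrolled and compliant?
    network_known: bool       # corporate network or previously seen location?
    geo_velocity_ok: bool     # no "impossible travel" since the last request
    behavior_score: float     # 0.0 (typical) .. 1.0 (highly anomalous), e.g. from UBA

def risk_score(ctx: AuthContext) -> float:
    """Combine the signals into a single risk value between 0 and 1."""
    score = ctx.behavior_score
    if not ctx.device_managed:
        score += 0.3
    if not ctx.network_known:
        score += 0.2
    if not ctx.geo_velocity_ok:
        score += 0.4
    return min(score, 1.0)

def access_decision(ctx: AuthContext) -> str:
    """Map risk to an action: allow, require step-up MFA, or block."""
    risk = risk_score(ctx)
    if risk < 0.3:
        return "allow"
    if risk < 0.7:
        return "step-up-mfa"   # continuous authentication: challenge the user again
    return "block"

# Example: an unmanaged device on an unknown network triggers step-up MFA.
print(access_decision(AuthContext(device_managed=False, network_known=False,
                                  geo_velocity_ok=True, behavior_score=0.1)))
```

The point of the sketch is the shape of the decision, not the numbers: every request is re-evaluated against current context rather than trusting a session established once at login.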

Innovation Night was held on Tuesday. In this event, a number of speakers competed by describing how their products successfully incorporated Artificial Intelligence / Machine Learning techniques. The winner was Frederic Stallaert, Machine Learning Engineer / Data Scientist at ML6. His topic was the adversarial uses of AI, and how to defend against them.

Here are some of the highlights. In the social engineering track, Enrico Frumento discussed the DOGANA project, the Advanced Social Engineering and Vulnerability Analysis Framework. The project team has been performing Social Driven Vulnerability Assessments, with interesting but discouraging results. In a recent study, 59% of the users tested in an energy sector organization fell for a simulated phishing email. Malicious actors use every bit of available information about their targets, regardless of legality, while organizations providing anti-phishing training are encumbered by GDPR.

In the threat intelligence track, we had a number of good speakers and panelists. Ammi Virk presented on Contextualizing Threat Intelligence. One of his excellent points was recognizing the “con in context”: guarding against bias, assumptions, and omissions. Context is essential in turning information into intelligence. This point was also made strongly by John Bryk in his session.

JC Gaillard posed a controversial question in his session, “Is the role of CISO outdated?”. He looked at some of the common problems CISOs face, such as being buried in the org chart, inadequate funding, and lack of authority to solve problems. His recommendations were to 1) elevate the CISO role and give it political power, 2) move the purely technical IT security functions under the CIO or CTO, and 3) put CISOs on the same level as newer positions such as CDOs and DPOs.

Internet Balkanization was a topic in the GDPR and Cybersecurity session. Daniel Schnok gave a thought-provoking presentation on the various political, economic, and technological factors that are putting up barriers and fragmenting the Internet today. For example, countries such as China, Iran, and Russia have politically imposed barriers and content restrictions. GDPR is limiting the flow of personal information in Europe, and in some cases overreaction to GDPR is impairing the flow of other types of data as well. The increasing consolidation of data under large, US-based tech firms is another example of balkanization.

In my final keynote I described the role that AI and ML are playing in cybersecurity today. These technologies are not merely nice-to-haves but essential components, particularly for anti-malware, EDR/MDR, and traffic analysis. Some vendors have begun nascent work on using ML techniques to facilitate understanding of access control patterns; these techniques may lead to a breakthrough in data governance in the mid-term. At the same time, AI- and ML-based solutions are themselves subject to attack (or “gaming”). Determined attackers can, for example, fool ML-enhanced tools into failing to detect malware. Lastly, Generative Adversarial Networks (GANs) serve as an example of how bad actors can use AI technologies to advance attacks: GAN-based tools exist for password cracking, steganography, and creating fake fingerprints to fool biometric readers. In short, ML can help, but it can also be attacked and used to create more powerful cyber attacks.
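To illustrate the “gaming” problem, the toy sketch below trains a deliberately simple linear malware “detector” on synthetic feature vectors and then computes the small perturbation that pushes a flagged sample across its decision boundary. Everything here, from the features to the model, is an illustrative assumption of mine; real detectors and real evasion attacks are far more sophisticated than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "feature vectors": benign samples centered at -1, malicious at +1.
n, d = 200, 10
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Train a toy logistic-regression "detector" with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted probability of "malicious"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def score(x):
    """Detector output: probability that x is malicious."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Evasion: for a linear detector, the smallest change that crosses the decision
# boundary points along the weight vector, so step just past the boundary.
x_mal = X[n]                                     # a sample drawn from the malicious class
margin = 1.0                                     # overshoot so the evasive score is clearly low
x_adv = x_mal - ((x_mal @ w + b + margin) / (w @ w)) * w

print(f"original score:  {score(x_mal):.2f}")    # well above 0.5 -> flagged as malicious
print(f"perturbed score: {score(x_adv):.2f}")    # below 0.5 -> evades the detector
```

The same principle, searching for inputs that a model mis-scores, is what underlies evasion attacks against far more capable production classifiers.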

We would like to thank our sponsors: iC Consult, Delinea, Cisco, One Identity, Palo Alto Networks, Airlock, Axiomatics, BigID, ForgeRock, Nexis, Ping Identity, SailPoint, MinerEye, PlainID, FireEye, Varonis, Thycotic, and Kaspersky Lab.

We will return to Berlin for CSLS 2019 on 12-14 November of next year.