Hello everyone. Good morning, good afternoon, wherever you are. I'm John Tolbert, Director of Cybersecurity Research here at KuppingerCole, and today I'm joined by a couple of colleagues and friends. Matthias Reinwarth, also from KuppingerCole. He's our IAM Practice Director and one of our lead researchers on AI solutions.
Welcome, Matthias. Hi, John, and hi to the audience. Looking forward to our discussion today. And we also have Scott David, who's a fellow analyst here at KuppingerCole, and his other role is Executive Director of the University of Washington's Information Risk and Synthetic Intelligence Initiative.
Welcome, Scott. Thanks, John.
Yeah, it's a pleasure to be here. Looking forward to the conversation. Thank you. So today our topic is questions for enterprises to think about when considering the use of generative AI products. A little bit of logistical information before we get started: everybody's muted centrally, so there's no need to mute or unmute yourself. We are going to do a couple of poll questions near the beginning of the webinar, and we're going to take questions throughout the webinar.
So you'll see that there's a Cvent control panel, and you can enter questions, and feel free to drop those in at any time, and we'll try to take them. We'd like to make this a little bit more interactive. This will be a little bit different format from some of our previous webinars. We're really aiming to have a conversation and hope that you can not only learn from but contribute to our conversation. And lastly, we're recording this and both the recording and the slides will be available in a couple of days.
So first up, I thought I would just list out some of the more well-known LLM, large language model generative AI applications that are out there. And really the point I want to make here is employees are using them every day now. I've done a lot of travel in the last month or so, and not that I would ever look over anybody's shoulders on planes, but walking up and down the aisle, I was actually astounded to see so many GPT screens open. So this is something that we really all have to deal with.
We need policies, we need to really understand what can be done, what should be done, and how, if there is a way to do it, to codify their usage. Any thoughts on the applications that we've got listed here, or on the observation that these things are already in widespread use?
Well, I'll just jump in. Oh, go ahead, Matthias.
Yeah, I think new applications are added by the day, and you just cannot keep up. There's always this built-in AI stuff, and you really don't know what happens in the background. Where's the API? Where's the processing? Who is the data owner after you've handed that over? So there's a lot of information to look into, and there are lots of applications added to this list every day, and it's difficult to catch up with that. You've mentioned that I'm researching that.
If I manage to test and use an application or two a week, then I'm good. So that shows you how dynamic and agile that market is.
Yeah, one other thing that we were chatting about before the live stream started was that, you know, from a CISO and CIO perspective, we used to have the bring your own device problem, that people were used to using these things at home, and so they bring them to work. They just keep using them. Then we had the bring your own platform problem. People are using Facebook and other things like that at work because they were used to using them personally, and that democratization is continuing now.
People have access to these things, and they're very powerful tools, and they expect to be able to use them at work because they're very productive at home using them. So now we have the bring-your-own-intelligence problem, and those challenges keep compounding. Part of our discussion today is to compare notes among the assembled group on what some of the practices are, how we can deal with this, what we can learn from those earlier BYO problems, and how these are different. Thanks.
Well, you know, there were some large companies that made the news for banning some of these well-known AI applications, and I know a few of them have reversed those decisions too, probably because it was a combination of BYOD and BYOAI, bring your own AI on your own device. So yeah, I think we're far enough along in this part of AI evolution to realize that there's a certain amount of inevitability. People are using this, and we need to find ways to deal with that.
Yeah, and you really don't hear people talking about the perimeter as much anymore. For a while we had the perimeter as a notion for security or privacy purposes, then we had kind of a perforated perimeter, and now the perimeter is really virtual. It's a policy perimeter. There are technology perimeters we'll talk about also, but the perimeter is really in the wild, which gets kind of interesting really quickly.
Right, when you say bring your own... yeah, go ahead, continue, sorry. The poll is important.
Yeah, let's just launch the poll, and invite everybody to answer it, and go ahead, Matthias. Yeah, I think when you say bring your own device, you always also have the chance to run even large language models on your own device, as long as it's powerful enough.
So, if somebody is really making the right decision when it comes to the kind of data that is being processed, there's even the choice to use your BYOD device for your BYOAI, and to run it really on your own box without the data even leaving your box. That is an additional layer, and that is even more democratization, as you have mentioned. Throwing it somewhere where you don't know where it is is always the worst decision you can make, but having the right tools at the right place for the right use case, that is maybe also something that we should look into.
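As a concrete illustration of that "on your own box" option, here is a minimal sketch, assuming a local runtime such as Ollama is installed and serving its default HTTP API on localhost, with a purely illustrative model name. The point is simply that the prompt, and any data embedded in it, never leaves the machine.

```python
# Minimal sketch: querying a locally hosted LLM so prompts and data stay on
# the device. Assumes Ollama (or a compatible local runtime) is running on
# localhost:11434 and a model named "llama3" (illustrative) has been pulled.
import requests

def ask_local_model(prompt: str) -> str:
    # The request goes only to localhost, not to an external provider,
    # so sensitive content in the prompt never leaves this box.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize these internal meeting notes: ..."))
```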
I had a discussion around that earlier this year at a conference in Vienna, where we discussed where AI actually makes sense, not just because everybody does it, or because it's sexy and so helpful, but where is it helpful? Where does it provide value to the individual, to me as somebody working within a company, to me as a private person trying to learn something, or to KuppingerCole or any other organization providing services to the outside world? Where does it really make sense to provide a solution? Do I have the data? Do I have the compute power?
Do I have everything that I need for a successful deployment of AI, or is it just a me-too thing? I think that is also something, beyond all the security and CISO thought that needs to be spent on it. Just having the right solution and not wasting money on this, that is also an important aspect.
Yeah, that's a great point. So, we'll take a look at the results of this one a little later on. We'll keep these polls open so you can continue to vote, and let's show the next one.
So, Scott, would you like to kind of walk through the background on some of this, and this actually will form the basis of the questions that we're going to talk about for the rest of the session here. Yeah, one of the reasons we wanted to move this poll to the beginning is these are the topics we're going to cover, maybe not get to all of them equally today, but just to get a sense of the temperature, take the temperature of folks' concerns at the beginning of the session, and then for us to have a chance to revisit it at the end and see if there's any change in sensitivities to it.
And so, just a couple of minutes on setting the stage here. We have a lot of CIOs, CISOs, and other people who have organizational responsibilities, and we had a conversation yesterday with another gentleman from KuppingerCole who raised the issue that CIOs, CISOs, and other people on this call have different positions in different organizations, different reporting responsibilities, and so we recognize that where you are in the diagram of the organization is also going to influence what issues you see.
So, there's certain industries you may see more HR issues if it's more service-oriented, like running a casino, or if it's, you know, something with more IP, like my sister works at Disney, ABC, Cap Cities, and they have more IP issues with the characters and things like that. So, you're going to see different kinds of issues depending on where you are in the organization, what kind of sector, and we recognize that.
So, not all these issues are going to be equal, but one thing that is equal among all the folks on this call is everyone's looking for rules. Give us some rules of the road.
It's like driving to work and all of a sudden there are no stop signs and no traffic signs, right? So what do we look at?
And so, one of the things that we want to explore with this group in this session and in other sessions in the future is, you know, we have technical requirements. There's a lot of proliferation. We still need more, but there's a lot of proliferation of technical requirements. But one of the things that we talk about in our groups here is BOLTS, Business Operating Legal Technical Social.
So, the BOLTS, Business Operating Legal Technical Social. Each one of those has a set of requirements. The business requirements put on you are budgeting things, operating requirements, how do you function, legal requirements, obviously, compliance, different jurisdictions, contracts, and then social requirements. What are the norms, ethics, expectations, customer expectations, etc., shareholder expectations, owner expectations.
So, all that... why are we talking about all that? Well, the CISO now increasingly, or the CIO, or anybody, even if you're not C-suite, is working in those areas. Organizations are becoming information organizations. It used to be that the physical plant was important, and that's still important. But increasingly, the information flows are the heart of the organization.
And so, whether it was intended or not, people in IT now are really becoming, you know, it used to be IT and it used to be the people who were, you know, the nerds of the organization, right? And now, it's really who runs the place. And what are the skill sets? What are the things we need to look at?
And so, we'll introduce that in this presentation here. But two other points to think about.
So, we don't yet have standards of care. And so, part of the exercise is, what should those practices, the best practices be?
Well, it's going to be drawn from practices. All human organizations have best practices and standards.
They came, they didn't fall out of the sky, they came from practices. So, part of the thing here is to convene everyone. Let's have a discussion about practices. Let's get some agreements on practices. And let's start suggesting that these are good practices.
You know, out there in the world, they're not going to come from the sky; they come from people like the people convening this meeting. That's A. And then B, the last point before we jump into the details here, is that AI is not the problem. AI is an artifact of the problem. The challenge is an exponential increase in interaction volumes, and that's led to cloud challenges and identity challenges. And we humans don't have institutions and metrics for managing exponential change. So what do we do if we don't have rules for exponential change?
Well, the functional surface of our systems, the operating surface that employees and customers deal with every day, is now equal to the threat surface. The functional surface equals the threat surface of all these systems. And what we're basically advocating for is instrumenting that surface: instrumenting the functional surface of interactions and the threat surface of interactions. It really becomes like a neighborhood watch, because of the people. It's like DHS says: if you see something, say something.
We need to figure out ways to make the people and the systems all instrumented to respond to security and privacy challenges. And so, that's the only way that I know of that you can deal with exponential change.
So, let's jump into the questions. But I just wanted to raise those few points: you're not alone. As CISOs and CIOs, we want to convene the communities to work on this together, and to recognize that if it feels more challenging than ever, it is, because we're talking about exponential change and we don't have experience with this. The only way we can do this is to do it together.
So, just wanted to start out with those kind of cheerleading for the process here and then we can jump into the substance. Thanks. Great.
So, yeah, we'll take a look at the results of those polls in a little while. But let's launch in, and you'll see that our top 10 risks match up with the choices you had to vote on.
So, quality and accuracy. This one, I think, is really interesting. There have been a number of news stories lately about what some might call AI washing, where companies talk about, oh, we're using AI for this or we're using AI for that, and then later it's discovered that, well, no, you're just using hundreds or thousands of people offshore somewhere who are looking at cameras or listening to speech and typing in the text for you. There's really no AI involved. It's good old-fashioned human intelligence and networking making it happen.
So, I mean, I think we still need to be aware that even though there's an awful lot of hype about AI, especially from product marketing people, not all of it is real. Some of it is real and some of it's very, very useful. And I think, to Matthias's earlier point, that's one of the things that we need to discern. What is appropriate? What is useful? Because it's not all going to be fit for purpose for enterprise use either. And one last point on that before I turn it over to Matthias: we also have to think about the quality side. It's not just whether or not it's real. I have done a few tests.
I think AI can, generative AI in particular, can be helpful at certain tasks, but it also tends to constrain your way of thinking. So I think AI in the hands of experts is useful, but AI in the hands of people who may be novices in a field, it may be good for helping them get started, but it certainly doesn't provide all the information that they need. You need context, experience, history to really make full use of generative AI, I think, at least at this point.
Matthias, do you have any thoughts on that? Absolutely. As analysts, as we are dealing with topics that are new to the market, new to our customers, new to ourselves, we always need current, new information that really covers new topics. And I've had the experience several times now that if you go back and ask, okay, what can ChatGPT really do?
Well, it's creating text. And creating text does not necessarily mean that it's useful text. Does it make sense? Is it accurate? Is it topical? Is it up to date? It has happened to me two or three times now that I went to an online publishing platform built into Amazon and found a document, a book, that really looked like it would fulfill my needs. And of course, I spent some money, and it was just lowest-level ChatGPT work that was exported into 50 pages and cost 20 bucks. And that is what you've mentioned. This is low quality. This is from the novice.
This is from somebody who has not had any experience. And I think this is something, why do I mention that? I mentioned that because it influences your reputation, the reputation of the author. If you have such a book on Amazon and it's really bad, and they were bad, there is no better word for that. You will have these reviews, you will have this document out there and you will never get it away. So that is something that can really influence just this person.
And if you do that as an organization and you fail on any of these dimensions, be it quality or accuracy or timeliness, you have an issue. And that is something that should be relevant to the C-level that you've mentioned, Scott. That is something that we should avoid, because this is pars pro toto: if one person fails, the image of the whole organization can be damaged by it. Yeah. It's funny, it takes the right to be forgotten to a whole other level of brand, right?
You say, don't pay attention to what happened there. That's very challenging. Yeah. A couple of other points on quality and accuracy. Most human suffering comes from accident, not the intentional actions of others. And so with AI we have amplification. AI potentially amplifies harms because it accelerates things, but it can also accelerate interventions in harms. But we don't know yet exactly how to use it, exactly what Matthias is saying. We have to be very specific.
And one of the things we did when I was an attorney years ago... so I was an attorney for 27 years and have been at the Applied Physics Lab for 12 years. This is not legal advice, by the way; talk to your lawyers before making any significant business decisions. But I represented the Open Identity Exchange and the OpenID Foundation, and we formed OIX. Many of you may be familiar with OIX and the trust frameworks for identity. And what we did in that context was look at data actions specifically.
Because lawyers, engineers, and policymakers know that when you have actions, a certain subset of them don't go according to plan, and that's risk. And so when you look at each action, then you can look at exactly what Matthias is saying: what is it appropriate to bring AI to? What does it do for you? What risk might it have?
But really for the CISO and CIOs and others working now in information flows in the enterprise, it really is nobody else is going to do this kind of analysis where you look at each flow internally, each transfer of information, each receipt of information, whether it's from inputs into the organization, within the organization, or outputs from the organization, each one of those information flow actions can have its own challenges, has its own profile of whether AI is going to be important, and also can be a potential source of intervention to de-risk other pieces.
So the two points I'd raise here are looking at misinformation and disinformation differently. Disinformation is the intentional actions: competitor actions, industrial espionage, hacking attacks, things that are intentional. And then misinformation is the negligent side, where people look at AI output and say, oh, that looks like a good answer, I'll run with that, and then send it out as a marketing piece, or depend on that AI in negotiating a contract with a supplier.
So really separating out each action and looking at whether it has the potential for ignorance, misinformation, and negligence, or for attack by threat and intentional action, again across the BOLTS spectrum: business, operating, legal, technical, and social variables. Getting your organization to spend more time at that level of granularity is the best way to address this, rather than having some big grand policy that says all AI is bad in all circumstances or all AI is good. That really doesn't get you anywhere.
Yeah, I would agree. There's a mix of good and bad, and you've got to think about the quality side and what are you going to do with it. So let's look at our next major area here, data privacy and security. So you all have already mentioned Right to be Forgotten and hinted at GDPR. I think that's a good segue into this one. Employees need some guidance on what to do with sensitive personal information or even, we've got IP as a separate risk down here later, but sensitive proprietary or personal information. What kind of guidelines can you foresee that are needed in those areas?
Matthias, you want to start out? Yeah, I think the question is, where is the system that you're actually using? If it's something you create for your own organization, so it's not the vanilla ChatGPT that everybody uses but a system that is created for a business purpose, then of course you are of the opinion that you can put everything in there because it's your own, it's protected, it's behind your own guardrails, and you can just use it.
But training a system with data that might be proprietary, might be personally identifiable information, is something that can come back at some point, especially when the training data is not too massive. With just a small amount of data, it can be an issue even within the same organization that customer information travels from A to B when it shouldn't be at B. So that is a really different problem.
And for everybody out there, if you think of something like an AUP, an acceptable use policy for AI: double thinking, and again double thinking, should I put this information into that system, because there is personally identifiable information in there. That is the simplest recommendation that you can give: think it over. It should maybe not be in that system, and then decide whether that system is adequate. It could be something like ChatGPT, any of the stuff that you had on your slide and much more, John, but just double thinking it, that is the first starting point.
Is there a chance of this data leaking and do I want to be the source of that data leak? I don't think so.
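One hedged way to codify that "think it over" step from an acceptable use policy is a naive pre-check that blocks text containing obvious PII patterns before it is sent to any Gen AI tool. The patterns and the placeholder submission function below are illustrative only and are no substitute for a real PII classifier or human review.

```python
# Minimal sketch of an AUP-style pre-check: refuse to forward text that
# matches obvious PII patterns. The regexes are deliberately simple and
# illustrative; a production check would use a proper PII detection service.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s()/-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def send_to_genai(text: str) -> None:
    # Placeholder for whatever Gen AI client or API the organization uses.
    print("Submitting to the Gen AI tool:", text[:60], "...")

def safe_submit(text: str) -> None:
    findings = find_pii(text)
    if findings:
        # Block and ask the user to reconsider, per the acceptable use policy.
        raise ValueError(f"Possible PII detected ({', '.join(findings)}); review before submitting.")
    send_to_genai(text)
```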
Yeah, go ahead, John. Did you have a comment on that?
Well, I was just going to say, I think you're absolutely right and we work with large enterprises that are hungry for data. So there is this perceived massive business need for using tons and tons of data to be able to train machine learning algorithms and things like that.
So yes, there are organizations that would love to be able to take PII and use it for training purposes, but there is an inherent risk there, just like you're saying. Scott, go ahead.
Yeah, so a couple of points here. For years, in our work at the university and in legal work, privacy and security were seen as being opposed to each other.
You know, you get more security, maybe you intrude on privacy. And for years I've been advocating for looking at privacy and security as symptoms of an illness. If you treat the symptoms, that's palliative; it makes you feel better, but it's not therapeutic, it doesn't cure the illness. The illness, I think, behind data privacy and data security, which is amplified by AI, is channel integrity. The input channel and output channel from your organization and the processing channels: do they have integrity? Matthias was just referencing leakiness. That's exactly the question, right?
Is, do they have the integrity? Because if you have integrity across business, operating, legal, technical, and social metrics, so it's not just technical integrity, but legal integrity, business integrity, operating integrity, social integrity, then you have a system that can be private and secure. So the good news is that GDPR and HIPAA and Gramm-Leach-Bliley in the United States and state data breach rules in the United States and other rules have been focusing on data security as a surrogate for privacy, right?
They've been looking not at information security, not at context and meaning, but at data security. And the focus has been on data because, since the 1970s, the fair information practice principles have been based on data flows. There's a great article by Bob Gellman, G-E-L-L-M-A-N, on the history of fair information practices that I suggest everyone read. It gives you the whole trend of where they came from. So why is that interesting?
Well, because we've been forced into compliance with data security rules, because of GDPR, et cetera, that's been the focus. So all of our organizations are geared up for data integrity; the technical integrity metrics and legal integrity metrics, we've got. That's good, because we've already geared up for compliance with those, so now the exercise is filling in the blank: do we have integrity on business, operating, and social? And so that's point one.
Point two, we just produced an article I wrote with a gentleman from W3C and a woman, Lynn Parker-Dupree, who used to be the chief privacy officer of the Department of Homeland Security in the United States, and now she's in private practice. And what we did is we're arguing for a new approach to privacy and security called functional privacy, functional security. And what they are is looking at the metrics for systems in your enterprise on how we recognize, remember, and respond to people. And that's all the interactions we have.
So an employee shows up with a card to get in the building, you have recognize, remember, and respond. Do they get in the building? That's the response. It's authentication and authorization is the common parlance for it. The reason to mention it here is there's a migration now to functional approaches to these. Everyone's sick and tired of having no guidance on what to do on privacy or having different jurisdictions.
And so what we're trying to do, and what we're hoping to do in this series of webinars, is to understand what practices everyone is using to get reliability in their systems of recognizing, remembering, and responding. What are all of you listening to this webinar doing? We want to get those practices together in a pile, then convene this group, expand it, and stare at those practices to see how we can best address both privacy and security from all of your existing practices. Thanks. So for the next one, let's look at legal and compliance risks.
And here we're kind of saying, you know, maybe there's already some sort of policy in place. So maybe you have unauthorized use of generative AI. What might that do?
You know, that's where intellectual property risks come into play. You could have violations of copyright, inappropriate use of trade secrets, patent issues, and then Scott's already mentioned things like GDPR and HIPAA.
You know, this could lead to non-compliance if organizations are allowing employees to input data that falls under those jurisdictions into different generative AI applications. Matthias, do you have any thoughts on this?
Yeah, absolutely. I think that is also something that people need to be trained on, and systems need to be built for that specific purpose, to avoid this kind of copyright infringement, et cetera. I'm a huge fan of great prompts; elegant prompts providing great results is a great thing. But if I go to ChatGPT and say, act as an IAM specialist, help me conduct a first workshop with a customer for the introduction of a privileged access management solution, provide me 10 questions for the first workshop, and on and on and on.
The results that I will get are most likely protected by somebody, because this is very specific knowledge, put online by somebody who works in that area. And if you're unlucky, there will even be the same wordings built into the output, because it's just copied from A to B. And that is why generative AI has this notion of being a bad parrot. So that is something that needs to be avoided.
So A, training people to write proper prompts: providing context, providing content, using your own training data and not something that is publicly available, content stuffing, pages of content stuffing, making the AI focus on what you really provide as input. That can really help. It's not perfect.
So A, good prompts. B, design the system so that it is really able to produce content based on what you provide and not what's already there. And the third thing is use your brain: control what's coming back. Does this look like something that really could just come out of ex machina, or does it look as if it has been copied from somewhere, in which case I should not use it?
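As a rough sketch of point B above, one way to make the model focus on what you provide is to wrap the question in a prompt that restricts answers to supplied internal content. The helper below is illustrative and independent of any particular LLM client.

```python
# Minimal sketch: build a prompt that instructs the model to answer only
# from the internal documents you supply, rather than whatever it absorbed
# from the public web. Pass the result to whichever LLM client you use.
def build_grounded_prompt(question: str, internal_docs: list[str]) -> str:
    context = "\n\n".join(internal_docs)
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say that it does not.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example usage with made-up internal content:
prompt = build_grounded_prompt(
    "What are the first workshop questions for the PAM rollout?",
    ["Internal IAM playbook excerpt: ...", "Customer kickoff checklist: ..."],
)
```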
So there are different angles, but designing systems, designing prompts, designing your use of LLMs in a way that prevents these copyright infringements, these compliance and legal challenges when it comes to intellectual property, that is doable, but it's not simple. Yeah, I'll add a couple of notes. Those are excellent points, Matthias. First of all, for legal and compliance, we need to think of regulations and laws, but also contracts. Contracts and laws all establish duties; they're just different.
Some are public law duties and some are private law duties and contracts. So why do I mention that up front?
Well, this group that's convened here has the power to create future standard-form terms for contracts that can lead to best practices and essentially self-regulation among industries. That's done very often where you have a group: insurance, brokerages, lawyers, et cetera. You have these agreements, and they're contract law, so they work across jurisdictions as long as they're consistent with the local law.
And so if that seems like it's beyond the ability of any one person to do, that's the reason to have convenings like this for us to all get together and say, okay, look, the laws are lagging on the stuff. We get it. Legislation takes a long time. Laws are different. Why not make some contracts to act as epoxy? I would say that contracts are like epoxy, not like super glue. Super glue just glues things together. Epoxy glues things together and fills gaps. And so we can really do that with the existing laws and say, okay, what do we need? Let's create a contract.
We could do that right here on this call right now. And if people adopt it, we're off to the races. So it's something to think about as an empowerment for the folks on this call for filling those gaps.
Two, the laws are insufficient right now. We know that. We don't even have full laws for the internet, we're already at AI, and it's going to keep going. And they're inconsistent. There's a really nice article that came out recently from George Mason University, with an introduction by a guy named David Bray, B-R-A-Y, if you want to search for it. George Mason did a review of 60 different laws. They're not all regulations, some are incentives, but they're AI laws.
So that might be useful if you're in a global or multi-jurisdictional organization; they identify some trends among the laws, which is nice. Then a couple of other specific points. We're going to talk about IP later, but in anticipation of that: whenever we talk about IP, you really need to say copyright, patent, trademark, trade secret, certification mark, and really separate them out, because they're totally different regimes, totally different protections and rights and duties, and very different exposures for your enterprise.
So we'll get to that a little bit more when we talk about IP. But also, what about stuff that's not IP? Legal and compliance data is not covered by IP per se. The presentation of data may be copyrightable, but the data itself is not copyrightable. So what kind of contracts do you have on data? What kind of regulations are there on data? Looking at knitting that together, I suggest treating AI as if you hired an insane employee who is not trustworthy right now, because you just don't know.
It's like those situations where somebody does a trade for Goldman Sachs and loses $60 million or $60 billion, and then they say, well, let's fire that guy. Okay, you fire the guy. You were paying him $300,000 a year; you'll hardly make it back. It's the same with AI: you can have a big error that costs your enterprise a lot, and you're not going to have any recourse.
Here, you don't even have an employee to fire if it's AI, right? Where's the culpability? And so just be aware that walking with baby steps now makes a lot of sense. Once we have this group getting together, groups like this getting together and coming up with standards, we'll have some protection of those standards. We'll talk about that in a little bit. Thanks.
Next up, we've got ethical and bias risks. And here, you know, generative AI models can produce biased or unethical outputs based on the data that they were trained on.
You know, and again, these are things that we've seen some news stories about over the last year and a half or so. You know, and this certainly can lead to reputation damage. It can be used for discrimination. These are things that, you know, we definitely need to watch out for and prevent from going mainstream. What do you all think about that, Matthias? I think the problem is training data. How good is your training data? How well maintained is your training data?
And although that was a simple answer, it's not a simple solution, because cleaning up training data and making it not susceptible to bias and ethical challenges can be hard. I've mentioned that earlier: when does a solution really make sense for you? It makes sense when you have the training data, when you have the volume of training data, when you have historical data that is all good with respect to bias and ethical issues, and you want a solution that makes sense for you as an organization.
And the question is, do I want a solution that can produce answers that may be biased? Or do I want to provide solutions that are more focused, more narrow, building on clean, good, well-maintained training data and providing only a specific subset of results that cannot fall for this bias? Whenever it's a generic language model, you will always be in a situation where you have to clean up that data. You will have to diversify the training data. You need to check your algorithms for bias all the time.
Maybe use another AI to check the other AI, and codify your ethics guidelines into your systems. And that is an ongoing process. This is nothing that is done, checkmark, next thing. This is something that you need to do all the time, and this is really an issue. So I would propose more narrow solutions, which do not come with the risk of falling for bias. And if it's necessary, then you have to do the heavy lifting and go through the training data, through the algorithms, through the systems, and add additional layers of control. Yeah.
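To make one of those recurring checks a bit more tangible, here is a minimal sketch of a disparate-impact test comparing positive-outcome rates across groups in training or scoring data. The field names and the four-fifths threshold are illustrative assumptions, not a complete fairness audit.

```python
# Minimal sketch of a recurring bias check: compare selection (positive
# outcome) rates across groups and flag groups falling below a threshold
# relative to the best-performing group (the "four-fifths rule" heuristic).
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Rate of positive outcomes per group; expects 'group' and 'selected' keys."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row["group"]] += 1
        positives[row["group"]] += int(row["selected"])
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_flags(records: list[dict], threshold: float = 0.8) -> list[str]:
    """Return groups whose selection rate is below threshold * best rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

# Example with made-up data: group "B" would be flagged here.
sample = [{"group": "A", "selected": 1}] * 8 + [{"group": "A", "selected": 0}] * 2 \
       + [{"group": "B", "selected": 1}] * 3 + [{"group": "B", "selected": 0}] * 7
print(disparate_impact_flags(sample))
```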
And this is a fascinating one, the ethics and bias. So a couple of things. So I work with the IEEE on their ethics of agentic AI working group right now, which agentic AI is just a multi-step kind of AI, not a single prompt. And it's fascinating because there's a couple of things going on here.
So one, ethics traditionally, as far as I understand it, is human views on what it means to be human. So it's kind of interesting when you think of that, in a sense, what we're doing with ethics is saying, how can we socialize AI to care about humans? And so it's kind of an interesting, when we use the word ethics and say corporations should act ethically, it's a different kind of issue because we're not really talking about human actors there. We're talking about human organization actors. So that's a whole other question on what it means for an organization and enterprise to be ethical.
And along those lines, one of the questions I asked in the IEEE meeting is, well, is it intrinsically unethical to train AI systems on the English language and not other languages? Because what you're doing is you're dismissing the empowerment of all those other languages and those other cultures, et cetera. So is that colonial? Is that intrinsically unethical? That's a big question beyond any of our abilities to answer directly, but it's something to consider.
The other thing is, in terms of bias and ethics, we talk about the AI systems hallucinating and they do, but also humans hallucinate because when we see these results, it's called pareidolia for the nerds out there, P-A-R-E-I-D-O-L-I-A. When we see patterns and things that aren't there, like seeing a lion in the grass, but there's no lion in the grass, it's a survival thing. So we have this computational output of AI staring at English language and it spits out this output that can pass a medical exam in the style of Ernest Hemingway. And we're the ones projecting the meaning onto it.
It doesn't exist. It's a computational result. So actually we're hallucinating as well. And that's a big deal, because then you have people saying, oh, this looks pretty authoritative, I'll file it with a court, or I'll submit it in my negotiation, because it looked authoritative. We're hallucinating that it's authoritative. And that's a big problem in organizations, because then you have these organizations making decisions with, again, the crazy, insane employee that's called AI.
So anyway, it encourages us to think of the mutual socialization of AI, of us and us of AI, when we're thinking about bias and ethical issues. Thanks.
Yeah, those are really good points. It reminds me of a book I read a year or two ago. I can't remember the author off the top of my head, but it was about cognitive science. And the implication was that we all sort of hallucinate reality all the time. It's just how finely attuned to reality are our hallucinations. And I thought, you know, if you take that view, then you really can't fault AI for hallucinating because we do it too. So next one here, HR and workplace culture risks.
I think we're already at the point where we've got job candidates who are using generative AI to help fine-tune or totally generate their resumes, which are then sent off to companies which are using AI to look for keywords, look for matches before a human actually gets involved. And, you know, we could already be over-relying on AI parsing of resumes such that, you know, really good, qualified people are not even getting, you know, a chance for an interview.
So, you know, that's one of the risks here. And the other is, you know, job displacement or fear of job loss.
I mean, we've already heard this. You know, this has been rife within the tech industry itself over the last six to nine months.
You know, some of the irrational exuberance around what we will be able to get AI to do such that we can lay off lots of employees. So I think we've got two related and potentially very dangerous risks just in the field of HR and workplace culture.
Well, Matthias, your thoughts on that? Only a few. I think that AI in our HR processes is something that you double think and double think and double think anyway. Because if you're looking for talent, and that does not only mean a company like KuppingerCole looking for creative and ingenious analysts, but any company, maybe the person that you are looking for is not well codified into the algorithm that you're using to identify that person based on the data you're providing for analysis.
So maybe that is a generic or general failure in the first place. I know that I'm preaching to those who already know, but I would avoid that in general. There are other types of organizations that need a constant stream of movers, of changers, of new entries, and lots of fluctuation; they maybe need some of these solutions, but I can only recommend double- and triple-checking that these solutions are compliant with everything that we said before when it comes to legal and compliance and ethical and bias risks.
And applying that to HR scenarios should be as narrow as possible. That is just me as a humanist on that point, but I just would not like to do that, especially not for real decision-making in HR.
Well, okay, avoid it if possible, but the fear of job loss, that is something that is, yeah, it's just around the corner. Think of journalists being replaced by bad parrots again. So that is something that is really to be avoided. And I think the downside is already visible. When you say an LLM is good in copying text from A to B, somebody has to write the original. Somebody has to be somewhere doing that work as a journalist and just copying it from A to B and using just an AP message or something like that for making the news is just not good enough.
And I think there will be a bounce back there as well. Life will change and working environments will change dramatically, but I think the output will be different from what we expect, just being replaced, replacing the analyst with an LLM, I hope not.
Yeah, and this is another example. Those are great points. And this is another example of where the CISO or CIO or other people in that function need to help the HR people, reach out to them, and help them understand what the heck is going on. Because again, with the leadership of information flows, you need to help other people understand what they're seeing. A couple of specific points on HR. We have all this digital twin stuff going on now, right? You have digital twins, you've got agentic AI, you've got synthetic data sets that people are using.
These are all concepts that are bubbling up in different spots where we have this kind of separate realities that we're generating. So am I going to have this situation where I go for a job in the future and they're going to say, oh, look, Scott David did this thing.
And I say, no, no, that was Scott David 3.0 that did that thing. That's not me, right? So what are we even talking about with HR? Who is doing the action? That's all unsettled in the agentic AI world in terms of culpability, responsibility, and accountability, all those things. So just be aware of that and help the HR people think through what they're going to start seeing. Another specific thing is credit reports. Credit reports in the United States are regulated by the FCRA, the Fair Credit Reporting Act.
And under that, a credit report is allowed to be used in hiring, and when you generate something off of a credit report, under the statute it is itself considered to be a credit report. So if you start having AI analyzing employment applications, and there's credit report material in there, and that starts proliferating into other material, anything derived from the credit report information is a credit report and covered by the FCRA. And that's a big deal.
The FCRA is extremely challenging for a regular organization to comply with, because there are just a lot of details, because it's a credit report, it's personal information. So it's that kind of thing where if you launch AI into something, it can be iterated in all sorts of ways. It's another compliance problem, but one that you might not think about, because we don't think about credit reports as part of the employment process. Yeah.
Well, you know, we're running up on the top of the hour here. Why don't we take a look at our poll results and then I think I will suggest maybe we do another call in the near future and we'll take a look at those other five risks. And that way we have more time to sort of think it through and, you know, invite conversation to the topic.
So, Oscar, can you go ahead and show our poll results? The first question was: has your organization got an enterprise license for any Gen AI tools? And it's pretty evenly split, almost half and half, with about 10 percent not having any tools or saying they're prohibited by policy.
Well, any thoughts on this? Is this kind of what you would have expected at this point? I think it also depends on the industry you're in. There will be lots of organizations that just do not have the need or the justification for a corporate license for Gen AI, because their working environments don't provide the use cases for it. But I think 45 percent is already quite high, and those should be the people on the plane with you, John, those who use it for their daily business, ideally for efficiency and productivity. I think that's important.
Otherwise, I'm quite surprised that it's that high, because that means there has been a process. Not only this bring your own AI, but it has made its way through the organization. There has been a purchasing process, there's licensing, somebody can order an OpenAI license or whatever license for a supporting system. So this has come quite far, and I think these numbers are actually high.
Yeah, it's interesting because companies are starting to have Copilot and all these different AI-related tools, and so some of the licensing may be done through existing enterprise platforms that are starting to offer AI services. We didn't break that out as a separate kind of standalone license thing. But I definitely agree, everything is under development, and one of the key takeaways from this is that it's not a static situation.
The license terms are going to continue to change, and the risks for the organization, licensor and licensee risks, are going to continue to change in these places. So one of the things to be aware of, for the folks on this call, is to coordinate closely with the legal folks and the business folks in the organization to make sure that you're revisiting the terms of those licenses very regularly at this point. Because, as Matthias is saying, there are certain functions for which your exposure from using those tools will be less than for other functions in your enterprise.
And as those terms of those tool usages change, you want to be really aware of what you're taking on for risk and what kind of warranties and representations, if any, are being made with regard to the use of those tools. Okay. Let's take a quick look at the second one then. So which of these risks are most concerning to you?
G, intellectual property. That's kind of what I would have expected. I think some of these others, you know, a lot of us really haven't thought that much about before.
But, you know, data privacy, of course, is one. Ethics and bias.
But, you know, HR and culture, you know, I think this is potentially one of the largest risks around AI usage. You know, as we were saying, you know, it can lead to bias and discrimination in hiring. It can lead to negative outcomes for employees and customers. I think that's one of the reasons why we wanted to do this webinar and, you know, subsequent ones on the subject.
You know, we want to raise awareness of the things that, you know, people may not be thinking about right now. You know, what are the long-term risks?
So again, you know, like Scott said, we want to put this out here. We want to invite discussion and, you know, help facilitate, you know, best practices discussions. So that's really the main purpose for this session today. And Matthias and Scott, would you like to comment on what we see here?
Here, I'm surprised. The first poll result was something that I didn't expect, or I was a bit positively surprised by. But here, quality and accuracy is at zero. HR and culture, okay, that is something that normally not everybody has on their plate as a first concern. But reputation also being at zero? That was my starting point, with these Amazon self-published books squeezed out of ChatGPT. And that was a simple starting point. I think both reputation and quality and accuracy are very important aspects to consider in the future.
And on all aspects, if you provide a solution to the outside world, to your customers, to your employees, or if you use it for your own purposes, if quality is bad, you can skip it anyway; you don't have to use it. So here I'm a bit surprised, but as you said, John, I think this webinar is a test drive. We are three guys here talking about this topic and trying to involve the audience, to get their feedback, questions, input, expertise, best practices. I think that is something we could monitor over time. Does this change? Will this result change?
And can we really get to a bigger hive mind that is not an AI that can help us in creating proper rules and an acceptable use policy for that? So I'm really looking forward to that. I hope that this is as successful as I hope it will be. And the discussion is reflected here.
So yeah, this is really interesting. But what are your thoughts, John?
Yeah, I think you make a good point about reputation and quality and accuracy. We need to shed some light on what the issues are.
I think, like I said, there have been some things that have made the news. Maybe we need to amplify that a bit to make sure that everyone understands that these are pretty significant risks.
Scott, your thoughts on this, for closing here? Yeah, you know, it's interesting, because in the poll we gave them a choice; I think they could choose one, or maybe several, I don't know how we set it up. But note that the things that were chosen are things for which there are measurements out there in the world: data security, compliance, things like that. And the things that were not chosen, there are no measurements for. We're all people, we're working at organizations, we've got things to do next Tuesday.
You know, there are a lot of aspirations for humans and all this stuff out there that we would all like. But the reality is we've got jobs to do, and the jobs to do are the jobs that get measured. What gets measured gets done. And if you look at the things people said they're concerned about, those are the things that are getting measured by regulations, by outside contracts, by other parties. And the things that weren't chosen are things that we're not yet measuring. That doesn't mean they're not important, but we don't have easy, handy measurements for them.
That's why data security has been a surrogate for privacy for so long. Data security is not equal to privacy. You can be totally data secure under GDPR and not be private, but you are compliant with GDPR. And so, as an organization, if you do that, you know you're good. But we need to do more. We need to do those things for which we don't yet have measurements, right? And that's part of the idea here: we can get out ahead of all those things by asking, look, what are the practices here?
Because if you think about it, when you work with trade associations, and all of you probably have some involvement with a trade association, what you do is what we call zebra stripes. Zebras developed stripes because a predator can't pick out one zebra in the pack of zebras. If we together develop practices and best practices in those areas where we don't have measurements, then if somebody gets in trouble and is taken to a court or an authority, they can say, look, everyone in the industry is doing this.
Please tell us what we can do better, because this is the best we've come up with. And that's pretty powerful. So in the absence of regulations and contracts, we've got each other. And that's part of the message here: let's make this a risk commons and co-manage the risk of these new emerging things. I've really enjoyed the discussion today; I hope it was helpful to folks. Amen. Those were good final words, really. I think that is where we're aiming.
Yeah, that was a great way to sum up. We still have five more issues to talk about; we didn't get through them today. I thought that might happen. But as you can see, these are links to email addresses, so if you've got questions, feel free to reach out to us. We will get another one of these sessions on the calendar sometime in the next several weeks and invite you to please come back and, this time, share some questions or comments with us. Let us know what you're thinking, and we'll look forward to the next one of these. So thanks to Matthias and Scott and to all who have joined us today.
And we will look forward to talking with you again soon. Thanks, everyone. Thank you all. Thank you. My pleasure.