Analyst Chat #107: From Log4j to Software Supply Chain Security

A new year, and 2022, like 2021, again begins with a look back at a far-reaching security incident. Cybersecurity Analyst Alexei Balaganski and Matthias take the topic of Log4j as an opportunity to look at code quality and cyber supply chain risk management. They also mention Mike Small's excellent blog post, which can be read here.

Welcome to this KuppingerCole Analyst Chat. I'm your host, my name is Matthias Reinwarth. I'm lead advisor and senior analyst with KuppingerCole Analysts. My guest today is Alexei Balaganski. He is a lead analyst with KuppingerCole covering mostly but not only cybersecurity. Hey Alexei, good to see you.
Good to see you too Matthias. And by the way, Happy New Year! This is our first episode in 2022, right?
Exactly, this is the first episode for 2022. And I have some kind of throwback feeling to last year, because we also started the early episodes of last year with a cybersecurity incident that took place in late December of 2020. Then we were talking about the SolarWinds incident, not necessarily the technical details, but what people could learn from it and where to start mitigating it, working on it. And I think we're in almost the same situation right now. Looking into 2022, we look back to December 2021, and we need to talk about an incident that took place in late December: the Log4j incident. Again, we don't want to go into too much technical detail, but nevertheless, what is it about, what has happened?
All right. Well, first of all, you are right, we don't have to go into the technical nitty-gritty, because it has been discussed probably a hundred thousand times already. But it was a really big incident, discovered, I believe, sometime in mid-December. And it's all about this Log4j framework, which is, simply put, an open source library widely used in Java application development for logging purposes. Basically, any time you need to write something into a log file, you would use this library. And here it has been discovered that this popular open source library has a massive security hole. I will not even call it a bug; it was a massive lapse of reasoning in designing the library in the first place, because essentially it allowed calling external URLs, loading external, unverified application code, and then executing it within the context of your application.
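The mechanism described above can be boiled down to a small sketch. A vulnerable Log4j version expands `${jndi:...}` lookup strings found inside logged, attacker-controlled input, fetching and executing remote code. The class and method names below are hypothetical, not part of any real API; this only illustrates the kind of pre-logging payload check that defenders scrambled to add, not a complete mitigation.

```java
import java.util.regex.Pattern;

// Hypothetical helper: flag untrusted input that looks like a Log4Shell
// JNDI lookup payload before it ever reaches a vulnerable logger.
public class JndiLookupCheck {
    // Matches the classic "${jndi:" lookup syntax. Note that obfuscated
    // variants such as ${${lower:j}ndi:...} are NOT covered here; real
    // scanners need far more robust parsing. Demonstration only.
    private static final Pattern JNDI_LOOKUP =
        Pattern.compile("\\$\\{\\s*jndi\\s*:", Pattern.CASE_INSENSITIVE);

    public static boolean looksLikeJndiPayload(String input) {
        return input != null && JNDI_LOOKUP.matcher(input).find();
    }
}
```

Such pattern filters were quickly bypassed in practice, which is why the only real fixes were upgrading the library or removing the lookup feature entirely.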
Log4j is the de facto standard for logging in Java applications, and Java applications are still the de facto standard for a lot of enterprise development projects. So probably millions, if not hundreds of millions, of servers and systems are potentially affected. And of course, everyone immediately wanted to know: am I affected? Is my neighbor affected? Is there an exploit somewhere? For a few days the whole internet went bananas; everyone was trying to scan the whole internet looking for this vulnerability. At some point, I remember, security researchers were generating more malicious-looking activity in that regard than the actual hackers. And in the end it was confirmed that yes, Log4j has an absolutely massive, easily exploitable security hole. Basically every application which uses it, directly or indirectly, can be exploited, and there is no protection against it other than deploying additional security controls, like a web application firewall, for example, or patching. And of course, patching millions of enterprise systems around the world is a huge undertaking. Especially so shortly before Christmas and the holiday season, it was a major headache for lots and lots of security and IT people around the world.
So if we ignore this headless chicken mode many organizations have been in at the beginning of this crisis, really trying to find out whether they are a target or a victim of this issue: as a practitioner and as an analyst, what would be your recommendations on where to start and what to do when it comes to mitigating the threat?
Well, Matthias, I believe we agreed not to discuss the technical stuff, right? Because I'm pretty sure if you Google Log4j mitigation, you will find millions of results, and quite a lot of them will be very thorough and very useful, so we don't have to repeat them. If you are affected, you are probably already doing something anyway; it's been like three weeks already. And if you are not doing anything yet, you have probably already been hacked multiple times. But I guess we should focus on the slightly more positive, if I may say so, consequences of this incident. Because you're absolutely right: a year ago, the same thing happened with SolarWinds. It was also a huge, massive-scale cybersecurity incident, which potentially affected tens, if not hundreds, of thousands of customers.
And we also discussed the same thing then: what do we do next time? How do we prepare ourselves? How do we basically prevent this from happening ever again? And it did not help, right? It happened again, and it happened at a much larger scale. But at least there is one silver lining behind this dark cloud: this time, everyone's affected. Because last time we could just relax and watch other people running around screaming, because we weren't using any software from SolarWinds, so we were safe, supposedly. This time nobody's safe, because, as I said, millions of systems are affected. It doesn't just relate to enterprise servers. There are a lot of Java applications in every public cloud and in a lot of embedded systems, which includes industrial devices and even home devices like routers; almost every internet routing device at home probably runs some Java code.
So we might be affected as well. This time everyone is involved, and everyone is finally sitting up and taking notice; everyone has to do something. Which is, to me, the great thing here: finally we have a common understanding of what is happening, and everyone has to work on fixing it. And this includes you and me and everyone else, not just all the IT people around the world, right? It's also great to see some government agencies around the world responding, and not just waving a fist in your direction, but actually giving explicit recommendations and technical guidance on how to fix this problem. For example, the FTC in the United States is basically giving out a warning that if you don't fix this problem, there will be financial fines. So compliance is stepping in as well. Let's hope that the third time's the charm, and if a comparable vulnerability is discovered somewhere else, it won't affect as many companies.
Right. When you talk about lessons learned from this: we are operating infrastructure for KuppingerCole Analysts as well, and you are also involved in the technical parts of that. Not going into the details again, but how much Java did you find in our back office and our systems? Were there really immediate measures to be taken, or were we safe?
Well, I have to say, this time we got very lucky indeed, because we did have a few Java applications running in our systems until earlier last year. But we had made the executive decision, around mid-2021, to decommission them all, way in advance of this discovery. I believe one of them was like 15 years old and had not been updated for years, so it was just common sense to finally decommission it and migrate all its functionality to a cloud service. Because if a cloud service is affected, it's no longer your job to patch it, which is great. So in the end, of course, we had to run some scanners, we had to check our infrastructure, but we found just zero Java code, period, not just no Log4j.
Okay, good to hear that. But if we ignore the Java side of things: we are talking about open source software that is deployed in various scenarios at different levels of an infrastructure, as you've described, so it is built into applications directly, but also used by components that are in turn used by other applications, and so on and so forth. So if we ignore the Java part, could that happen next week with any other not so well maintained, not well documented, or not well understood component in any other open source framework? And if so, what then?
Yes, you're absolutely right. And ignoring the Java part, we should probably also ignore the open source part, because it can happen with any code. But you're absolutely right about open source, or rather about this completely false understanding of what open and free software actually means, because many people still treat so-called free software as a synonym for free as in beer, right? Software you don't have to pay for. And of course, it should be clearly understandable to everyone that free cheese is usually found in the mousetrap. You cannot just take something for free without giving anything back, like, for example, supporting the development of that open source project, or helping those poor, underpaid, and understaffed maintainers fix the bugs. Otherwise, you will have the same problem over and over again.
And actually, open source is much better in that regard: you can fix it yourself, if you have the capability. If next time a similar-scale problem is found in a commercial piece of software, which you cannot fix yourself, then you will have absolutely no other choice but to wait until the vendor fixes the problem. But with Log4j specifically, it was really almost tragic to observe how few people were actually working on this project at the time, two or five, while a huge queue of millions of developers around the world was basically standing around cheering: hey, give us another patch. And I believe in three weeks we had like five different versions released, and it's still not over yet; new, slightly smaller, but still unpleasant vulnerabilities keep being found. And it's not just Log4j: I believe even the relatively small-scale application I mentioned earlier used something like 50 different open source libraries. So what about the other 49?
Right. So the focus should be on code quality and code quality assurance, no matter where the code comes from. Really making sure that you understand what you're running, where it comes from, and how the desired level of security is maintained. You need to have that assured, either by yourself if it's open source, or by the vendor if it's commercial software. In the end, you do not only need to trust, but also to control, manage, and understand what's really going on.
Yeah, I guess the biggest takeaway here is that software supply chain security is not something which one party can fix. You cannot just trust that vendors themselves will fix it. You also cannot rely on open source alone, at least not until we actually put a lot of money into supporting open source development. And of course, we cannot just expect that government regulations alone will fix it. So yes, what the FTC does now in the US, and what other similar agencies around the world are putting out, is great. Compliance does not exist to make your life miserable; it should exist to make your life easier, and many still don't understand that. Only this kind of combined, collective effort can actually address challenges like this, right? Because again, if you think about the scale of this Log4j problem: imagine that tomorrow you wake up and learn that half of all the doors around the world can be unlocked just by yelling at them, which is essentially what happened with Log4Shell. And you cannot do anything, because the only way to fix it is to replace all the locks, and you just don't have enough locks around.
You have to focus on how, next time this happens, you at least come prepared. And I think this is an opportunity to mention a great blog post which our colleague Mike Small wrote sometime earlier, and which you can find on our website. He wasn't focusing on how to fix the problem, because again, this problem is just one symptom. Instead, he asks lots of questions: How well were you prepared? Could you identify risks in time? Do you already have a software dependency catalog, and so on? Basically, these are the right questions to ask to check how well you are prepared for the next Log4j.
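Once you have a software dependency catalog of the kind those questions point at, checking it against an advisory becomes a mechanical step. The sketch below, with hypothetical class and method names, flags Log4j 2.x versions released before 2.17.1 (the release Apache shipped after the last of the December 2021 CVEs); in practice you would consult the current Apache security advisory rather than hardcoding version thresholds like this.

```java
// Hypothetical inventory check: given a Log4j version string from your
// dependency catalog, decide whether it falls in the affected 2.x range.
public class Log4jVersionCheck {
    public static boolean isVulnerable(String version) {
        String[] parts = version.split("\\.");
        int major = Integer.parseInt(parts[0]);
        // Strip suffixes like "-beta9" before parsing minor/patch digits.
        int minor = parts.length > 1 ? Integer.parseInt(parts[1].replaceAll("\\D.*$", "")) : 0;
        int patch = parts.length > 2 ? Integer.parseInt(parts[2].replaceAll("\\D.*$", "")) : 0;
        if (major != 2) return false;      // Log4j 1.x has separate, different issues
        if (minor > 17) return false;      // 2.18+ released after the fixes
        if (minor == 17) return patch < 1; // 2.17.0 still had CVE-2021-44832
        return true;                       // everything below 2.17 is affected
    }
}
```

The real work, of course, is having the catalog in the first place; tools that generate a software bill of materials automate exactly this kind of lookup.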
Right. As you said, that's an excellent blog post that really helps you in understanding your own lessons learned, and lessons still to be learned, from a security perspective, from a compliance perspective, and from the perspective of mapping your overall software landscape. But on the other hand, I think it's really important, especially for larger organizations which just consume code no matter where it comes from, to understand that they should not take security for granted, and that there is something to contribute back, to give back, to spend money on, to improve security. If it is community software, then give money, time, and effort back to the community, which ideally also leads to an improved cybersecurity posture globally. So it's the management part, as Mike has written, but it's also the software development and software maintenance part: really contributing to improve cybersecurity overall. Because improving Log4j, as you mentioned, would have helped with all those doors that could suddenly be opened; contributing here really helps in improving the overall infrastructure as a whole. Any other suggestions from your perspective that you would like to highlight when looking at this issue, the lessons learned, and how to continue from here?
Again, I think we should emphasize again and again: it's not about technology, it's not even about cybersecurity. In the end, it's always a matter of trust, right? And trust is always mutual. If you expect someone to protect you from a threat for free, it just doesn't work like that; we live in a capitalist world. You cannot expect that millions of companies around the world that make billions of dollars in whatever IT-related business depend critically on two guys doing unpaid maintenance of some obscure open source software library. Life isn't fair, and this has to be changed somehow, and changed profoundly. You have companies like Red Hat, for example, or some other open-source-based companies, which actually turn that philosophy into a pretty profitable business. This is the way, right? Until we somehow fix this inequality in software development, nothing will change. It's an ethical problem just as much as, if not even more than, a technical problem.
Absolutely. That's a great summary and also a great call to action. We will link to the blog post that Mike has provided, because it really covers all the other aspects you mentioned very thoroughly, and it also gives the audience some material at hand to challenge themselves. For the time being, thank you very much, Alexei, for joining me for this first podcast episode in 2022. I'm looking forward to having you in further episodes very soon.
Thanks for having me, and let's hope that the rest of the year will be less eventful than its beginning.
At least with fewer incidents, right. Okay, thank you very much. Bye-bye.
