Welcome, everybody, to the Investigation and Threat Hunting with XDR workshop here at the Cybersecurity Leadership Summit. My name is Bastian, I'm a Palo Alto Networks Cortex systems engineer, and I'll be your host for this workshop, which is both virtual and onsite — kind of hybrid. For the first time I'm not on site in Berlin, unfortunately — I'm just presenting from my home office. We've got something special for you today: you'll spend most of the time working with Cortex XDR, our extended detection and response solution, yourself, in a lab environment — with no strings attached. So you can try out the product based on some real scenarios, doing investigations and threat hunting, and only the smallest part of this session and the next one after the break is going to be me talking. There will be a lot of silence, people working on the lab, and we'll be here handling questions or any technical issues. So without further ado — Christopher, do you want to say something, introduce yourself, or should I just kick off?
You can just kick off. Thank you. — All right, great. The high-level agenda for the day: I'm going to give a brief introduction to how we define threat hunting for our workshop, and an even briefer introduction to Cortex XDR, the product. Then we will start to log on to the hands-on workshop. I hope everybody here that's onsite has got their laptop ready, browser open, WiFi working — if not, check it out, make sure you're online and able to access the environment later. And then, as I said, most of the afternoon will be spent on the workshop part, with little summaries and questions in between. Maybe we'll do some demos, hopefully a fun exercise, and you'll go away knowing a lot more about Cortex XDR and what it can actually do. From a timing perspective, I've tried to break it down into a schedule.
So you know what to expect: we'll spend the first part — that's about, yeah, less than an hour left now — on the welcome, the introductions, the presentations. Then we'll start with the first very, very simple exercises, which are mostly about logging in, making sure that everybody's logged in, and orienting yourself in the tool. If we get through that quickly, we'll immediately start with the next part, but there is a break at two o'clock — 15 minutes, definitely. After the break, we will kick off with the investigation activities, where you'll be working with a guide that we're going to provide, within an XDR environment preconfigured with data, incidents, threats, and so on. And that's going to be a lot of fun: you get to work as if you were a SOC analyst already using XDR, but it's a very descriptive guide, so we'll lead you from step to step, and there are no prerequisites — you don't need to be an expert in the product or the topic itself to enjoy this workshop, hopefully.
And we'll wrap up towards four o'clock, or thereabouts. Right — what is threat hunting? Before we answer that directly, maybe let's first answer the question: why threat hunting? Why would I even think about doing something like this, whatever it is? The reason is that the threat landscape organizations are dealing with — and have to deal with on a daily basis — has become increasingly complex. On the left side of this spectrum of threats that we try to paint here are known threats, simple threats, which are still happening all the time. Then there are advanced threats, evasive malware, zero-day attacks, which are continuously becoming harder and harder to detect. But overall, we have the technology, so to speak, to prevent most attacks that happen on a daily basis — with the right tools in place. I'm not saying everybody has the right tools already, but there's technology to prevent a huge chunk of attacks before they become really, really damaging and high risk.
However, there's always going to be a percentage — which we'll try to minimize; I mean, that's our job as security professionals — of targeted attacks, attacks originating from insiders, threat actors acting low and slow, meaning they'll move very, very slowly inside networks once they've got an initial foothold, that will be really hard to prevent in the first place. The best chance you have is detecting them at some point in the attack kill chain — hopefully very early, so you can still isolate and contain the damage. But anything can happen: as we've seen recently in the news, no organization is immune from advanced threat actors these days, and not all threats can even be detected using automated tools. And this is exactly why we need threat hunting. Now, we did a survey a while ago trying to get behind the question: how important is this 1% of attacks, even — isn't it enough to have 99% prevented? The simple answer is of course no, but the question here was how many of your investigations — and also threats detected in the organization — originate from a threat hunting exercise, from doing threat hunting. And the answer was quite surprising: three quarters of companies that responded to the survey said 25% or more of their investigations come from threat hunting. These are not investigations spawned from alerts by some security tool that's generating way too many alerts — probably the more interesting investigations originate from manual or partially automated threat hunt team exercises.
That's a big, big percentage. Normally, threat hunting should be a complementary exercise — something you also do on top of, you know, buying good technology, good policies, putting it all together with sound processes — so it's a surprisingly high number. It's an important thing. So let's figure out what it all is. Oh — if you could all mute yourselves, if you weren't muted automatically by the meeting, that'd be very nice. Thank you.
So, coming to definitions, I put two up here. One is from Wikipedia: they define threat hunting as a process which is about proactively and iteratively searching through networks to detect and isolate threats that evade existing security solutions. So, two important components: it's proactive — it's not a reactive, incident-response type of service or activity — and it involves somehow searching and detecting. It's a detection process, for things that evaded existing security solutions. So you're really looking at the 1% that dropped off the radar of whatever you have in your environment. SANS, which is a very well-regarded training organization, has quite a similar definition — again proactive and iterative, so you do it again and again and again — but they're focusing more on searching out, identifying and understanding adversaries. So they have this idea that there's an adversary, a threat actor, that has entered somebody's network, that you then want to search out, identify and understand. We have another one — I mean, they're all good definitions, but for this workshop the best description of what we will be doing, what you'll all be doing, is: we'll search across data and assets for notable artifacts or behaviors of malicious intent. And that goes with or without a lead. We're not as strict — we don't say that an alert can't initiate a threat hunt. An alert can be a leading indicator that you then take and perform further investigations on.
And there are several flavors of threat hunting you will find if you Googled it now. There's quantitative, data-driven threat hunting, and there's also technique-based threat hunting — meaning you come in at different levels of the detection lifecycle. And there are also simple threat hunts. A simple threat hunt would be, for example, looking for indicators of compromise. The easiest thing to do is take a bunch of hashes, IP addresses, domains — based on news, threat feeds, blog articles you read, maybe from Unit 42, our own researchers, shameless plug, sorry — and then you search through your environment, using the tools you have, to see whether you have any hits. It's a very simple thing, and you can get some good value out of it, since many of the feeds and the inputs are completely vendor agnostic. So you don't rely on your security vendor to give you all the right definitions and signatures.
You just take something from open source threat intelligence. Then there's more advanced, technique-based hunting, which would be all about searching for attack behaviors. You would be focusing on tactics, techniques, and procedures — there's an acronym for this, of course: TTPs — which are higher-level, more precise rules describing a behavior, and not just an IP, which can change from today to tomorrow, or within seconds, actually. And then, lastly, the most advanced form of hunting is probably hypothesis-driven. Let's say you have a huge pool of data, of logs from various different sources; you try to understand what's normal and find outliers. Maybe you think you know your web traffic should have certain properties and shouldn't exhibit this and that — then you go looking for the outliers. So this is a highly data-driven approach — you could even say a big-data-driven approach.
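The simple IOC sweep described above can be sketched in a few lines. This is only an illustration — the feed format, file layout and field names here are made up for the example, not taken from any specific product or threat feed:

```python
import csv
import json

def load_iocs(feed_path):
    """Load a simple IOC feed: one CSV row per indicator (type, value).

    The feed format here is hypothetical — real-world feeds (STIX, MISP,
    plain text lists) vary widely.
    """
    iocs = {"hash": set(), "ip": set(), "domain": set()}
    with open(feed_path, newline="") as f:
        for ioc_type, value in csv.reader(f):
            iocs[ioc_type].add(value.strip().lower())
    return iocs

def sweep(log_path, iocs):
    """Scan JSON-lines logs for any field value matching a known IOC."""
    hits = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            for field, value in event.items():
                v = str(value).lower()
                if v in iocs["hash"] or v in iocs["ip"] or v in iocs["domain"]:
                    hits.append((field, v, event))
    return hits
```

The value, as mentioned, is that the inputs are vendor agnostic: any feed of hashes, IPs or domains can be swept against whatever logs you already collect.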
I talked a little about artifacts and TTPs. So here's one very, very important picture that everybody should know already — and if you don't, this is going to help you out a lot in understanding what to hunt for, what's valuable, and what's actually a good use of your time. Since we're talking about a manual process, there are people doing it most of the time — valuable security employees, SOC analysts, threat hunters, which is a very, very specific skill, and not everybody can do it. This is the Pyramid of Pain. It was introduced by David Bianco — the link's right here. It's not new, but it's still the best description of what sort of things, in general, we could look for to try to disrupt an attacker, and how much it's going to hurt them. At the lower level, at the bottom, we find hash values.
These are, let's say, 100% exact representations of a certain file: easy to use, easy to search for with the right tools, easy to block — but also, unfortunately, completely trivial to circumvent. If your security were based only on identifying malicious hashes, you would fail miserably, because attackers have ways to get new hashes for their malware within microseconds. Moving up the pyramid, things get a little bit harder, step by step and level by level, to circumvent — and actually to hide from detection or threat hunters. If we look at the middle, for example: domain names. It's still not terribly hard to get a domain, but it would have to be paid for, you'd have to register it. So it's not the same as just changing a file by flipping a bit, which costs nothing — you'd have to do something, and know how to register and manage domains.
I mean, still not terribly difficult — there are some so-called bulletproof hosters which will register whatever you want, almost anonymously, so there's no really good protection in the domain name system — but it's slightly harder. Once we get up to the higher levels, there are things that attackers really wouldn't like to change. If they've got a specific tool set that they know how to use really well, it becomes very painful to move to another one. Let's say they've got a remote access Trojan using some Windows built-in system tools like PowerShell or Visual Basic script — swapping all of that out will cost them time and money. So this is clearly a target for us: trying to disrupt as high as possible in the Pyramid of Pain. The levels, as they're called, go from trivial to really tough, and we'll be focusing mostly on TTPs at the very top.
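To make the "flipping a bit" point concrete: changing a single byte of a file produces a completely different hash, which is exactly why hash-based blocking alone sits at the trivial end of the pyramid. A quick sketch (the byte strings here just stand in for two near-identical malware samples):

```python
import hashlib

# Two "samples" that differ by a single byte (stand-ins for real files).
original = b"MZ\x90\x00...payload..."
mutated = b"MZ\x90\x01...payload..."  # one bit flipped in the fourth byte

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(mutated).hexdigest()

# The files are almost identical, but the hashes share nothing useful:
# a blocklist containing h1 says nothing at all about h2.
print(h1 == h2)  # False
```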
Because if an attacker's, let's say, modus operandi is spear phishing with a malicious PDF attachment, and we prevent them from doing it — we make it impossible to do that — they really have to search for very different ways to penetrate networks. And that's going to be the best use of our time: playing in the higher layers of the Pyramid of Pain, not so much hunting down hash values, IPs and domains, right? Much more could be said about threat hunting — it's a very, very interesting field, and certainly the coolest job title, threat hunter; who wouldn't want to be one? But that's going to be enough for now. Now, the second part of the introduction: what is Cortex XDR, this tool that you will all be using for the next almost two and a half hours? Quite simply, it's a Palo Alto Networks product — the platform for extended detection and response.
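Hunting at the TTP level means describing behavior rather than artifacts. As an illustration — this is a generic sketch over hypothetical process events, not Cortex XDR's actual rule syntax — a behavioral rule for "document reader spawns a shell" (the classic malicious-attachment pattern) might look like:

```python
# Hedged sketch of a behavioral (TTP-level) detection over process events.
# The event shape and the lists of process names are assumptions for the
# example, not taken from any real product.

READERS = {"acrord32.exe", "winword.exe", "excel.exe"}
SHELLS = {"powershell.exe", "cmd.exe", "wscript.exe"}

def suspicious_spawns(events):
    """Flag process-start events where a document reader launches a shell.

    Each event is a dict with 'parent_name' and 'child_name'.
    """
    return [
        e for e in events
        if e["parent_name"].lower() in READERS
        and e["child_name"].lower() in SHELLS
    ]
```

Note how this survives hash and infrastructure changes: the attacker can rebuild the payload and move domains all day, but as long as the *behavior* is a PDF reader spawning PowerShell, the rule still fires.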
We're coming from this point where, in the industry, things aren't working out — and actually the situation seems to have deteriorated. Companies are unable to detect all attacks, and that should be the goal: detect all attacks, if not prevent them where you can. Many, many tools are on the market; people are using them, installing them, putting them together in a SOC. But that doesn't mean all the data is actually used effectively. Very often these are used as point solutions, solving one particular small problem — very well, sometimes — but they're not interconnected; there's no context between siloed point solutions. And there's another trend which adds a lot of fuel to this fire, which is quite frankly burning out many cybersecurity operations teams: there are too many alerts to handle, too many tasks, too many things that need to be done manually 50 times a day, if not more.
So that means you're kind of not getting out of this vicious circle: you're not detecting the attacks, and once you do detect one, it's really hard to investigate, trying to piece together this puzzle of what a threat actor did. This creates a really bad situation, and there's no shortage of products on the market or approaches in the industry — the cybersecurity industry is probably one of the most creative in inventing a new acronym or a new hot thing you need. For example, endpoint protection and EDR solutions. These are sometimes very powerful tools providing some really deep analytics and detection at one level, at one piece of the infrastructure — namely the endpoint — but they have this problem of not enough context and coverage for the rest of the enterprise. There's a world outside endpoints — laptops, workstations, and servers — that also needs to be taken into consideration.
On the other end, if we just try to put together all the data, taking a SIEM approach, we'll probably have a lot of data in one tool, but very often not a deep understanding of what's going on. It's going to be tables, indexes, correlation rules — and trying to build very efficient detection and analytics on top of that, when you really don't know and don't own the sensors of your data, is difficult. So what we wanted to achieve with XDR is deep analysis with the context built in, detection analytics and algorithms that enable threat hunting, and at the same time integration points for automation — because remember, problem number three was too many manual, repetitive tasks that the same SOC analysts have to handle, who should be out there threat hunting.
So essentially, XDR right now — at almost the end of 2021 — is a very established market category, and there would be dozens of vendors claiming to have an XDR solution. Cortex XDR was essentially the first to market, and we defined this as extended detection and response: prevention, detection and response capabilities across endpoint, network and cloud. This includes a number of things you can buy individually — dozens of vendors offer endpoint protection and EDR solutions, sometimes combined, sometimes not; there's network detection and response; user behavior analytics, you know, UEBA; and the next one to join this list is probably some sort of cloud threat detection or cloud anomaly detection solution. So that's a lot of point solutions. We try to cover them all — I think that's the goal and the essence of the XDR platform. In this hands-on workshop you'll get to work with many of these things, and you'll see how they seamlessly fit together. Hopefully, by the end, this makes sense.
The architecture behind it, on a very, very high level, is a cloud-based service. It's SaaS-based: Palo Alto Networks is running this data lake and XDR infrastructure in multiple regions across the globe — EU (Germany), US, India, UK — where we provide this cloud-based service consuming information from basically any log source you could provide. Now, there are some places where we natively have the right tools and sensors in the portfolio, for example a next-generation firewall — now for the 10th time in a row a Gartner Magic Quadrant leader — which is not just a firewall with many, many functions, but also a sensor. It can deliver very, very detailed telemetry on the traffic that you still allow, right? It doesn't stop at blocking. To power a detection engine we need detection, meaning we need logs — so our own firewall is the best possible sensor for network data.
On the endpoint, there's an XDR endpoint agent doing prevention and detection — so, again, delivering telemetry to the data lake. There are cloud services and various integrations with our own and third-party tools, and quite recently we released XDR 3.0, which means any third-party data can be integrated, in principle. Is it our goal to integrate any third-party data at all costs? Not by any means — only where it makes sense, where it will provide valuable context, or where you need that extra level of correlation between a certain tool that performs its function in your environment and those native log sources. What happens in the data lake is then something called stitching. The difference between a traditional log management solution and XDR is that we don't just store these logs in different data sets and tables in the cloud and then try to work some magic across them.
The data is integrated, de-duplicated and stitched at the ingestion point. And what stitching means is very simple to explain using this little example here. Let's suppose I'm browsing the web and accessing a website — in this screenshot it's espn.com, but it could be any website, it doesn't matter. You will find traces of it in data and logs in potentially many different places. One could be network data: the firewall sees this traffic as it's going through — if I'm in the office, my perimeter firewall; if I'm working from home, my VPN infrastructure, whatever — so it generates a network log, and that will include information on the application. Maybe we're SSL-decrypting the traffic, so we're really looking into my web surfing activities, and it will categorize the URL. So there's some really good information in this log file about what happened on the network. An endpoint agent, on the other hand, would see the endpoint context.
It will see, for example, what process made this web request, and what came before it: how was the browser launched, what else did the user do at the same time, were there any files downloaded or uploaded, registry keys set? So it provides that whole endpoint context. Each on its own can be an interesting log source to look into and do detection on, but if you do it only on one layer, you'll miss all the right connections. So what we do in XDR — and this is the stitching I talked about — is combine that data into one unified story. If you really want to break it down and make it really simple: it's one log line, and not two or potentially even more. And this story becomes our input for everything: for threat detection; for visibility, giving you the means to look into the data yourself and hunt; and for detection, from rule-based algorithms all the way up to machine-learning-based algorithms, which need as much data as possible — and rich data.
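A toy version of that stitching idea — joining a firewall log line and an endpoint event into one record — might look like this. The field names and the join key (same host and destination, timestamps within a small window) are assumptions made for the illustration, not the actual Cortex XDR implementation:

```python
def stitch(network_logs, endpoint_events, window_seconds=5):
    """Merge firewall and endpoint records describing the same connection.

    Records match when they share the same host and destination IP and
    their timestamps are within `window_seconds` of each other. Matched
    pairs become one unified "story" (one log line instead of two);
    unmatched records pass through as-is.
    """
    stories = []
    unmatched = list(endpoint_events)
    for net in network_logs:
        match = next(
            (ep for ep in unmatched
             if ep["host"] == net["host"]
             and ep["dst_ip"] == net["dst_ip"]
             and abs(ep["ts"] - net["ts"]) <= window_seconds),
            None,
        )
        if match:
            unmatched.remove(match)
            # Network fields (app, URL category) + endpoint context (process).
            stories.append({**net, **match})
        else:
            stories.append(net)
    stories.extend(unmatched)
    return stories
```

So a firewall record carrying the application and URL category, and an endpoint record carrying the process that caused the traffic, end up as a single record — which is the input the detection and hunting layers then work on.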
So there's a master plan behind all this, which includes not just Cortex XDR but also other pieces of this platform that we call the Cortex platform. And this is about building what you could call a flywheel for the SOC — meaning something that helps you analyze threats, detect and respond to them in more automated ways in the future. It's an aspiration; it's not something you can just buy, plug in, and you'll have this magical flywheel. It's more of a vision, and we're supporting it with our solutions. It has essentially four steps, and it's a cycle, as you can see. The first step is sensing — and in sensing we'd include prevention as well. So on all those different layers which are important in a security infrastructure and architecture, we gather telemetry. And there's actually a fourth layer here, which you haven't seen before: identity.
Because that's becoming more and more critical to really control and have good visibility into, next to the traditional layers — host, network and cloud. So we've collected telemetry and built our stories, as I've just shown you. There can be stitching between more than two data sources — it could be any number. We want to build those integrated stories to understand activities better, and that includes Palo Alto Networks data and third-party data. So let's be clear: we're not saying we can solve any security problem in the world with this platform without taking somebody else's data. It may absolutely make sense to stitch it.
We want to move the analysis from largely rule-based, manually created correlation rules to ML-based detection wherever possible. Again, we have the technology to build intelligent anomaly detection for the most critical parts of attack life cycles — like post-exploitation, lateral movement, data exfiltration — and those models are going to be continuously refined, powered by these stories. And once we've detected a threat, analyzed it, investigated it — as you will, when I finish my presentation — we move to automation. There are two layers of automation we'd differentiate between. One is automated root cause analysis, which is actually a feature of Cortex XDR. The second layer is about automated workflows. Let's say I've got this incident — data was sensed, integrated, analyzed, and an incident pops out. How can I build playbooks to take this and get rid of some of those manual tasks? Like, is it really necessary that for every incident involving an email, someone has to manually analyze the email and its attachments, check IOCs and cross-reference them with threat feeds? Absolutely not. There are machines that can do that today, and orchestration solutions that can help save the valuable time of these investigators — so they can spend more time on the fun stuff, like threat hunting.
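To give a flavor of what anomaly detection means at its very simplest — this is a generic statistical sketch, not the actual models in the product — you can baseline a per-host metric such as bytes uploaded per day and flag hosts that deviate strongly from their own history, as a starting point for a data-exfiltration hunt:

```python
import statistics

def exfil_outliers(daily_uploads, today, z_threshold=3.0):
    """Flag hosts whose upload volume today is far above their baseline.

    daily_uploads: {host: [bytes_per_day, ...]} historical values
    today:         {host: bytes_uploaded_today}
    Returns (host, z_score) pairs exceeding the threshold — hunt
    candidates, not verdicts.
    """
    flagged = []
    for host, history in daily_uploads.items():
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        z = (today.get(host, 0) - mean) / stdev
        if z > z_threshold:
            flagged.append((host, round(z, 1)))
    return flagged
```

Real implementations are far more sophisticated (seasonality, peer groups, many features at once), but the shape is the same: learn what's normal, then surface the outliers for a human to investigate.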
If you put all that together — don't worry, I'm not going to read all of this — it's a very powerful platform. This approach of building this flywheel for the SOC results in some key capabilities that any company will need in the future: secure endpoints, for prevention, but also full visibility and detection for the things you couldn't prevent, with a direct tie into the investigation part. We don't need systems that raise their hand when there's an alert, and then you take that and investigate it in some other tool, or try to look for logs in various different places. No — we need investigation directly connected to detection, and the same goes for hunting. The same data that powers all of this is really valuable — it's like gold dust waiting to be discovered and searched for hidden threats. And then lastly, to close the loop: coordinate the response across enforcement points. Not surprisingly, most of the sensors in the architecture I just showed you at a high level are also enforcers — next-generation firewalls and endpoint agents can be ideally used to drive remediation, containment, isolation, and so forth.
Right. So I hope you're still with us — I can't see how many people are still in the room, but this Teams meeting is full. So let's get to the hands-on workshop: from now on, way less talking from my side and way more working and clicking on your end. This presentation contains the links — I was going to post them into the chat, but there's no chat function in the Teams meeting for me. Christopher, what's the best way I can get credentials to everybody on the Teams meeting, so they don't have to type stuff in? — I think it makes sense if you send it to me via mail, and then I can distribute it in the chat. — Yeah, let's do that. I will email it to you. I don't have the chat function, but hopefully that's quick.
And I would really advise: don't try to type it in by hand — the URLs are way too long, and the lab guide link is impossible to type out. So give us a second. A couple of caveats: pay attention to the password. If you just double-click it, it will not copy the ampersand — so if you're cutting and pasting, make sure you grab the ampersand at the end, so we don't get any access-denied errors. And again, it's highly recommended to use Chrome, and in incognito mode, in case you're a Palo Alto customer, so this won't get commingled with your existing accounts and logins, cookies and stuff like that. And then take your time logging in. If it takes a minute or two, that's probably because 30 people or so are logging on all at the same time — so give it a minute, try to reload. But if you get a password warning, please, please, please double-check you've got the right password: we don't want to lock out this shared account. Thank you.
So I've shared all the links in the chat, so all of you should be able to copy and paste them right now. And yeah — keep the lab guide open; it's a PDF, and this link is not malicious. If you have multiple monitors, it will make sense to put the lab guide on another monitor, another screen, or keep it behind the XDR interface. You will want some screen space, because we'll look at some complex incidents, and you need the screen space for XDR.
Now, since I don't imagine anyone will type this down, a couple of notes before you start. There will be some steps here in the guide involving a feature called Live Terminal — you'll need to ignore those. They won't work, because you would have had to set up a virtual machine that you can log on to. But no worries: there are screenshots and a description of what this would look like if you had Live Terminal. Screenshots in the guide might look slightly different from what you're seeing today, simply because these are replicated environments and the data is a bit older. So a good point to keep in mind: whenever there's a time filter, delete it, because otherwise you might only be seeing the last 24 hours, seven days or 30 days of data. And please don't make any changes. You can't really destroy anything, but it will change the experience for the other participants — we're all logged in as the same person, so closing an incident might be confusing for others. And if you have questions, ask them in the chat, and I guess Christopher will read them out to me, since I don't see them.
And I suspect people are trying to log in and getting started. So, as I said, this first exercise is mainly to make sure everybody's logged in and you've organized yourselves with the guide, and there are some small exercises. This is mainly the warm-up: activity zero and activity one. Activity zero is just logging in, and activity one is orienting yourself a little bit in the tool, in this web interface of XDR — choose whether you want to use light mode or dark mode. Then either take a break at two or immediately start investigating in the order of the guide: activity two would be your first incident to look at. And if you want to get a head start, by no means will we slow you down — there are definitely enough activities to keep you working with the tool until four o'clock or even longer. The environment will be up a little bit longer today, so we're not going to throw everybody out at four o'clock. I don't know if you need to leave the room in Berlin — maybe, could be — but this lab environment is of course in the cloud and will still be available throughout the day.
Christopher, I found a way to participate in the chat, so I can now answer the questions directly, just so you know. And I'm going to take the first one live, which is in the chat, from Alexei. So Alexei is asking: when we look at the incidents by severity, the blue buttons — PAN NGFW, XDR Analytics and XDR BIOC — refer to what? The answer is: these are different methods of detection, and you're right, sometimes also the tool that discovered it. The principle of XDR is that within an incident we group related alerts together, no matter where they originate from. So PAN NGFW is a firewall prevention alert; XDR Analytics means a behavioral anomaly, detected based on more than just a single event — like something uncommon, rare, unseen in the organization; and XDR BIOC is a detection technique called a behavioral indicator of compromise. There will be a little bit more text on each of these if you look into the guide and read through it. I'll take another question that came up: is it recommended to manually manage incident scores?
No, I wouldn't. It's possible, but the real idea is that you'd create some rules to automatically apply scores to incidents. That's not going to be part of this lab, since we're focusing on just a few cases and diving really deep into investigation and hunting, but there's a rule set behind it — I'm not sure if you can actually see it here, I don't think so — where you'd define: if this happens, this type of event, this group of endpoints is affected, this user — then the score goes up. So it's a means of driving some prioritization beyond just low, medium, high, which is really not enough in many cases. All right, it's two o'clock, so if you'd like to take a break, we'll be offline for 10–15 minutes and then come back. Whoever's back at quarter past two — hopefully everybody's logged in and oriented themselves by then, so you can really start, if you haven't already, investigating incident 26, which goes by the name of "Behavioral Threat" and some other alerts, et cetera, et cetera. And that will be your introduction to XDR. So yeah, see you back in 15 minutes.
Okay, we're back. Hope everybody enjoyed the break and got some coffee, because you need to be all caffeinated for the next part of this.
Thank you. To anybody that might have joined from Elastic: I hope you didn't enjoy their XDR workshop — I'm kidding, of course. So thank you for staying with us, and we'll move on to the purely hands-on portion of the workshop. No more presentations, except for one break in the middle where I'm going to wrap up and maybe show and demo a few things that you couldn't do in your workshop, like Live Terminal. But otherwise — I see from the questions in the chat that people are already continuing, so please carry on. You should be starting with activity two now in the workshop guide; that's on page — let me see — page 12 and following. A few more additional notes — this is actually the world first of this — and somebody please mute themselves, if possible, if you don't mind; that would be nice.
Thank you. It's a world first for this release of the workshop: the most current XDR release, brand new scenarios. So the guide should be 98% correct and complete — if you see something, if you spot an error, feel free to highlight it; we'll be really happy for feedback on how this is going for you. These are more complex scenarios than we're used to running, so stick with it, even if it's 10, 12, 15 steps — hopefully you'll be rewarded with a nice end-to-end investigation and threat hunt. And with that said, if anybody missed the login credentials, please let us know. Otherwise, if you're still logged in, let's go and have some fun.
Here's a good question from Alexei again, about the WildFire analysis report and what it shows — I'm going to answer that live. So essentially, you're right: the analysis report is a summary of what WildFire — which is a threat analysis platform we operate, cloud-based — found in its analysis. There's both a static component, where we scan the file for strings and things that are suspicious, as well as running it in certain VM configurations depending on what kind of file it is — that might be Windows XP, Windows 7, Windows 10 or other emulators. And the way XDR uses WildFire is, first and foremost: if there's anything that's already known in WildFire in the cloud, the agent will directly take that verdict and implement it — create an alert or block, of course. As I said in the introduction, it's trivial to mutate malware and create your own unknown sample; in that case, the agent would locally evaluate whether the file is malicious or not, but still send the file to WildFire for further analysis, for validation — and that will be even more powerful than the local component. So in the WildFire report, you don't see what the file did on your endpoint; you see what we observed in a sandbox. Ideally those things would be the same, but malware is becoming more and more sneaky, so it could try to hide its true intentions in a sandbox. And of course, especially if there's a human adversary — you know, hands on keyboard somewhere, remote-controlling their Trojan — you can only see that in XDR, through telemetry; WildFire wouldn't help you. I hope that makes sense.
I guess, well, in a sense the WildFire verdict is malicious, grayware, or benign, and it's based on what the file did, but it's not as fine-grained as the XDR data we collect on the machine itself. Those are great questions. Please keep them coming, and I hope you're enjoying the workshop so far.
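To make the verdict flow just described a bit more concrete, here is a heavily simplified sketch: take a known cloud verdict first, otherwise evaluate locally while still submitting the sample to the sandbox for validation. All function and parameter names here are hypothetical illustrations, not the actual Cortex XDR agent internals:

```python
def evaluate_file(sha256, known_verdicts, local_analysis, submit_to_sandbox):
    """Illustrative sketch of the WildFire verdict flow described above.
    `known_verdicts`, `local_analysis`, and `submit_to_sandbox` are
    stand-ins, not the real agent API."""
    # 1. If the cloud already has a verdict for this hash, the agent takes
    #    it directly (alert or block, no further work needed).
    if sha256 in known_verdicts:
        return known_verdicts[sha256]
    # 2. Unknown (possibly mutated) sample: the agent evaluates it locally...
    verdict = local_analysis(sha256)
    # 3. ...but still submits it to the cloud sandbox, whose deeper static
    #    and dynamic analysis validates (and can later override) that call.
    submit_to_sandbox(sha256)
    return verdict
```

The point of the sketch is the ordering: a cloud-known sample never needs local analysis or resubmission, while an unknown one gets both.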
I will get to the question in chat in just a minute; let me repeat it really quick first. So, the wrap-up: this is by no means a complete wrap-up of the incident, and I'm not going to repeat the analysis that you all just did, but I wanted to give one or two quick hints for the next stages. There was one step towards the very end where you had to use an XQL search query, and when you're cutting and pasting queries into this window, please pay attention to line breaks. It's not easy; the PDF breaks up those lines and then the query isn't accepted. So you need to make sure to delete those line breaks and have the right zoom factor. Sorry about that. Then the query actually works, and if it doesn't work for some reason after zooming in and out, or the query isn't recognized, reloading the page usually fixes it.
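As a quick aside for anyone scripting around this: the line-break problem from pasting out of a PDF can also be fixed programmatically before the query ever reaches the input window. A minimal sketch in Python; the sample XQL text is illustrative only:

```python
def normalize_query(pasted: str) -> str:
    """Collapse the line breaks and repeated whitespace that PDF
    copy-and-paste introduces into a single-line query string."""
    return " ".join(pasted.split())

# Example: a query broken across lines by a PDF viewer.
pasted = """dataset = xdr_data
| filter event_type = ENUM.PROCESS
| limit 10"""
print(normalize_query(pasted))
# → dataset = xdr_data | filter event_type = ENUM.PROCESS | limit 10
```

Pasting the cleaned-up single-line string avoids the unaccepted-query problem entirely.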
And a short question: could you zoom in a little bit more for our onsite attendees?
Because it's very difficult to read. Yeah, I will now. So, zooming in breaks up the input window for some reason sometimes. If you have any line breaks in here, you need to delete them when you're cutting and pasting the query, which is pre-built into the window. The second point: make sure you adjust the time filter. The data in this environment is not within the last 24 hours; oftentimes it's even more than a month old. There will be instructions with every task so you know what to do about the time filter: you adjust the timeframe, click anywhere, and then you can run your query and hopefully get the results we expect. This is just to prevent you from getting stuck. I'm going to take one question which relates to a specific task in a scenario: it's task three on page 22, where you don't see the name of Wonder Woman. Okay, so let's have a quick look together; it might be a bug in the guide too, you never know. I will open incident 26 in a new tab just to get some screen space. And I'm just going to,
Let's see if that takes me to the right forensic timeline; I'm trying to shortcut a few of the steps here. So you were able to locate the file category but just don't see the name of Wonder Woman. Okay. If you want to look into the metadata, there's a better view for you. First of all, you can see it here, because the process initiator for the alerts we're talking about now will always include the initiator; that's one way to go at it. Now we go to the causality view, highlight the Internet Explorer process, and give it just a few seconds to load the bottom drawer here. Then you should be able to see the file information, including the name, if you scroll to the right; horizontal scrolling is, yeah, certainly part of this view. So there we have our Wonder Woman extended cut. Hopefully that answers the question, Alexei; if not, I'll check with you while everybody else starts with activity number three. That's going to be our next incident. Spoiler alert: it's related to the one after it, so it's going to be almost an hour of trying to get behind this threat actor and what they're doing. I hope you enjoy it; keep asking questions or highlighting issues if you have any, and we'll be here to support you.
This is Sebastian here. I hope you're all heavily investigating and hunting threats. Just a quick intermission: there's still time left, so don't stress yourself. If you're in activity three now, somewhere in the middle, you're absolutely fine; if not, that's okay too. We're all working in parallel, at home or in Berlin, each at our own speed. I just wanted to use the time, so there's not too much silence on the meeting, to show you a working Live Terminal, because we couldn't figure out how to make it available for you to try yourselves in this workshop. So if you want to take a quick look at my screen now that I'm sharing: it's really very easy to use. The idea is to have a quick tool right in the investigation interface, whether you come from hunting down threats, investigating alerts, or whatever, to remotely connect to a machine. I can get there from any direction. This is not your lab environment, this is my personal one. I could say, I want to look at this endpoint, just find it in my list of machines and hit Initiate Live Terminal. I could use the quick launcher, type in a machine name if I know it, or cut and paste it and run Live Terminal, or do it wherever there's a host name anywhere in the interface.
And I mean anywhere, literally. You just need to find an incident originating from a Windows machine and there's probably going to be some sort of pivoting option; or, if nothing else helps, you can mark the host name, cut and paste it into the quick launcher, and you'll find a way to Live Terminal. Once you initiate it, the agent itself brokers this connection securely. And this is live, so that's the real response time you can expect until you've got terminal access, meaning a task manager, for example, a file explorer, which would be the thing used most in the lab, and a command line, which is a full Windows command line plus PowerShell; on Linux it's going to be bash, and on macOS also a standard shell. That allows me to run commands. I just wanted to put this in, since you don't get to try it hands-on yourself: this feature exists, it works, and it's very convenient to use. As soon as you encounter interesting leads where you need to see the contents of files or acquire some additional information, it's a really handy tool. It means nobody has to move to Remote Desktop or TeamViewer to get the information they need for their investigation.
Right. And with that said, please carry on, and I hope everything's working out. If there are any technical issues, let me know in chat, or if you're in Berlin, reach out to your onsite moderator, and I'll be back in 10 or 15 minutes with another commercial. Thank you. Hey everybody, Sebastian here, final interruption for today before the closing remarks. For those of you that made it into activity four: congratulations, this workshop is tough. I mean, if you haven't been trained on this interface, knowing where to click and handling the guide's steps one to 13, really, respect. I want to grab one little workflow out of activity four to ensure nobody gets stuck, because there are some specific handling issues you could run into. This all starts with investigating some potential data exfiltration; we're talking about ID 19, the large upload event. Now, there's heavy pivoting involved in this, meaning you go from one screen to the next, to the next, to the next, and really drill down a lot of levels, and it's easy to get lost. So stay with me for two minutes so you don't get lost. Actually, this is the wrong screen, I got lost myself. Jesus, sorry about that. It's not about the large upload; what I wanted to show you is the multiple discovery workflow,
The multiple discovery commands. For those of you who are there already, you will notice this; it's on page 56 of the guide, so quite far ahead. A few things where you could get lost here: this alert is about multiple consecutive discovery commands in a short timeframe. One thing that's specific about this alert is it can involve multiple applications, and you see the little number two here. That means two applications are involved in this discovery: net.exe and another one. To get to the other one, you need to highlight the hexagon and then cycle through the processes. Then this flips to cmd.exe, and it's the same here on the left: you need to click the elements so you can cycle through and see some of the more detailed information. This is important because the next step suggests that I should right-click this, view the process instances, opening this overlay window, and then drill in even deeper. And as expected, something doesn't load; yeah, the web page doesn't really like zooming very much. Bear with me for a second.
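To give you a feel for the kind of logic behind an alert like this, here is a rough sketch: flag a host when several distinct discovery commands execute within a short window. The command list, window size, and threshold are my own illustrative choices, not the actual Cortex XDR detection:

```python
from datetime import datetime, timedelta

# Illustrative set of classic Windows discovery commands.
DISCOVERY_CMDS = {"whoami.exe", "net.exe", "ipconfig.exe",
                  "systeminfo.exe", "nltest.exe"}

def discovery_burst(events, window=timedelta(minutes=5), threshold=3):
    """events: iterable of (timestamp, process_name) pairs.
    Returns True if at least `threshold` distinct discovery commands
    were executed inside any `window`-sized timespan."""
    hits = sorted((t, p.lower()) for t, p in events
                  if p.lower() in DISCOVERY_CMDS)
    # Slide a window forward from each hit and count distinct commands.
    for t0, _ in hits:
        distinct = {p for t, p in hits if t0 <= t <= t0 + window}
        if len(distinct) >= threshold:
            return True
    return False
```

Three different discovery tools inside five minutes trips the sketch; the same three spread across hours does not, which is exactly why the timeframe matters in the real alert too.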
So again: cmd.exe, show process instances, analyze, taking you to the familiar causality screen, which is the blow-by-blow, step-by-step process execution chain. Then it suggests looking at this in a timeline view, and here's where you could easily get stuck due to the data being a bit older; it's not really real time, this is over a month ago. Everything's bunched up here on the left side. So left-click and slide over it, like you were dragging a selection window over something, so you can progressively pull things apart. You'll see the individual items, and you'll find the actions that you're supposed to see, let's say, to get to the next stage of the investigation.
And with that said, enjoy the rest of the workshop. We'll wrap it up in 15 minutes, but again, if you're right in the middle, just keep going. As I said, I can't guarantee the room in Berlin will still be available, but if you're at home like me, this environment is going to stay up, and we probably won't change the credentials until tomorrow. So if you're having fun, use it while you can. Thank you very much, and we'll be back for the final remarks at five minutes to four. Okay, this is Sebastian one final time; it's the last you'll hear my lovely voice today, for the wrap-up. I hope this was a good use of your time, that you enjoyed the workshop, and that we kept the presentation and marketing to a minimum, or almost a minimum, while you were working on these exercises. And I have a feeling that you need a lot more time.
I was able to secure the lab for today and tomorrow, so it will be reverted on Thursday and everything will disappear and the credentials will reset, but you have the rest of the day and tomorrow to work with the lab. For those that joined really late, I'm going to post the credentials one last time in the Teams chat. Please do not make any changes to incidents; a few times I had to revert some changes in the labs. I mean, these things happen, but if you can, don't change incident status or severity and don't exclude alerts. Yeah, there's not going to be a full wrap-up discussion of the incidents; that would be a bit difficult in this virtual format. If you disagree with some of the findings or wrap-up summaries in the guide, by all means, please use the survey to give us feedback. This is not the survey for the event you're at; this one is purely about the format, including the presenter. Did I do a good job? Did I talk bullshit? Sorry. If you see anything that could be improved, any feedback, please share it. Also, if you'd actually like to be contacted by somebody from Palo Alto Networks because you liked what you saw, use the survey too; you can leave your contact details or use it completely anonymously.
Yeah, that's it from me. Any final words, Christopher, before we wrap it up?
Not from my end. It was a pleasure to join the workshop; I had the opportunity to also do the hands-on stuff myself, and it is a really interesting topic. I think it's a good format, especially for the online attendees, to see what you really do, how you hunt someone down, and what XDR can do. So greetings from Berlin to the online attendees all over the world, and thank you for this great workshop.
You're welcome. There were some questions about this, so before we close the session: there's not really a proof of attendance, but if you want to submit some CPEs for this, for example, why not? And if you get audited, let me know and we'll find something for you. This is purely a workshop, with no exercise, exam, or test at the end. Also, there's not going to be a recording. If you missed this meeting, check our website; about once a month we run the same format over and over again, so you can participate in publicly available hands-on workshops like this, with different moderators and maybe a slightly different style, but overall it's going to be the same content. Yeah, and that's a wrap. So enjoy the KuppingerCole party, I'm sure it's going to be awesome; I wish I was in Berlin, maybe next year. Thanks everybody. Great, thank you very much, and have a good day.