Security-Utilizing Data Sources For Incident Response
Okay, from a cybersecurity standpoint, anytime an attacker does something, that attacker will
leave something behind. This is a common principle borrowed from a man named Edmond
Locard, a French criminologist who lived quite some time ago.
He crystallized a lot of the concepts used in forensics and criminal investigations. The
idea, again, is that a threat actor, a bad guy, as they go in to commit a crime, will bring
something to the crime and leave something behind. And the question is, what are those things?
Well, another thing that is important to understand here are the concepts of the tactics, the
techniques, and the procedures of an attacker. The tactic is the attacker's objective. The
technique is how the threat actor, the threat agent, goes about achieving it. And the procedure
is the actual steps taken in order to conduct an attack.
And these are important concepts, because when it comes to utilizing data sources, we as
cybersecurity professionals have a lot of information available, because hackers do leave
things behind. The question is, do we see enough data to identify a trend, to really identify
what a hacker has been doing?
Part of the answer is a data visualization step. A lot of people say, well, applications will
do this for you, and this is a screenshot of something called Kibana. Kibana is part of what
they call the Elastic Stack, which they used to call the ELK Stack. What is it? Elasticsearch,
Logstash, and Kibana. These are open source tools used in cybersecurity to capture data,
slice and dice it, and then visualize it, so that instead of looking at ugly log files, you
can actually look at trends.
The idea is that before you can visualize, you have to make sure that you're getting data
from relevant sources, and that you sift through a lot of the chaff, a lot of the stuff
that's not important, to really get down to the information that helps you.
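To make that sifting step concrete, here is a minimal sketch in Python of the "filter, then count" idea that tools like the Elastic Stack automate: keep only the log lines that matter, then tally them so a trend stands out. The sample lines, keywords, and message format are hypothetical, not taken from any real sensor.

```python
from collections import Counter

# Hypothetical syslog-style lines; a real pipeline would read these from files.
RAW_LINES = [
    "Oct 10 12:00:01 bastion snort: ICMP PING",
    "Oct 10 12:00:02 bastion snort: MS-SQL Worm propagation attempt",
    "Oct 10 12:00:03 bastion sshd: session opened for user admin",
    "Oct 10 12:00:04 bastion snort: MS-SQL Worm propagation attempt",
]

# What counts as "relevant" is an analyst's choice; these are example keywords.
KEYWORDS = ("worm", "overflow")

def relevant(line: str) -> bool:
    """Keep only lines that mention something we care about."""
    return any(k in line.lower() for k in KEYWORDS)

def trend_counts(lines):
    """Count relevant lines by their message text so trends stand out."""
    counts = Counter()
    for line in lines:
        if relevant(line):
            # Everything after the "program: " prefix is the message.
            counts[line.split(": ", 1)[1]] += 1
    return counts

print(trend_counts(RAW_LINES))
```

The point of the sketch is the order of operations: filter out the chaff first, then count, so the visualization (or the print statement here) only reflects data worth looking at.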
So I've been talking about TTPs and data visualization to set the stage for understanding
the kinds of data sources that you will use. Let's take a look at some of the data sources
you might use when you're responding to an incident and trying to determine whether you've
been hacked, or whether something else is going wrong.
I've opened up PowerShell here, and I'm taking a look at some log files. In this case these
are old logs from an intrusion detection system called Snort; that's why it says Snort logs.
Snort has been around a long time. It was given that name because it does a great job of
sniffing the network and looking for problems.
So this gives you a crude, very simple idea of what a raw syslog file looks like. You've got
your date, an old date in this case. You've got your source, where the entry came from, in
this case a bastion host.
So you're looking at something that's probably coming from some sort of firewall, one way to
look at it. And in this case you've got some ICMP packets, and you've got your source and
your destination. Okay, 70.81.243, going to, there's the arrow, 11.11.79.100 in this case.
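Pulling those same fields (date, reporting host, protocol, source, destination) out of a line like this is a routine first step. Here is a rough sketch, assuming a syslog-style line modeled on classic Snort fast-alert output; the exact line shown is invented, not copied from the file on screen.

```python
import re

# A hypothetical Snort fast-alert line forwarded through syslog.
LINE = "Oct 10 12:00:01 bastion snort: ICMP PING {ICMP} 70.81.243.10 -> 11.11.79.100"

PATTERN = re.compile(
    r"^(?P<date>\w+ +\d+ [\d:]+) "          # syslog timestamp
    r"(?P<host>\S+) snort: "                # reporting host (the bastion)
    r"(?P<msg>.+?) \{(?P<proto>\w+)\} "     # alert message and protocol
    r"(?P<src>[\d.]+) -> (?P<dst>[\d.]+)$"  # source -> destination
)

def parse_alert(line: str):
    """Return the alert's fields as a dict, or None if the line doesn't match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

fields = parse_alert(LINE)
print(fields["src"], "->", fields["dst"])
```

Returning `None` for non-matching lines matters in practice: real log files mix alert formats, and a parser that raises on every odd line won't survive contact with raw data.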
So you can see your ping there. Fairly innocuous. Some people say ICMP is evil; well, it has
a job to do. It's a messaging protocol, and there are reasons why it's there. This doesn't
seem to be a truly problematic thing. Of course, here you have, at least according to Snort,
an attempt by a worm to move from one system, one computer, to the next. Okay, that's
propagation, right? What it's done is noticed a particular suspicious port, some sort of
suspicious activity, on high ephemeral ports, 1067 to 1434, between systems.
But look, if you take a look at this, you can see that there's not only propagation; there's
also something to do with Microsoft SQL, MS SQL, some sort of attempt going on, and they look
related. You're effectively looking at correlation now.
Snort has basically said, look, I don't know about the ICMP packets, but we're looking at
suspicious traffic where a worm is attempting to move from one system to another, and also
attempting an overflow, probably a buffer overflow.
If you take a look, there's also this ICMP destination unreachable. Now that ICMP is actually
looking pretty interesting, because that could be the worm attempting to discover where to
spread next. These are the types of data you look at as an incident responder.
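The correlation step the Snort log illustrates can be sketched very simply: group alerts by source address and flag any source that shows both worm-propagation and overflow activity, the pairing noticed above. The alert tuples here are hypothetical stand-ins for parsed log entries.

```python
from collections import defaultdict

# Hypothetical (source IP, alert message) pairs, as if parsed from Snort logs.
alerts = [
    ("10.0.0.5", "MS-SQL Worm propagation attempt"),
    ("10.0.0.5", "MS-SQL version overflow attempt"),
    ("10.0.0.9", "ICMP PING"),
]

def correlate(alerts):
    """Return sources whose alerts mention both propagation and an overflow."""
    by_src = defaultdict(set)
    for src, msg in alerts:
        by_src[src].add(msg)
    return [
        src for src, msgs in by_src.items()
        if any("propagation" in m for m in msgs)
        and any("overflow" in m for m in msgs)
    ]

print(correlate(alerts))
```

A lone ping from 10.0.0.9 doesn't make the list; the source showing both behaviors does. That's the narrative-building idea in miniature: individual alerts are ambiguous, but related alerts from the same source tell a story.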
Here's another log. This one is not from an intrusion detection system, that was Snort; this
one is from a proxy server. Squid is a long-established proxy server that lets you look at
web applications, at layer seven kinds of information. And once again you're looking at how
you can correlate certain bits of information to create a narrative.
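As a companion to the Snort example, here is a sketch of reading one line of Squid's native access.log format: epoch timestamp, elapsed milliseconds, client IP, result/status code, bytes, method, URL, and a few trailing fields. The sample line itself is made up for illustration.

```python
from datetime import datetime, timezone

# A made-up line in Squid's native access.log format.
LINE = ("1286536309.450 190 192.168.0.224 TCP_MISS/200 864 GET "
        "http://example.com/page - DIRECT/93.184.216.34 text/html")

def parse_squid(line: str) -> dict:
    """Split a native-format Squid access.log line into labeled fields."""
    ts, elapsed, client, result, size, method, url, *_ = line.split()
    return {
        "time": datetime.fromtimestamp(float(ts), tz=timezone.utc),
        "elapsed_ms": int(elapsed),
        "client": client,
        "status": int(result.split("/")[1]),  # e.g. TCP_MISS/200 -> 200
        "bytes": int(size),
        "method": method,
        "url": url,
    }

entry = parse_squid(LINE)
print(entry["client"], entry["method"], entry["url"])
```

Once both the IDS alerts and the proxy entries are parsed into dictionaries like this, correlating them by client address or time window is the same grouping exercise as before, just across two data sources instead of one.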
So I started this talk with TTPs and with that approach from this Edmond Locard person. The
whole concept is to come up with a proper, useful narrative. And these are the types of raw
data files that you'll be looking at. Another way you could do it: I've got a live instance
here of an application called Kibana.
Here I've got the Elastic Stack running on a separate system, and I've used my web browser to
access it. And this is where I noticed that, well, I conducted a couple of denial of service
attacks against some systems, three attacks in fact, and it discovered them right away. You
can take a look at the raw log files, which are kind of ugly and all that, or you can look at
the pretty ones. You're going to need to strike a balance here: pretty trends in log files
can mislead you, so you'll still have to read the raw log files in order to come up with a
proper correlation and the proper information, so that you can then issue the right alert, or
decide not to issue one.
So it will come down to your ability to read these log files and these data sources, come up
with the right narrative, and then determine whether to trigger some form of alert.