
Notes

Computer Forensics Overview


Definition: Computer forensics is the process of identifying, preserving,
analyzing, and presenting digital evidence in a manner that is legally
admissible. It plays a critical role in solving cybercrimes, such as hacking,
fraud, and unauthorized data breaches. The goal is to recover and preserve
data from various digital devices (computers, phones, etc.) without altering it,
and present it in court if necessary.

Applications: Computer forensics is widely used by law enforcement agencies, corporate investigators, and cybersecurity professionals to investigate criminal cases and security breaches. It helps detect unauthorized access, intellectual property theft, financial fraud, and data tampering.

Objective: The primary objective is to maintain the integrity of evidence throughout the investigation process. By adhering to legal and procedural standards, forensic experts ensure that the evidence remains admissible in court. Additionally, forensics helps identify how a crime was committed and who the responsible party is.

Key Goals of Computer Forensics


1. Evidence Presentation: Digital evidence must be presented in a way that is clear, understandable, and legally sound. This involves preparing technical data in a simplified format for judges, juries, or corporate boards.

2. Forensic Investigation Process: A structured process is followed during investigations. It includes identifying potential evidence, acquiring the data without altering it, and analyzing it thoroughly before reporting findings. This process ensures that no critical steps are missed.

3. Chain of Custody: The chain of custody refers to documenting the timeline of evidence from the moment it is collected until it is presented in court. It includes details of who handled the evidence, when it was transferred, and how it was stored, ensuring that the integrity of the evidence is not compromised.

Volatile Data Collection


Definition: Volatile data is temporary data that resides in active memory (RAM)
or cache and disappears when the system is powered down. It includes
information like open network connections, running processes, login sessions,
and memory dumps. This data is critical to investigations but is difficult to
retrieve once the device is turned off or restarted.

Purpose: The primary purpose of volatile data collection is to preserve data that could provide insight into a cyber incident. Since volatile data can be lost instantly, investigators must collect it first before proceeding to other aspects of the investigation.

Process:

1. Create Response Toolkit: A set of tools (often stored on an external device) that will be used to collect the volatile data. These tools should be tested and verified to ensure they don't alter the state of the system being investigated.

2. Store Initial Response Data: Once a breach or issue is detected, the volatile data is quickly saved to a secure location before the machine is shut down.

3. In-depth Live Response: After the initial response, a more detailed investigation is conducted on live systems, including deeper analysis of memory contents and system logs.

Reasons for Collecting Volatile Data


Forensic Investigation: Volatile data helps recreate the actions leading up to
and during a cyber incident. Without it, key details about active processes or
hacker sessions may be lost.

Live System Analysis: By collecting data from a live system, investigators can detect current states like network connections or user activity that might vanish once the system is shut down.

Malware Detection: Malware often operates in memory without being saved to disk. If volatile data is not captured, this malware might be missed entirely in the investigation.

Incident Response: Collecting volatile data allows cybersecurity teams to quickly respond to threats, minimize damage, and understand how the system was compromised.

Live Response
Definition: Live response refers to actions taken while the system is still
running to analyze and mitigate threats. It involves gathering volatile data
remotely without shutting down the system, allowing investigators to capture
critical data in real time.

Steps:

1. Remote Shell Access: Investigators or responders gain access to the system via a remote terminal (such as PowerShell or SSH) to execute commands that gather forensic data.

2. Collect Forensic Data: Essential data such as network connections, running processes, memory contents, and logs is collected.

3. Contain Threats: If active threats (like a hacker session or malware) are detected, immediate actions can be taken to isolate or neutralize them without waiting for a full post-incident analysis.

Live Response Tools: These include scripts or utilities designed to run in a live
environment without altering system states. Common tools are PowerShell
scripts to retrieve event logs, process data, and network connections, and
other command-line tools for file collection and script execution.

Initial Investigation and Toolkit Creation


Preparation: It’s crucial to ensure that the forensic tools used during an
investigation do not modify the target system. Any changes could corrupt
evidence, making it inadmissible in court or incomplete for analysis.

Steps:

1. Gather Trusted Tools: Use trusted forensic software that has been
validated for use in investigations. These tools should be pre-tested to
confirm they don’t leave any traces or modify data.

2. Label Media: Properly label the devices used to collect evidence (e.g.,
external hard drives, USB sticks). Include the case number, date, and
investigator details to maintain proper documentation.

3. Create Checksum: A checksum is a cryptographic hash (such as MD5 or SHA256) used to verify the integrity of files and tools. Creating a checksum of the toolkit ensures that the tools have not been altered after collection, as sketched below.
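A minimal sketch of creating and later verifying such a checksum manifest with sha256sum, assuming the toolkit binaries live under /mnt/toolkit (the path and filenames are illustrative):

# generate a SHA256 manifest for every tool on the response media (done when the media is prepared)
sha256sum /mnt/toolkit/bin/* > /mnt/toolkit/toolkit.sha256

# before each use, verify that no tool has been altered
sha256sum -c /mnt/toolkit/toolkit.sha256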

Storing Data Collected During Initial Response


Options for Data Storage:

1. Removable Media: Data is often stored on external devices like USB drives
or external hard disks to prevent contamination of evidence on the target
system.

2. Netcat/Cryptcat: These utilities enable data transfer over the network from the compromised system to a forensic workstation; cryptcat adds encryption so the data remains confidential in transit.

3. Documentation: Document the data collection process in detail, including timestamps and tools used. This ensures that every step is legally defensible and transparent.

Volatile Data Collection Steps


Key Data to Collect:

1. System Date and Time: Ensuring time accuracy is critical for correlating
events during an investigation.

2. Logged-in Users: Identify who was logged in at the time of the incident.

3. Time/Date Stamps of Files: Helps track the creation, modification, and access times of key files.

4. Running Processes: Capturing all running processes (using tools like
PsList) helps understand what was active during the breach.

5. Network Connections: Using Netstat or similar tools helps investigators see open network connections, identifying potential malicious connections or remote attackers.

Tools for Data Collection: Various command-line tools and utilities are used to
gather this data, such as:

1. Netstat: Displays active network connections and listening ports.

2. PsList: Provides a list of active processes running on the system.

3. Arp/Nbtstat: Arp shows recent IP-to-MAC address mappings and Nbtstat shows NetBIOS name-table information, useful for identifying devices communicating over the network.

In-depth Live Response


Objective: Once volatile data is captured, forensic experts dive deeper into
the system to analyze processes, memory, and logs. The key is to conduct this
investigation without modifying the original state of the system.

Key Actions:

1. System Event Logs: Using tools like Auditpol or Dumpel, investigators gather security logs to see if any unauthorized access or activities took place.

2. Password Collection: Tools like pwdump can be used to extract hashed passwords from the system for later analysis.

3. RAM Dumping: Extracting the contents of system memory is vital for detecting malware or capturing data that only exists in memory.

Memory Artifacts
Definition: Memory artifacts are traces of data stored in volatile memory
(RAM) that reveal the current state of a system, including running processes,
open files, and network connections. These are often overlooked but crucial
pieces of evidence.

Importance: Malware and cybercriminals often operate in memory, and by
analyzing memory artifacts, investigators can detect hidden or temporarily
active threats that don’t leave a footprint on disk.

Advanced Live Response with Microsoft Defender


Live Response Dashboard: This tool in Microsoft Defender allows incident
responders to initiate and manage live response sessions. It displays important
information like session initiators and allows forensic actions like file retrieval
and remote script execution.

Command Types:

1. Basic Commands: For simple forensic analysis tasks like listing files or
processes.

2. Advanced Commands: For more complex tasks such as memory dumps, file extractions, and running custom scripts to gather specific data.

Limitations:
Session Limits: Only 25 live sessions can be active at a time.

Timeouts: Live response sessions time out after 30 minutes of inactivity to prevent prolonged unmonitored access.

Command Time Limits: Commands have execution limits (10–30 minutes) based on the complexity of the task.

File Size Limits: File transfers in live response are subject to size restrictions,
which might limit the amount of data retrievable during the session.

Live Data Collection from Unix Systems

3. (A) Creating a Response Toolkit


A response toolkit is essential when handling incidents in Unix systems to collect
and analyze live data without compromising evidence integrity. A trusted toolkit
should be compiled from a safe, isolated system, ensuring no malicious programs
are unintentionally used. It includes utilities like:

netstat, ps, w, and last to capture volatile data.

dd, tar, and cp for data duplication.

Cryptographic checksum tools like md5sum or sha256sum for data integrity verification.

Data transmission tools like netcat (or cryptcat for secure transfers) for sending data over networks.
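A minimal sketch of assembling such a toolkit on a clean system, assuming statically linked copies of the utilities are available and the response media is mounted at /mnt/response (all paths are illustrative):

# copy trusted, statically linked binaries from a clean system onto the response media
mkdir -p /mnt/response/bin
cp /bin/ps /bin/netstat /usr/bin/w /usr/bin/last /bin/dd /bin/tar /mnt/response/bin/
# the media is then write-protected or burned to CD so the tools cannot be modified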

3. (B) Storing Information Obtained During the Initial Response


When responding to incidents on Unix systems, it is crucial to store collected data
carefully to preserve its integrity and ensure that evidence can be analyzed later.
There are several storage options:

1. Local Hard Drive Storage:

Storing data on the affected system’s hard drive is a quick method, but it
risks overwriting other critical data.

Ideal when no other immediate storage medium is available, but use caution.

2. External Media (USB Drives, CDs):

Portable media ensures the isolation of collected data from the system
under investigation.

USB drives can be write-protected to avoid further alterations to the evidence.

3. Manual Recording:

In cases where digital storage is risky or unavailable, manually recording information (on paper) ensures that key details are not lost.

4. Network Transfer using netcat or cryptcat :

Transferring collected data over the network to a secure forensic workstation is ideal, especially when the local system is compromised.

cryptcat is preferred for encrypted transfer to protect data during transmission; a sketch of such a transfer follows.
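A minimal sketch of such a transfer with netcat, assuming the forensic workstation listens at 10.0.0.5 on port 2222 (address, port, and filenames are illustrative; option syntax differs slightly between netcat variants, and cryptcat is used in the same way with encryption added):

# on the forensic workstation: listen and save whatever arrives to an evidence file
nc -l -p 2222 > netstat_output.txt

# on the victim system: pipe the output of a trusted command across the network
/mnt/response/bin/netstat -an | nc 10.0.0.5 2222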

Forensic Duplication
Forensic duplication ensures a bit-by-bit mirror image of the target media,
preserving the evidence without altering or destroying potential evidence:

Working Copies: Forensic duplication allows investigators to create copies for analysis without affecting the original data, ensuring the integrity of the evidence for court or audit.

Warranted Use: If there is suspicion of deleted or hidden material, or in severe cases, forensic duplication is mandatory.
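A minimal sketch of a bit-by-bit duplication with dd, assuming the suspect disk is /dev/sda and the image is written to external evidence media (device names and paths are illustrative):

# create a bit-for-bit image, continuing past read errors and padding bad blocks with zeros
dd if=/dev/sda of=/mnt/evidence/sda.img bs=4096 conv=noerror,sync

# hash the original device and the image so the copy can later be proven identical
sha256sum /dev/sda /mnt/evidence/sda.img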

3. (C) Obtaining Volatile Data Prior to Forensic Duplication


Before creating a forensic image, volatile data must be collected because it can
disappear when the system is powered off or rebooted. Volatile data includes:

1. Currently open sockets

2. Running processes

3. System RAM contents

4. Unlinked files (files marked for deletion that remain accessible while a running process still holds them open)

Why Console Access Is Critical:


Accessing the system locally (rather than remotely) avoids generating
network traffic that an attacker could monitor. It also guarantees the use of
trusted commands, not compromised by malicious software.

3. (C) (i) Collecting the Data


To gather volatile data, you should follow a systematic approach, starting with
basic system information and proceeding to open connections and running
processes. The steps include:

1. Execute a trusted shell: Ensure the shell comes from a trusted source so commands are not run in a tampered or infected environment.

2. Record system date and time using the date command:

date

3. Determine logged-in users using the w command to show active sessions:

w

4. Capture file modification/access times using ls with appropriate flags (-l for details, -t to sort by modification time):

ls -lt

5. List open ports using netstat to check for unauthorized connections:

netstat -an

6. Map applications to ports using netstat -p to determine which applications are associated with open sockets.

7. List running processes using ps -aux:

ps -aux

8. Record recent connections using netstat again for historical network activity.

9. Record the current system time again so the timestamps bound the window in which the system could have been manipulated.

10. Record the steps taken (manual recording or using the script command).

11. Generate cryptographic checksums using md5sum or sha256sum for integrity.

3. (C) (ii) Executing a Trusted Shell


Log in locally with root privileges at the victim’s console, avoiding network
access. This minimizes the risk of tipping off an attacker monitoring network
traffic and ensures the environment is clean.
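A minimal sketch of executing a trusted shell from the response media and restricting the command search path to it, assuming the toolkit is mounted at /mnt/response and includes a statically linked shell (paths are illustrative):

# start a statically linked shell from the trusted toolkit instead of the victim's own shell
/mnt/response/bin/sh

# restrict the command search path so only toolkit binaries are executed
PATH=/mnt/response/bin
export PATH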

3. (C) (iii) Recording the System Time and Date


Capturing the local system’s date and time ensures you can later correlate
timestamped events during the investigation. The command to capture this
information is:

date

This timestamp helps track when specific actions were taken during the
incident response.

3. (C) (iv) Determining Who Is Logged On to the System


Using the w command, you can determine:

The user IDs of currently logged-in users.

The system they logged on from.

Their current activity and session time.

Example:
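A representative w listing is shown below; the users, hosts, and times are purely illustrative:

 14:02:31 up 3 days,  2:15,  2 users,  load average: 0.08, 0.05, 0.01
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     tty1     -                11:32    5:00   0.10s  0.05s -bash
alice    pts/0    192.168.1.25     13:47    0.00s  0.20s  0.02s w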

This information is vital for identifying any suspicious user activities or unauthorized access.

3. (C) (v) Recording File Modification, Access, and Inode Change Times

Unix systems provide three key timestamps for each file:

1. Access Time (atime)

2. Modification Time (mtime)

3. Inode Change Time (ctime)

These times can be captured using the ls command:

ls -lt

These timestamps help track when files were last accessed or altered.
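ls -lt lists and sorts by modification time only; the other two timestamps can be viewed by varying the flags, as in this sketch:

ls -lt     # list and sort by modification time (mtime)
ls -ltu    # list and sort by last access time (atime)
ls -ltc    # list and sort by inode change time (ctime)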

3. (C) (vi) Determining Which Ports Are Open


Using netstat, the open ports on a system can be identified:

netstat -an

This lists all open ports, helping identify possible communication channels that an
attacker may use.

3. (C) (vii) Listing Applications Associated with Open Ports


On Linux, you can map applications to their open ports using:

netstat -p

This helps determine which applications are responsible for open network sockets, identifying potential unauthorized services. On Linux, -p is typically combined with -a and -n (netstat -anp) and run as root so the owning process is shown for every socket.

3. (C) (viii) Determining the Running Processes


To get a snapshot of the current running processes, use:

ps -aux

This shows process IDs, users, and resources used, providing critical insight into
what is actively running on the system during the response.

3. (C) (ix) Listing Current and Recent Connections


You can also use netstat to find both current and recent network connections:

netstat -an

This helps identify which systems have connected to the compromised machine,
potentially highlighting the attacker’s machine.

3. (C) (x) Recording System Time Again


Record the system time again using date to mark the conclusion of your activities:

date

This ensures that any modifications to the system’s state are accurately
timestamped.

3. (C) (xi) Recording the Steps Taken


Use the script command or the history command to record all steps taken during
the response. This is critical for legal and procedural purposes:

script
history
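A minimal sketch of wrapping the whole response session in script, assuming the transcript is written to the mounted response media (the path is illustrative):

# start recording everything typed and displayed in this shell to a transcript file
script /mnt/response/response_session.log

# ... run the volatile data collection commands ...

# stop recording and close the transcript
exit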

3. (C) (xii) Recording Cryptographic Checksums


Finally, generate cryptographic checksums for all collected data to ensure it has
not been altered during the investigation:

md5sum <filename>
sha256sum <filename>

3. (D) Performing an In-Depth, Live Response

3. (D) (i) Detecting Loadable Kernel Module (LKM) Rootkits


Rootkits are used by attackers to replace or modify system binaries, allowing them to hide their presence. LKM rootkits go further: they are loaded as kernel modules and modify the kernel itself, allowing an attacker to control a system without detection.

LKMs are dynamically linked into the kernel after system boot, meaning they
can change the system's behavior without requiring a reboot.

To detect LKM rootkits, specialized tools like chkrootkit or rkhunter can be used.
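A minimal sketch of such checks, run from the trusted toolkit where possible (chkrootkit and rkhunter must be installed or carried on the response media, which is an assumption here):

# list currently loaded kernel modules and look for anything unexpected
lsmod

# run common rootkit scanners
chkrootkit
rkhunter --check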

3. (D) (ii) Obtaining System Logs

Most Unix systems store their log files in /var/log. These logs provide a history of
system activities, including login attempts and system errors.
Key log files include:

1. utmp: Tracks current login sessions.

2. wtmp: Keeps a history of logins/logouts.

3. lastlog: Records the last login time for each user.

These logs can be retrieved using dd and transferred using tools like netcat or, for an encrypted transfer, cryptcat.
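A minimal sketch of copying one of these binary logs off the system and reading it on the forensic workstation, assuming the same netcat setup sketched earlier (address and filenames are illustrative):

# on the victim: stream an exact copy of the login history file to the workstation
dd if=/var/log/wtmp | nc 10.0.0.5 2222

# on the workstation, after saving the stream as wtmp.copy: replay the login history
last -f wtmp.copy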

3. (D) (iii) Obtaining Important Configuration Files


Critical configuration files should be reviewed to detect unauthorized changes or
backdoors. These files include:

1. /etc/passwd: Stores user account information.

2. /etc/shadow: Contains encrypted password hashes.

3. /etc/group: Lists group memberships.

4. /etc/hosts: Maps hostnames to IP addresses and can override DNS resolution.

5. /etc/inetd.conf or /etc/xinetd.conf: Lists network services.

6. crontab files: Lists scheduled tasks that could indicate malicious activity.

By analyzing these files, investigators can spot rogue accounts, modified password hashes, unauthorized services, and malicious scheduled tasks left behind by an attacker.
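A minimal sketch of preserving these files for offline review, assuming the response media is mounted at /mnt/response (paths are illustrative and the exact file list varies by distribution):

# bundle the key configuration files and system-wide cron entries for offline analysis
tar cvf /mnt/response/configs.tar /etc/passwd /etc/shadow /etc/group /etc/hosts /etc/inetd.conf /etc/crontab

# checksum the archive immediately so any later alteration can be detected
sha256sum /mnt/response/configs.tar > /mnt/response/configs.tar.sha256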
