Cybersecurity Course Note

Risk Management

A vulnerability is a weakness that can be exploited by a threat. Therefore, organizations need to regularly inspect for vulnerabilities within their systems. Some vulnerabilities include:
 ProxyLogon: A pre-authentication vulnerability that affects the Microsoft Exchange server. This means a threat actor can bypass the user authentication process to deploy malicious code from a remote location.
 ZeroLogon: A vulnerability in Microsoft’s Netlogon authentication protocol. An authentication protocol is a way to verify a person's identity. Netlogon is a service that verifies a user’s identity before allowing access to network resources.
 Log4Shell: Allows attackers to run Java code on someone else’s computer
or leak sensitive information. It does this by enabling a remote attacker to
take control of devices connected to the internet and run malicious code.
 PetitPotam: Affects Windows New Technology Local Area Network (LAN) Manager (NTLM). It is a credential theft technique that allows a LAN-based attacker to coerce a server into initiating an authentication request on the attacker's behalf.
 Security logging and monitoring failures: Insufficient logging and monitoring capabilities that result in attackers exploiting vulnerabilities without the organization knowing it.
 Server-side request forgery: Allows attackers to manipulate a server-
side application into accessing and updating backend resources. It can
also allow threat actors to steal data.

Specific frameworks and controls


There are many different frameworks and controls that organizations can use to
remain compliant with regulations and achieve their security goals. Frameworks
covered in this reading are the Cyber Threat Framework (CTF) and the
International Organization for Standardization/International Electrotechnical
Commission (ISO/IEC) 27001. Several common security controls, used alongside
these types of frameworks, are also explained.
Cyber Threat Framework (CTF)
According to the Office of the Director of National Intelligence, the CTF was
developed by the U.S. government to provide “a common language for
describing and communicating information about cyber threat activity.” By
providing a common language to communicate information about threat activity,
the CTF helps cybersecurity professionals analyze and share information more
efficiently. This allows organizations to improve their response to the constantly
evolving cybersecurity landscape and threat actors' many tactics and
techniques.
International Organization for Standardization/International
Electrotechnical Commission (ISO/IEC) 27001
An internationally recognized and used framework is ISO/IEC 27001. The ISO
27000 family of standards enables organizations of all sectors and sizes to
manage the security of assets, such as financial information, intellectual
property, employee data, and information entrusted to third parties. This
framework outlines requirements for an information security management
system, best practices, and controls that support an organization’s ability to
manage risks. Although the ISO/IEC 27001 framework does not require the use of
specific controls, it does provide a collection of controls that organizations can
use to improve their security posture.

Controls
Controls are used alongside frameworks to reduce the possibility and impact of a
security threat, risk, or vulnerability. Controls can be physical, technical, and
administrative and are typically used to prevent, detect, or correct security
issues.
Examples of physical controls:
 Gates, fences, and locks
 Security guards
 Closed-circuit television (CCTV), surveillance cameras, and motion
detectors
 Access cards or badges to enter office spaces
Examples of technical controls:
 Firewalls
 Multi-factor authentication (MFA)
 Antivirus software
Examples of administrative controls:
 Separation of duties
 Authorization
 Asset classification

Security principles
In the workplace, security principles are embedded in your daily tasks. Whether
you are analyzing logs, monitoring a security information and event
management (SIEM) dashboard, or using a vulnerability scanner, you will use
these principles in some way.
Previously, you were introduced to several OWASP security principles. These
included:
 Minimize attack surface area: Attack surface refers to all the potential
vulnerabilities a threat actor could exploit.
 Principle of least privilege: Users have the least amount of access
required to perform their everyday tasks.
 Defense in depth: Organizations should have varying security controls
that mitigate risks and threats.
 Separation of duties: Critical actions should rely on multiple people, each of whom follows the principle of least privilege.
 Keep security simple: Avoid unnecessarily complicated solutions.
Complexity makes security difficult.
 Fix security issues correctly: When security incidents occur, identify
the root cause, contain the impact, identify vulnerabilities, and conduct
tests to ensure that remediation is successful.

Additional OWASP security principles


Next, you’ll learn about four additional OWASP security principles that
cybersecurity analysts and their teams use to keep organizational operations and
people safe.
Establish secure defaults
This principle means that the optimal security state of an application is also its
default state for users; it should take extra work to make the application
insecure.
Fail securely
Fail securely means that when a control fails or stops, it should do so by
defaulting to its most secure option. For example, when a firewall fails it should
simply close all connections and block all new ones, rather than start accepting
everything.
Don’t trust services
Many organizations work with third-party partners, which often have different security policies than the organization does. The organization shouldn’t implicitly trust that its partners’ systems are secure. For example, if a third-party vendor tracks reward points for airline customers, the airline should ensure that the balance is accurate before sharing that information with its customers.
Avoid security by obscurity
The security of key systems should not rely on keeping details hidden. Consider the following example, adapted from the OWASP Mobile Top 10 (2016): the security of an application should not rely on keeping the source code secret.
Its security should rely upon many other factors, including reasonable password
policies, defense in depth, business transaction limits, solid network architecture,
and fraud and audit controls.

Goals and objectives of an audit


The goal of an audit is to ensure an organization's information technology (IT)
practices are meeting industry and organizational standards. The objective is to
identify and address areas of remediation and growth. Audits provide direction
and clarity by identifying what the current failures are and developing a plan to
correct them.
Security audits must be performed to safeguard data and avoid penalties and
fines from governmental agencies. The frequency of audits is dependent on local
laws and federal compliance regulations.
Factors that affect audits
Factors that determine the types of audits an organization implements include:
 Industry type
 Organization size
 Ties to the applicable government regulations
 A business’s geographical location
 A business decision to adhere to a specific regulatory compliance

The role of frameworks and controls in audits


Along with compliance, it’s important to mention the role of frameworks and
controls in security audits. Frameworks such as the National Institute of
Standards and Technology Cybersecurity Framework (NIST CSF) and the
international standard for information security (ISO 27000) series are designed to
help organizations prepare for regulatory compliance security audits. By
adhering to these and other relevant frameworks, organizations can save time
when conducting external and internal audits. Additionally, frameworks, when
used alongside controls, can support organizations’ ability to align with
regulatory compliance requirements and standards.
There are three main categories of controls to review during an audit, which are
administrative and/or managerial, technical, and physical controls.

Audit checklist
It’s necessary to create an audit checklist before conducting an audit. A checklist
is generally made up of the following areas of focus:
Identify the scope of the audit
 The audit should:
o List assets that will be assessed (e.g., firewalls are configured
correctly, PII is secure, physical assets are locked, etc.)
o Note how the audit will help the organization achieve its desired
goals
o Indicate how often an audit should be performed

o Include an evaluation of organizational policies, protocols, and procedures to make sure they are working as intended and being implemented by employees
Complete a risk assessment
 A risk assessment is used to evaluate identified organizational risks
related to budget, controls, internal processes, and external standards
(i.e., regulations).
Conduct the audit
 When conducting an internal audit, you will assess the security of the
identified assets listed in the audit scope.
Create a mitigation plan
 A mitigation plan is a strategy established to lower the level of risk and
potential costs, penalties, or other issues that can negatively affect the
organization’s security posture.
Communicate results to stakeholders
 The end result of this process is providing a detailed report of findings,
suggested improvements needed to lower the organization's level of risk,
and compliance regulations and standards the organization needs to
adhere to.

More about cybersecurity tools


Previously, you learned about several tools that are used by cybersecurity team
members to monitor for and identify potential security threats, risks, and
vulnerabilities. In this reading, you’ll learn more about common open-source and
proprietary cybersecurity tools that you may use as a cybersecurity professional.
Open-source tools
Open-source tools are often free to use and can be user friendly. The objective of
open-source tools is to provide users with software that is built by the public in a
collaborative way, which can result in the software being more secure.
Additionally, open-source tools allow for more customization by users, resulting
in a variety of new services built from the same open-source software package.
Software engineers create open-source projects to improve software and make it
available for anyone to use, as long as the specified license is respected. The
source code for open-source projects is readily available to users, as well as the
training material that accompanies them. Having these sources readily available
allows users to modify and improve project materials.
Proprietary tools
Proprietary tools are developed and owned by a person or company, and users
typically pay a fee for usage and training. The owners of proprietary tools are the
only ones who can access and modify the source code. This means that users
generally need to wait for updates to be made to the software, and at times they
might need to pay a fee for those updates. Proprietary software generally allows
users to modify a limited number of features to meet individual and
organizational needs. Examples of proprietary tools include Splunk® and
Chronicle SIEM tools.
Common misconceptions
There is a common misconception that open-source tools are less effective and
not as safe to use as proprietary tools. However, developers have been creating
open-source materials for years that have become industry standards. Although
it is true that threat actors have attempted to manipulate open-source tools,
because these tools are open source it is actually harder for people with
malicious intent to successfully cause harm. The wide exposure and immediate
access to the source code by well-intentioned and informed users and
professionals makes it less likely for issues to occur, because they can fix issues
as soon as they’re identified.
Examples of open-source tools
In security, there are many tools in use that are open-source and commonly
available. Two examples are Linux and Suricata.
Linux
Linux is an open-source operating system that is widely used. It allows you to
tailor the operating system to your needs using a command-line interface. An
operating system is the interface between computer hardware and the user.
It’s used to communicate with the hardware of a computer and manage software
applications.
There are multiple versions, or distributions, of Linux that exist to accomplish specific tasks. Linux and its command-line interface will be discussed in detail later in the certificate program.
Suricata
Suricata is an open-source network analysis and threat detection software.
Network analysis and threat detection software is used to inspect network traffic
to identify suspicious behavior and generate network data logs. The detection
software finds activity across users, computers, or Internet Protocol (IP)
addresses to help uncover potential threats, risks, or vulnerabilities.
Suricata was developed by the Open Information Security Foundation (OISF).
OISF is dedicated to maintaining open-source use of the Suricata project to
ensure it’s free and publicly available. Suricata is widely used in the public and private sectors, and it integrates with many SIEM tools and other security tools.
Suricata will also be discussed in greater detail later in the program.

Splunk
Splunk offers different SIEM tool options: Splunk® Enterprise and Splunk® Cloud.
Both allow you to review an organization's data on dashboards. This helps
security professionals manage an organization's internal infrastructure by
collecting, searching, monitoring, and analyzing log data from multiple sources to
obtain full visibility into an organization’s everyday operations.
Review the following Splunk dashboards and their purposes:
Security posture dashboard
The security posture dashboard is designed for security operations centers
(SOCs). It displays the last 24 hours of an organization’s notable security-related
events and trends and allows security professionals to determine if security
infrastructure and policies are performing as designed. Security analysts can use
this dashboard to monitor and investigate potential threats in real time, such as
suspicious network activity originating from a specific IP address.
Executive summary dashboard
The executive summary dashboard analyzes and monitors the overall health of
the organization over time. This helps security teams improve security measures
that reduce risk. Security analysts might use this dashboard to provide high-level
insights to stakeholders, such as generating a summary of security incidents and
trends over a specific period of time.
Incident review dashboard
The incident review dashboard allows analysts to identify suspicious patterns
that can occur in the event of an incident. It assists by highlighting higher risk
items that need immediate review by an analyst. This dashboard can be very
helpful because it provides a visual timeline of the events leading up to an
incident.
Risk analysis dashboard
The risk analysis dashboard helps analysts identify risk for each risk object (e.g.,
a specific user, a computer, or an IP address). It shows changes in risk-related
activity or behavior, such as a user logging in outside of normal working hours or
unusually high network traffic from a specific computer. A security analyst might
use this dashboard to analyze the potential impact of vulnerabilities in critical
assets, which helps analysts prioritize their risk mitigation efforts.

Chronicle
Chronicle is a cloud-native SIEM tool from Google that retains, analyzes, and
searches log data to identify potential security threats, risks, and vulnerabilities.
Chronicle allows you to collect and analyze log data according to:
 A specific asset
 A domain name
 A user
 An IP address
Chronicle provides multiple dashboards that help analysts monitor an
organization’s logs, create filters and alerts, and track suspicious domain names.
Review the following Chronicle dashboards and their purposes:
Enterprise insights dashboard
The enterprise insights dashboard highlights recent alerts. It identifies suspicious
domain names in logs, known as indicators of compromise (IOCs). Each result is
labeled with a confidence score to indicate the likelihood of a threat. It also
provides a severity level that indicates the significance of each threat to the
organization. A security analyst might use this dashboard to monitor login or
data access attempts related to a critical asset—like an application or system—
from unusual locations or devices.
Data ingestion and health dashboard
The data ingestion and health dashboard shows the number of event logs, log
sources, and success rates of data being processed into Chronicle. A security
analyst might use this dashboard to ensure that log sources are correctly
configured and that logs are received without error. This helps ensure that log-related issues are addressed so that the security team has access to the log data they need.
IOC matches dashboard
The IOC matches dashboard indicates the top threats, risks, and vulnerabilities to
the organization. Security professionals use this dashboard to observe domain
names, IP addresses, and device IOCs over time in order to identify trends. This
information is then used to direct the security team’s focus to the highest priority
threats. For example, security analysts can use this dashboard to search for
additional activity associated with an alert, such as a suspicious user login from
an unusual geographic location.
Main dashboard
The main dashboard displays a high-level summary of information related to the
organization’s data ingestion, alerting, and event activity over time. Security
professionals can use this dashboard to access a timeline of security events—
such as a spike in failed login attempts— to identify threat trends across log
sources, devices, IP addresses, and physical locations.
Rule detections dashboard
The rule detections dashboard provides statistics related to incidents with the
highest occurrences, severities, and detections over time. Security analysts can
use this dashboard to access a list of all the alerts triggered by a specific
detection rule, such as a rule designed to alert whenever a user opens a known
malicious attachment from an email. Analysts then use those statistics to help
manage recurring incidents and establish mitigation tactics to reduce an
organization's level of risk.
User sign in overview dashboard
The user sign in overview dashboard provides information about user access
behavior across the organization. Security analysts can use this dashboard to
access a list of all user sign-in events to identify unusual user activity, such as a
user signing in from multiple locations at the same time. This information is then
used to help mitigate threats, risks, and vulnerabilities to user accounts and the
organization’s applications.

Playbook overview
A playbook is a manual that provides details about any operational action.
Essentially, a playbook provides a predefined and up-to-date list of steps to
perform when responding to an incident.
Playbooks are accompanied by a strategy. The strategy outlines expectations of
team members who are assigned a task, and some playbooks also list the
individuals responsible. The outlined expectations are accompanied by a plan.
The plan dictates how the specific task outlined in the playbook must be
completed.
Playbooks should be treated as living documents, which means that they are
frequently updated by security team members to address industry changes and
new threats. Playbooks are generally managed as a collaborative effort, since
security team members have different levels of expertise.
Updates are often made if:
 A failure is identified, such as an oversight in the outlined policies and
procedures, or in the playbook itself.
 There is a change in industry standards, such as changes in laws or
regulatory compliance.
 The cybersecurity landscape changes due to evolving threat actor tactics
and techniques.
Types of playbooks
Playbooks sometimes cover specific incidents and vulnerabilities. These might
include ransomware, vishing, business email compromise (BEC), and other
attacks previously discussed. Incident and vulnerability response playbooks are
very common, but they are not the only types of playbooks organizations
develop.
Each organization has a different set of playbook tools, methodologies, protocols,
and procedures that they adhere to, and different individuals are involved at
each step of the response process, depending on the country they are in. For
example, incident notification requirements from government-imposed laws and
regulations, along with compliance standards, affect the content in the
playbooks. These requirements are subject to change based on where the
incident originated and the type of data affected.
Incident and vulnerability response playbooks
Incident and vulnerability response playbooks are commonly used by entry-level
cybersecurity professionals. They are developed based on the goals outlined in
an organization’s business continuity plan. A business continuity plan is an
established path forward allowing a business to recover and continue to operate
as normal, despite a disruption like a security breach.
These two types of playbooks are similar in that they both contain predefined
and up-to-date lists of steps to perform when responding to an incident.
Following these steps is necessary to ensure that you, as a security professional,
are adhering to legal and organizational standards and protocols. These
playbooks also help minimize errors and ensure that important actions are
performed within a specific timeframe.
When an incident, threat, or vulnerability occurs or is identified, the level of risk to the organization depends on the potential damage to its assets. A basic formula for determining the level of risk is that risk equals the likelihood of a threat multiplied by its potential impact. For this reason, a sense of urgency is essential. Following the steps
outlined in playbooks is also important if any forensic task is being carried out.
Mishandling data can easily compromise forensic data, rendering it unusable.
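For illustration, here is a minimal Python sketch of that formula, assuming a simple 1-to-5 rating scale for both factors (the scale and function name are illustrative, not part of any standard):

def risk_score(likelihood: int, impact: int) -> int:
    # Rate risk as likelihood x impact, each scored from 1 (low) to 5 (high).
    return likelihood * impact

# A likely threat (4) with severe impact (5) scores 20 out of a possible 25,
# signaling that the corresponding playbook steps deserve urgent attention.
print(risk_score(4, 5))  # 20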
Common steps included in incident and vulnerability playbooks include:
 Preparation
 Detection
 Analysis
 Containment
 Eradication
 Recovery from an incident
Additional steps include performing post-incident activities, and a coordination of
efforts throughout the investigation and incident and vulnerability response
stages.

Networks
The TCP/IP model
The TCP/IP model is a framework used to visualize how data is organized and
transmitted across a network. This model helps network engineers and network
security analysts conceptualize processes on the network and communicate
where disruptions or security threats occur.
The TCP/IP model has four layers: the network access layer, internet layer,
transport layer, and application layer. When troubleshooting issues on the
network, security professionals can analyze which layers were impacted by an
attack based on what processes were involved in an incident.

Network access layer


The network access layer, sometimes called the data link layer, deals with the
creation of data packets and their transmission across a network. This layer
corresponds to the physical hardware involved in network transmission. Hubs,
modems, cables, and wiring are all considered part of this layer. The address
resolution protocol (ARP) is part of the network access layer. Since MAC
addresses are used to identify hosts on the same physical network, ARP is
needed to map IP addresses to MAC addresses for local network communication.
Internet layer
The internet layer, sometimes referred to as the network layer, is responsible for ensuring delivery to the destination host, which potentially resides on a different network. It ensures IP addresses are attached to data packets to indicate the location of the sender and receiver. The internet layer also determines which protocol is responsible for delivering the data packets. Here are some of the common protocols that operate at the internet layer:
 Internet Protocol (IP). IP sends the data packets to the correct
destination and relies on the Transmission Control Protocol/User Datagram
Protocol (TCP/UDP) to deliver them to the corresponding service. IP
packets allow communication between two networks. They are routed
from the sending network to the receiving network. TCP in particular
retransmits any data that is lost or corrupt.
 Internet Control Message Protocol (ICMP). The ICMP shares error
information and status updates of data packets. This is useful for detecting
and troubleshooting network errors. The ICMP reports information about
packets that were dropped or that disappeared in transit, issues with
network connectivity, and packets redirected to other routers.
TCP is a connection-based protocol and UDP is connectionless. TCP is more reliable but transmits data more slowly, while UDP is less reliable but transmits data more quickly.
Transport layer
The transport layer is responsible for delivering data between two systems or
networks and includes protocols to control the flow of traffic across a network.
TCP and UDP are the two transport protocols that occur at this layer.
Transmission Control Protocol
The Transmission Control Protocol (TCP) is an internet communication
protocol that allows two devices to form a connection and stream data. It
ensures that data is reliably transmitted to the destination service. TCP contains
the port number of the intended destination service, which resides in the TCP
header of a TCP/IP packet.
User Datagram Protocol
The User Datagram Protocol (UDP) is a connectionless protocol that does not
establish a connection between devices before transmissions. It is used by
applications that are not concerned with the reliability of the transmission. Data
sent over UDP is not tracked as extensively as data sent using TCP. Because UDP
does not establish network connections, it is used mostly for performance
sensitive applications that operate in real time, such as video streaming.
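The difference between the two protocols is visible in just a few lines of Python. In the sketch below, the TCP handshake happens inside create_connection before any data is streamed, while the UDP datagram is sent with no setup at all (the hosts and ports are illustrative placeholders):

import socket

# TCP: establish a connection, then stream data reliably.
with socket.create_connection(("example.com", 80), timeout=5) as tcp_sock:
    tcp_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(tcp_sock.recv(120))  # data arrives in order or is retransmitted

# UDP: no connection setup; the datagram is sent without delivery tracking.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"ping", ("127.0.0.1", 9999))  # fire and forget
udp_sock.close()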
Application layer
The application layer in the TCP/IP model is similar to the application,
presentation, and session layers of the OSI model. The application layer is
responsible for making network requests or responding to requests. This layer
defines which internet services and applications any user can access. Protocols in
the application layer determine how the data packets will interact with receiving
devices. Some common protocols used on this layer are:
 Hypertext transfer protocol (HTTP)
 Simple mail transfer protocol (SMTP)
 Secure shell (SSH)
 File transfer protocol (FTP)
 Domain name system (DNS)
Application layer protocols rely on underlying layers to transfer the data across
the network.
Summary: Network Protocols
 Transmission control protocol (TCP) – Establishes connections between two
devices.
 Internet protocol (IP) – Used for routing and addressing data packets as
they travel between devices on a network.

The TCP/IP model vs. the OSI model


The TCP/IP model is a framework used to visualize how data is organized and
transmitted across a network. This model helps network engineers and security
analysts conceptualize processes on the network and communicate where
disruptions or security threats occur.
The TCP/IP model has 4 layers: the network access layer, internet layer, transport
layer, and application layer.
The OSI model is a standardized concept that describes the seven layers
computers use to communicate and send data over the network.
The OSI model has 7 layers: the physical layer, data link layer, network layer,
transport layer, session layer, presentation layer and application layer.
Network and security professionals often use these models to communicate with
each other about potential sources of problems or security threats when they
occur. When analyzing network events, security professionals can determine
what layer or layers an attack occurred in based on what processes were
involved in the incident.
Layer 7: Application layer
The application layer includes processes that directly involve the everyday user.
This layer includes all of the networking protocols that software applications use
to connect a user to the internet. This characteristic is the identifying feature of
the application layer—user connection to the internet via applications and
requests.
An example of a type of communication that happens at the application layer is
using a web browser. The internet browser uses HTTP or HTTPS to send and
receive information from the website server. An email application uses simple mail transfer protocol (SMTP) to send and receive email information. Also, web
browsers use the domain name system (DNS) protocol to translate website
domain names into IP addresses which identify the web server that hosts the
information for the website.
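As a small illustration, the Python snippet below exercises this layer: urlopen resolves the domain name through DNS behind the scenes, then communicates with the web server over HTTPS (example.com is a placeholder domain):

import urllib.request

with urllib.request.urlopen("https://example.com") as resp:
    print(resp.status)                   # 200 if the request succeeded
    print(resp.headers["Content-Type"])  # metadata returned by the server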
Layer 6: Presentation layer
Functions at the presentation layer involve data translation and encryption for
the network. This layer adds to and replaces data with formats that can be
understood by applications (layer 7) on both sending and receiving systems.
Formats at the user end may be different from those of the receiving system.
Processes at the presentation layer require the use of a standardized format.
Some formatting functions that occur at layer 6 include encryption, compression,
and confirmation that the character code set can be interpreted on the receiving
system. One example of encryption that takes place at this layer is SSL, which
encrypts data between web servers and browsers as part of websites with
HTTPS.
Layer 5: Session layer
A session describes when a connection is established between two devices. An
open session allows the devices to communicate with each other. Session layer
protocols keep the session open while data is being transferred and terminate
the session once the transmission is complete.
The session layer is also responsible for activities such as authentication,
reconnection, and setting checkpoints during a data transfer. If a session is
interrupted, checkpoints ensure that the transmission picks up at the last session
checkpoint when the connection resumes. Sessions include a request and
response between applications. Functions in the session layer respond to
requests for service from processes in the presentation layer (layer 6) and send
requests for services to the transport layer (layer 4).
Layer 4: Transport layer
The transport layer is responsible for delivering data between devices. This layer
also handles the speed of data transfer, flow of the transfer, and breaking data
down into smaller segments to make them easier to transport. These segments
need to be reassembled at their destination so they can be processed at the
session layer (layer 5). The speed and rate of the transmission also has to match
the connection speed of the destination system. TCP and UDP are transport layer
protocols.
Layer 3: Network layer
The network layer oversees receiving frames from the data link layer (layer 2) and delivering them to the intended destination. The intended destination can
be found based on the address that resides in the frame of the data packets.
Data packets allow communication between two networks. These packets include
IP addresses that tell routers where to send them. They are routed from the
sending network to the receiving network.
Layer 2: Data link layer
The data link layer organizes sending and receiving data packets within a single
network. The data link layer is home to switches on the local network and
network interface cards on local devices.
Protocols like network control protocol (NCP), high-level data link control (HDLC),
and synchronous data link control protocol (SDLC) are used at the data link layer.
Layer 1: Physical layer
As the name suggests, the physical layer corresponds to the physical hardware
involved in network transmission. Hubs, modems, and the cables and wiring that
connect them are all considered part of the physical layer. To travel across an
ethernet or coaxial cable, a data packet needs to be translated into a stream of
0s and 1s. The stream of 0s and 1s are sent across the physical wiring and
cables, received, and then passed on to higher levels of the OSI model.
Operations at the network layer
Functions at the network layer organize the addressing and delivery of data
packets across the network from the host device to the destination device. The
destination IP address is contained within the header of each data packet. This
address will be stored for future routing purposes in routing tables along the
packet’s path to its destination.
A data packet is also referred to as an IP packet for TCP connections or a
datagram for UDP connections. A router uses the IP address to route packets
from network to network based on information contained in the IP header of a
data packet. Header information communicates more than just the address of
the destination. It also includes information such as the source IP address, the
size of the packet, and which protocol will be used for the data portion of the
packet.

Format of an IPv4 packet


 The IPv4 header format is determined by the IPv4 protocol and includes the
IP routing information that devices use to direct the packet. The size of the
IPv4 header ranges from 20 to 60 bytes. The first 20 bytes are a fixed set
of information containing data such as the source and destination IP
address, header length, and total length of the packet. The last set of
bytes can range from 0 to 40 and consists of the options field.
 The length of the data section of an IPv4 packet can vary greatly in size.
However, the maximum possible size of an IPv4 packet is 65,535 bytes. It
contains the message being transferred over the internet, like website
information or email text.

There are 13 fields within the header of an IPv4 packet:


 Version (VER): This 4-bit component tells receiving devices what protocol the packet is using, such as IPv4.
 IP Header Length (HLEN or IHL): HLEN is the packet’s header length.
This value indicates where the packet header ends and the data segment
begins.
 Type of Service (ToS): Routers prioritize packets for delivery to maintain
quality of service on the network. The ToS field provides the router with
this information.
 Total Length: This field communicates the total length of the entire IP
packet, including the header and data. The maximum size of an IPv4
packet is 65,535 bytes.
 Identification: IPv4 packets can be up to 65,535 bytes, but most
networks have a smaller limit. In these cases, the packets are divided, or
fragmented, into smaller IP packets. The identification field provides a
unique identifier for all the fragments of the original IP packet so that they
can be reassembled once they reach their destination.
 Flags: This field provides the routing device with more information about
whether the original packet has been fragmented and if there are more
fragments in transit.
 Fragmentation Offset: The fragment offset field tells routing devices
where in the original packet the fragment belongs.
 Time to Live (TTL): TTL prevents data packets from being forwarded by
routers indefinitely. It contains a counter that is set by the source. The
counter is decremented by one as it passes through each router along its
path. When the TTL counter reaches zero, the router currently holding the
packet will discard the packet and return an ICMP Time Exceeded error
message to the sender.
 Protocol: The protocol field tells the receiving device which protocol will
be used for the data portion of the packet.
 Header Checksum: The header checksum field contains a checksum that
can be used to detect corruption of the IP header in transit. Corrupted
packets are discarded.
 Source IP Address: The source IP address is the IPv4 address of the
sending device.
 Destination IP Address: The destination IP address is the IPv4 address
of the destination device.
 Options: The options field allows for security options to be applied to the
packet if the HLEN value is greater than five. The field communicates
these options to the routing devices.
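Because the first 20 bytes of the header are a fixed layout, they can be unpacked directly. The following Python sketch is a simplified parser based on the field descriptions above, not a complete implementation (it ignores the options field):

import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # Unpack the 20-byte fixed portion of the header in network byte order.
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,               # VER: the high 4 bits
        "header_bytes": (ver_ihl & 0x0F) * 4,  # HLEN is counted in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                     # e.g., 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }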

Difference between IPv4 and IPv6


Some of the key differences between IPv4 and IPv6 include the length and the
format of the addresses. IPv4 addresses are made up of four decimal numbers
separated by periods, each number ranging from 0 to 255. Together the numbers
span 4 bytes, and allow for up to 4.3 billion possible addresses. An example of an
IPv4 address would be: 198.51.100.0. IPv6 addresses are made of eight
hexadecimal numbers separated by colons, each number consisting of up to four
hexadecimal digits. Together, all numbers span 16 bytes, and allow for up to 340
undecillion addresses (340 followed by 36 zeros). An example of an IPv6 address
would be: 2002:0db8:0000:0000:0000:ff21:0023:1234.
Note: to represent one or more consecutive sets of all zeros, you can replace
the zeros with a double colon "::", so the above IPv6 address would be
"2002:0db8::ff21:0023:1234."
There are also some differences in the layout of an IPv6 packet header. The IPv6 header format is much simpler than IPv4's. For example, the IPv4 header includes the IHL, Identification, and Flags fields, whereas the IPv6 header does not. The IPv6 header introduces the Flow Label field, which identifies a packet as requiring special handling by other IPv6 routers.

There are some important security differences between IPv4 and IPv6. IPv6 offers
more efficient routing and eliminates private address collisions that can occur on
IPv4 when two devices on the same network are attempting to use the same
address.

Ports
When data packets are sent and received across a network, they are assigned a
port.
Within the operating system of a network device, a port is a software-based
location that organizes the sending and receiving of data between devices on a
network.
Ports divide network traffic into segments based on the service they will
perform between two devices.
The computers sending and receiving these data segments know how
to prioritize and process these segments based on their port number. Data
packets include instructions that tell the receiving device what to do with the
information. These instructions come in the form of a port number.
Port numbers allow computers to split the network traffic and prioritize the
operations they will perform with the data.
Some common port numbers are: port 25, which is used for e-mail, port 443,
which is used for secure internet communication, and port 20, for large file
transfers.
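Python's socket module can look up the service registered for a well-known port, which is a quick way to confirm assignments like these (the names come from the system's services database, such as /etc/services on Linux):

import socket

for port in (20, 25, 443):
    print(port, socket.getservbyport(port, "tcp"))
# 20 ftp-data
# 25 smtp
# 443 https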

Overview of network protocols


A network protocol is a set of rules used by two or more devices on a network
to describe the order of delivery and the structure of data. Network protocols
serve as instructions that come with the information in the data packet. These
instructions tell the receiving device what to do with the data.
Some protocols have vulnerabilities that malicious actors exploit. For example, a
nefarious actor could use the Domain Name System (DNS) protocol, which
resolves web addresses to IP addresses, to divert traffic from a legitimate
website to a malicious website containing malware.
Three categories of network protocols
Network protocols can be divided into three main categories: communication
protocols, management protocols, and security protocols. There are dozens of
different network protocols, but you don’t need to memorize all of them for an
entry-level security analyst role. However, it’s important for you to know the
ones listed in this reading.
Communication protocols
Communication protocols govern the exchange of information in network
transmission. They dictate how the data is transmitted between devices and the
timing of the communication. They also include methods to recover data lost in
transit. Here are a few of them.
 Transmission Control Protocol (TCP) is an internet communication
protocol that allows two devices to form a connection and stream data.
TCP uses a three-way handshake process. First, the device sends a
synchronize (SYN) request to a server. Then the server responds with a
SYN/ACK packet to acknowledge receipt of the device's request. Once the
server receives the final ACK packet from the device, a TCP connection is
established. In the TCP/IP model, TCP occurs at the transport layer.
 User Datagram Protocol (UDP) is a connectionless protocol that does
not establish a connection between devices before a transmission. This
makes it less reliable than TCP. But it also means that it works well for
transmissions that need to get to their destination quickly. For example,
one use of UDP is for sending DNS requests to local DNS servers. In the
TCP/IP model, UDP occurs at the transport layer.
 Hypertext Transfer Protocol (HTTP) is an application layer protocol
that provides a method of communication between clients and website
servers. HTTP uses port 80. HTTP is considered insecure, so it is being
replaced on most websites by a secure version, called HTTPS, which uses
encryption from SSL/TLS for communication. However, there are still many
websites that use the insecure HTTP protocol. In the TCP/IP model, HTTP
occurs at the application layer.
 Domain Name System (DNS) is a protocol that translates internet
domain names into IP addresses. When a client computer wishes to access
a website domain using their internet browser, a query is sent to a
dedicated DNS server. The DNS server then looks up the IP address that
corresponds to the website domain. DNS normally uses UDP on port 53.
However, if the DNS reply to a request is large, it will switch to using the
TCP protocol. In the TCP/IP model, DNS occurs at the application layer.
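A DNS lookup takes one line of Python. The call below hands the query to the system's resolver, which typically sends it to the local DNS server over UDP port 53 as described above (example.com is a placeholder domain):

import socket

# Resolve a domain name to an IPv4 address via the system's DNS resolver.
print(socket.gethostbyname("example.com"))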
Management Protocols
The next category of network protocols is management protocols. Management
protocols are used for monitoring and managing activity on a network. They
include protocols for error reporting and optimizing performance on the network.
 Simple Network Management Protocol (SNMP) is a network protocol
used for monitoring and managing devices on a network. SNMP can reset
a password on a network device or change its baseline configuration. It
can also send requests to network devices for a report on how much of the
network’s bandwidth is being used up. In the TCP/IP model, SNMP occurs
at the application layer.
 Internet Control Message Protocol (ICMP) is an internet protocol used
by devices to tell each other about data transmission errors across the
network. ICMP is used by a receiving device to send a report to the
sending device about the data transmission. ICMP is commonly used as a
quick way to troubleshoot network connectivity and latency by issuing the
“ping” command on a Linux operating system. In the TCP/IP model, ICMP
occurs at the internet layer.
Security Protocols
Security protocols are network protocols that ensure that data is sent and
received securely across a network. Security protocols use encryption algorithms
to protect data in transit. Below are some common security protocols.
 Hypertext Transfer Protocol Secure (HTTPS) is a network protocol
that provides a secure method of communication between clients and
website servers. HTTPS is a secure version of HTTP that uses secure
sockets layer/transport layer security (SSL/TLS) encryption on all
transmissions so that malicious actors cannot read the information
contained. HTTPS uses port 443. In the TCP/IP model, HTTPS occurs at the
application layer.
 Secure File Transfer Protocol (SFTP) is a secure protocol used to
transfer files from one device to another over a network. SFTP uses secure
shell (SSH), typically through TCP port 22. SSH uses Advanced Encryption
Standard (AES) and other types of encryption to ensure that unintended
recipients cannot intercept the transmissions. In the TCP/IP model, SFTP
occurs at the application layer. SFTP is often used with cloud storage: when a user uploads or downloads a file from cloud storage, the file can be transferred using the SFTP protocol.

Additional network protocols


Network Address Translation
The devices on your local home or office network each have a private IP address
that they use to communicate directly with each other. However, in order for the
devices with private IP addresses to communicate with the public internet, they
need to have a single public IP address that represents all devices on the LAN to
the public. For outgoing messages, the router can replace a private source IP
address with its public IP address and perform the reverse operation for
responses. This process is known as Network Address Translation (NAT) and it
generally requires a router or firewall to be specifically configured to perform
NAT. NAT is a part of layer 2 (internet layer) and layer 3 (transport layer) of the
TCP/IP model.
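A toy sketch of the translation-table idea behind NAT, with an invented table format and port range, might look like this in Python:

# Invented NAT table: (private_ip, private_port) -> public source port.
nat_table = {}
next_public_port = 40000

def translate_outgoing(private_ip: str, private_port: int) -> int:
    # Map an internal socket to a public port, reusing existing mappings
    # so that replies can be routed back to the originating device.
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return nat_table[key]

print(translate_outgoing("192.168.0.10", 51515))  # 40000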

Dynamic Host Configuration Protocol


Dynamic Host Configuration Protocol (DHCP) is in the management family of
network protocols. DHCP is an application layer protocol used on a network to
configure devices. It works with the router to assign a unique IP address to each
device and provide the addresses of the appropriate DNS server and default
gateway for each device. DHCP servers operate on UDP port 67 while DHCP
clients operate on UDP port 68.
Address Resolution Protocol
By now, you are familiar with IP and MAC addresses. You’ve learned that each
device on a network has a public IP address, a private IP address, and a MAC
address that identify it on the network. A device’s IP address may change over
time, but its MAC address is permanent because it is unique to a device's
network interface card. The MAC address is used to communicate with devices
within the same network, but sometimes, the MAC address is unknown. This is
why the Address Resolution Protocol (ARP) is needed. ARP is mainly a network
access layer protocol in the TCP/IP model used to translate the IP addresses that
are found in data packets into the MAC address of the hardware device.
Each device on the network performs ARP and keeps track of matching IP and
MAC addresses in an ARP cache. ARP does not have a specific port number since it is a layer 2 (data link) protocol; port numbers are associated with the transport layer and above.
Telnet
Telnet is an application layer protocol that is used to connect with a remote
system. Telnet sends all information in clear text. It uses command line prompts
to control another device similar to secure shell (SSH), but Telnet is not as secure
as SSH. Telnet can be used to connect to local or remote devices and uses TCP
port 23.
Secure shell
Secure shell protocol (SSH) is used to create a secure connection with a remote
system. This application layer protocol provides an alternative for secure
authentication and encrypted communication. SSH operates over TCP port 22
and is a replacement for less secure protocols, such as Telnet.
Post office protocol
Post office protocol (POP) is an application layer (layer 4 of the TCP/IP model)
protocol used to manage and retrieve email from a mail server. POP3 is the most
commonly used version of POP. Many organizations have a dedicated mail server
on the network that handles incoming and outgoing mail for users on the
network. User devices will send requests to the remote mail server and download
email messages locally. If you have ever refreshed your email application and
had new emails populate in your inbox, you are experiencing POP and internet
message access protocol (IMAP) in action. Unencrypted, plaintext authentication
uses TCP/UDP port 110 and encrypted emails use Secure Sockets Layer/Transport
Layer Security (SSL/TLS) over TCP/UDP port 995. When using POP, mail has to
finish downloading on a local device before it can be read. After downloading, the
mail may or may not be deleted from the mail server, so it does not guarantee
that a user can sync the same email across multiple devices.
Internet Message Access Protocol (IMAP)
IMAP is used for incoming email. It downloads the headers of emails and the
message content. The content also remains on the email server, which allows
users to access their email from multiple devices. IMAP uses TCP port 143 for
unencrypted email and TCP port 993 over the TLS protocol. Using IMAP allows
users to partially read email before it is finished downloading. Since the mail is
kept on the mail server, it allows a user to sync emails across multiple devices.
Simple Mail Transfer Protocol
Simple Mail Transfer Protocol (SMTP) is used to transmit and route email from the
sender to the recipient’s address. SMTP works with Message Transfer Agent
(MTA) software, which searches DNS servers to resolve email addresses to IP
addresses, to ensure emails reach their intended destination. SMTP uses
TCP/UDP port 25 for unencrypted emails and TCP/UDP port 587 using TLS for
encrypted emails. TCP port 25 is often targeted by high-volume spam. SMTP
helps to filter out spam by regulating how many emails a source can send at a
time.
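For illustration, here is a minimal Python sketch of submitting mail over port 587 with TLS, as described above; the server name and credentials are hypothetical placeholders:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "analyst@example.com"
msg["To"] = "soc@example.com"
msg["Subject"] = "Port 587 test"
msg.set_content("Submitted over SMTP with STARTTLS.")

with smtplib.SMTP("mail.example.com", 587) as server:  # hypothetical server
    server.starttls()  # upgrade the port 587 connection to TLS
    server.login("analyst@example.com", "app-password")  # hypothetical credentials
    server.send_message(msg)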
Protocols and port numbers
Remember that port numbers are used by network devices to determine what
should be done with the information contained in each data packet once they
reach their destination. Firewalls can filter out unwanted traffic based on port
numbers. For example, an organization may configure a firewall to only allow
access to TCP port 995 (POP3) by IP addresses belonging to the organization.
As a security analyst, you will need to know about many of the protocols and port
numbers mentioned in this course. They may be used to determine your
technical knowledge in interviews, so it’s a good idea to memorize them. You will
also learn about new protocols on the job in a security position.

Introduction to wireless communication protocols


Wi-Fi refers to a set of standards that define communication for wireless LANs.
Wi-Fi standards and protocols are based on the 802.11 family of internet
communication standards determined by the Institute of Electrical and
Electronics Engineers (IEEE).
Wi-Fi communications are secured by wireless networking protocols. Wireless
security protocols have evolved over the years.
Wired Equivalent Privacy
Wired equivalent privacy (WEP) is a wireless security protocol designed to
provide users with the same level of privacy on wireless network connections as
they have on wired network connections. WEP was developed in 1999 and is the
oldest of the wireless security standards.
Wi-Fi Protected Access
Wi-Fi Protected Access (WPA) was developed in 2003 to improve upon WEP,
address the security issues that it presented, and replace it. WPA was always
intended to be a transitional measure so backwards compatibility could be
established with older hardware.
The flaws with WEP were in the protocol itself and how the encryption was used.
WPA addressed this weakness by using a protocol called Temporal Key Integrity
Protocol (TKIP). WPA’s encryption algorithm uses larger secret keys than WEP’s,
making it more difficult to guess the key by trial and error.
WPA also includes a message integrity check that includes a message
authentication tag with each transmission. If a malicious actor attempts to alter
the transmission in any way or resend at another time, WPA’s message integrity
check will identify the attack and reject the transmission.
Despite the security improvements of WPA, it still has vulnerabilities. Malicious
actors can use a key reinstallation attack (or KRACK attack) to decrypt
transmissions using WPA. Attackers can insert themselves in the WPA
authentication handshake process and insert a new encryption key instead of the
dynamic one assigned by WPA. If they set the new key to all zeros, it is as if the
transmission is not encrypted at all.
Because of this significant vulnerability, WPA was replaced with an updated
version of the protocol called WPA2.
WPA2 & WPA3
WPA2
The second version of Wi-Fi Protected Access—known as WPA2—was released in
2004. WPA2 improves upon WPA by using the Advanced Encryption Standard
(AES). WPA2 also improves upon WPA’s use of TKIP. WPA2 uses the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), which provides encapsulation and ensures message authentication and integrity.
Because of the strength of WPA2, it is considered the security standard for all Wi-
Fi transmissions today. WPA2, like its predecessor, is vulnerable to KRACK
attacks. This led to the development of WPA3 in 2018.
Personal
WPA2 personal mode is best suited for home networks for a variety of reasons. It is easy to implement, and initial setup takes less time than for the enterprise version. The global passphrase for WPA2 personal mode needs to be applied to each individual computer and access point in a network. This makes it ideal for home networks, but unmanageable for organizations.
Enterprise
WPA2 enterprise mode works best for business applications. It provides the
necessary security for wireless networks in business settings. The initial setup is
more complicated than WPA2 personal mode, but enterprise mode offers
individualized and centralized control over the Wi-Fi access to a business
network. This means that network administrators can grant or remove user
access to a network at any time. Users never have access to encryption keys, which prevents potential attackers from recovering network keys from individual computers.
WPA3
WPA3 is a secure Wi-Fi protocol and is growing in usage as more WPA3
compatible devices are released. These are the key differences between WPA2
and WPA3:
 WPA3 addresses the authentication handshake vulnerability to KRACK
attacks, which is present in WPA2.
 WPA3 uses Simultaneous Authentication of Equals (SAE), a password-
authenticated, cipher-key-sharing agreement. This prevents attackers from
downloading data from wireless network connections to their systems to
attempt to decode it.
 WPA3 has increased encryption to make passwords more secure by using
128-bit encryption, with WPA3-Enterprise mode offering optional 192-bit
encryption.

Subnetting and CIDR


Overview of subnetting
Subnetting is the subdivision of a network into logical groups called subnets. It
works like a network inside a network. Subnetting divides up a network address
range into smaller subnets within the network. These smaller subnets form based
on the IP addresses and network mask of the devices on the network.
This makes the network more efficient and can also be used to create security zones. If devices on the same subnet communicate with each other, the switch keeps their transmissions within the subnet, improving the speed and efficiency of the communications.
Classless Inter-Domain Routing notation for subnetting
Classless Inter-Domain Routing (CIDR) is a method of assigning subnet masks to
IP addresses to create a subnet. Classless addressing replaces classful
addressing. Classful addressing was used in the 1980s as a system of grouping IP
addresses into classes (Class A to Class E). Each class included a limited number
of IP addresses, which were depleted as the number of devices connecting to the
internet outgrew the classful range in the 1990s.
CIDR allows cybersecurity professionals to segment classful networks into
smaller chunks. CIDR IP addresses are formatted like IPv4 addresses, but they include a slash ("/") followed by a number at the end of the address. This extra number is called the IP network prefix. For example, a regular IPv4 address uses the format 198.51.100.0, whereas a CIDR IP address would be 198.51.100.0/24. This CIDR address encompasses all IP addresses between 198.51.100.0 and 198.51.100.255.
The system of CIDR addressing reduces the number of entries in routing tables
and provides more available IP addresses within networks.
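Python's ipaddress module makes these calculations concrete. The sketch below confirms the range of the /24 example above and subdivides it into smaller subnets:

import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.num_addresses)      # 256 addresses in the /24
print(net.network_address)    # 198.51.100.0
print(net.broadcast_address)  # 198.51.100.255

# Subdivide the /24 into four /26 subnets of 64 addresses each.
for subnet in net.subnets(new_prefix=26):
    print(subnet)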
Security benefits of subnetting
Subnetting allows network professionals and analysts to create a network within
their own network without requesting another network IP address from their
internet service provider. This process uses network bandwidth more efficiently
and improves network performance. Subnetting is one component of creating
isolated subnetworks through physical isolation, routing configuration, and
firewalls.
Common network protocols
There are three main categories of network protocols: communication protocols,
management protocols, and security protocols.
1. Communication protocols are used to establish connections between
servers. Examples include TCP, UDP, and Simple Mail Transfer Protocol
(SMTP), which provides a framework for email communication.
2. Management protocols are used to troubleshoot network issues. One
example is the Internet Control Message Protocol (ICMP).
3. Security protocols provide encryption for data in transit. Examples include
IPSec and SSL/TLS.
Some other commonly used protocols are:
 HyperText Transfer Protocol (HTTP). HTTP is an application layer
communication protocol that allows the browser and the web server to
communicate with one another.
 Domain Name System (DNS). DNS is an application layer protocol that
translates, or maps, host names to IP addresses (see the example after this list).
 Address Resolution Protocol (ARP). ARP is a network layer communication
protocol that maps IP addresses to the physical MAC addresses of machines
on the local area network.
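To observe the DNS mapping described above, Python’s standard socket library
can perform a hostname-to-address lookup. A minimal example, where
example.com is simply a placeholder domain:

import socket

# Ask the system's DNS resolver to map a hostname to an IPv4 address,
# the same translation the DNS protocol performs behind the scenes.
hostname = "example.com"
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")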
Wi-Fi
This section of the course also introduced various wireless security protocols,
including WEP, WPA, WPA2, and WPA3. WPA3 encrypts traffic with the Advanced
Encryption Standard (AES) cipher as it travels from your device to the wireless
access point. WPA2 and WPA3 offer two modes: personal and enterprise.
Network security tools and practices
Firewalls
Previously, you learned that firewalls are network virtual appliances (NVAs) or
hardware devices that inspect and can filter network traffic before it’s permitted
to enter the private network. Traditional firewalls are configured with rules that
tell them which types of data packets are allowed, based on the port number and IP
address of the data packet.
There are two main categories of firewalls:
 Stateless: A class of firewall that operates based on predefined rules and
does not keep track of information from data packets.
 Stateful: A class of firewall that keeps track of information passing
through it and proactively filters out threats. Unlike stateless firewalls,
which require rules to be configured in two directions, a stateful firewall
only requires a rule in one direction. This is because it uses a "state table"
to track connections, so it can match return traffic to an existing session.
(A minimal sketch contrasting the two approaches follows this list.)
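To make the rule-based model concrete, here is a minimal, hypothetical Python
sketch of both approaches. The addresses, ports, and rules are invented for
illustration and are not drawn from any real firewall product.

# Stateless filtering: every packet is checked against static rules only.
ALLOW_RULES = {("203.0.113.7", 443), ("203.0.113.7", 80)}  # (IP, port) pairs

def stateless_allow(dst_ip: str, dst_port: int) -> bool:
    return (dst_ip, dst_port) in ALLOW_RULES

# Stateful filtering: outbound connections are recorded in a state table,
# so matching return traffic is allowed without a second, inbound rule.
state_table = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def stateful_allow_inbound(src_ip, src_port, dst_ip, dst_port):
    # Inbound traffic is allowed only if it answers a recorded session.
    return (dst_ip, dst_port, src_ip, src_port) in state_table

record_outbound("10.0.0.5", 52444, "203.0.113.7", 443)
print(stateful_allow_inbound("203.0.113.7", 443, "10.0.0.5", 52444))    # True
print(stateful_allow_inbound("198.51.100.99", 443, "10.0.0.5", 52444))  # False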
Next-generation firewalls (NGFWs) are the most technologically advanced firewall
protection. They exceed the security offered by stateful firewalls because they
include deep packet inspection (a kind of packet sniffing that examines data
packets and takes actions if threats exist) and intrusion prevention features that
detect security threats and notify firewall administrators. NGFWs can inspect
traffic at the application layer of the TCP/IP model and are typically application
aware. Unlike traditional firewalls that block traffic based on IP address and
ports, NGFW rules can be configured to block or allow traffic based on the
application. Some NGFWs have additional features like malware sandboxing,
network antivirus, and URL and DNS filtering.
Proxy servers
A proxy server is another way to add security to your private network. Proxy
servers utilize network address translation (NAT) to serve as a barrier between
clients on the network and external threats. Forward proxies handle queries from
internal clients when they access resources external to the network. Reverse
proxies function in the opposite direction: they handle requests from external
systems to services on the internal network. Some proxy servers can also be
configured with rules, like a firewall. For example, you can create filters to block
websites identified as containing malware.
Virtual Private Networks (VPN)
A VPN is a service that encrypts data in transit and disguises your IP address.
VPNs use a process called encapsulation. Encapsulation wraps your unencrypted
data in an encrypted data packet, which allows your data to be sent across the
public network while remaining anonymous. Enterprises and other organizations
use VPNs to help protect communications from users’ devices to corporate
resources. Some of these resources include servers or virtual machines that host
business applications. Individuals also use VPNs to increase personal privacy.
VPNs protect user privacy by concealing personal information, including IP
addresses, from external servers. A reputable VPN also minimizes its own access
to user internet activity by using strong encryption and other security measures.
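Conceptually, encapsulation can be sketched as encrypting the original packet
and wrapping it in a new outer packet addressed only to the VPN server. In this
toy Python sketch, Fernet (from the third-party cryptography package, installed
with pip install cryptography) stands in for the tunnel cipher, and
vpn.example.net is a hypothetical gateway; real VPNs use dedicated tunneling
protocols such as IPSec or WireGuard, covered in the next reading.

from cryptography.fernet import Fernet  # third-party: pip install cryptography

tunnel_key = Fernet.generate_key()  # stands in for the key negotiated at setup
tunnel = Fernet(tunnel_key)

# Inner packet: the user's original traffic, with its real addressing details.
inner_packet = b"src=192.168.1.10 dst=203.0.113.80 payload=GET /index.html"

# Encapsulation: encrypt the entire inner packet, then wrap it in an outer
# packet addressed only to the VPN server, hiding the original details.
outer_packet = b"dst=vpn.example.net|" + tunnel.encrypt(inner_packet)

# The VPN server strips the outer header and decrypts the inner packet.
header, ciphertext = outer_packet.split(b"|", 1)
print(tunnel.decrypt(ciphertext))   # original packet recovered at the gateway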
Organizations are increasingly using a combination of VPN and SD-WAN
capabilities to secure their networks. A software-defined wide area network (SD-
WAN) is a virtual WAN service that allows organizations to securely connect users
to applications across multiple locations and over large geographical distances.
VPN protocols: Wireguard and IPSec
VPNs provide a server that acts as a gateway between a computer and the
internet. This server creates a path similar to a virtual tunnel that hides the
computer’s IP address and encrypts the data in transit to the internet. The main
purpose of a VPN is to create a secure connection between a computer and a
network. Additionally, a VPN allows trusted connections to be established on non-
trusted networks. VPN protocols determine how the secure network tunnel is
formed. Different VPN providers provide different VPN protocols.
This reading will cover the differences between remote access and site-to-site
VPNs, and two VPN protocols: WireGuard VPN and IPSec VPN.
Remote access and site-to-site VPNs
Individual users use remote access VPNs to establish a connection between a
personal device and a VPN server. Remote access VPNs encrypt data sent or
received through a personal device. The connection between the user and the
remote access VPN is established through the internet.
Enterprises use site-to-site VPNs largely to extend their network to other
networks and locations. This is particularly useful for organizations that have
many offices across the globe. IPSec is commonly used in site-to-site VPNs to
create an encrypted tunnel between the primary network and the remote
network. One disadvantage of site-to-site VPNs is how complex they can be to
configure and manage compared to remote access VPNs.
WireGuard VPN vs. IPSec VPN
WireGuard VPN
WireGuard is a high-speed VPN protocol with advanced encryption that protects
users when they are accessing the internet. It’s designed to be simple to set up
and maintain. WireGuard can be used for both site-to-site and client-server
connections. WireGuard is newer than IPSec, and it is popular in part because its
small codebase helps deliver faster download speeds. WireGuard is also open
source, which makes it easier for users to deploy and debug. This protocol is
useful for processes that require faster download speeds, such as streaming video
content or downloading large files.
IPSec VPN
IPSec is another VPN protocol that may be used to set up VPNs. Most VPN
providers use IPSec to encrypt and authenticate data packets in order to
establish secure, encrypted connections. Since IPSec is one of the earlier VPN
protocols, many operating systems support IPSec-based VPNs.
Although IPSec and WireGuard are both VPN protocols, IPSec is older and more
complex than WireGuard. Some clients may prefer IPSec due to its longer history
of use, extensive security testing, and widespread adoption. However, others
may prefer WireGuard because of its potential for better performance and
simpler configuration.
Key Takeaways
A VPN protocol is similar to a network protocol: It’s a set of rules or instructions
that will determine how data moves between endpoints. There are two types of
VPNs: remote access and site-to-site. Remote access VPNs establish a connection
between a personal device and a VPN server and encrypt or decrypt data
exchanged with a personal device. Enterprises use site-to-site VPNs largely to
extend their network to different locations and networks. IPSec can be used to
create site-to-site connections and WireGuard can be used for both site-to-site
and remote access connections.
How intrusions compromise your system
Network interception attacks
Network interception attacks work by intercepting network traffic and stealing
valuable information or interfering with the transmission in some way.
Malicious actors can use hardware or software tools to capture and inspect data
in transit. This is referred to as packet sniffing. In addition to seeing information
that they are not entitled to, malicious actors can also intercept network traffic
and alter it. These attacks can cause damage to an organization’s network by
inserting malicious code modifications or altering the message and interrupting
network operations. For example, an attacker can intercept a bank transfer and
change the account receiving the funds to one that the attacker controls.
Later in this course you will learn more about malicious packet sniffing, and other
types of network interception attacks: on-path attacks and replay attacks.
Backdoor attacks
In cybersecurity, backdoors are weaknesses intentionally left by programmers or
system and network administrators that bypass normal access control
mechanisms. Backdoors are intended to help programmers conduct
troubleshooting or administrative tasks. However, backdoors can also be
installed by attackers after they’ve compromised an organization to ensure they
have persistent access.
Once the hacker has entered an insecure network through a backdoor, they can
cause extensive damage: installing malware, performing a denial of service
(DoS) attack, stealing private information, or changing other security settings
that leave the system vulnerable to other attacks. A DoS attack is an attack
that targets a network or server and floods it with network traffic.
Possible impacts on an organization
As you’ve learned already, network attacks can have a significant negative
impact on an organization. Let’s examine some potential consequences.
 Financial: When a system is taken offline with a DoS attack or another
tactic, the attack prevents a company from performing tasks that generate
revenue. Depending on the size of an organization, interrupted operations
can cost millions of dollars. Reparation costs to rebuild software
infrastructure and to pay large sums associated with potential ransomware
can be financially difficult. In addition, if a malicious actor gets access to
the personal information of the company’s clients or customers, the
company may face heavy litigation and settlement costs if customers seek
legal recourse.
 Reputation: Attacks can also have a negative impact on the reputation of
an organization. If it becomes public knowledge that a company has
experienced a cyber attack, the public may become concerned about the
security practices of the organization. They may stop trusting the
company with their personal information and choose a competitor to fulfill
their needs.
 Public safety: If an attack occurs on a government network, this can
potentially impact the safety and welfare of the citizens of a country. In
recent years, defense agencies across the globe have been investing heavily in
combating cyber warfare tactics. If a malicious actor gained access to a
power grid, a public water system, or even a military defense
communication system, the public could face physical harm due to a
network intrusion attack.
Read tcpdump logs
A network protocol analyzer, sometimes called a packet sniffer or a packet
analyzer, is a tool designed to capture and analyze data traffic within a network.
They are commonly used as investigative tools to monitor networks and identify
suspicious activity. There are a wide variety of network protocol analyzers
available, but some of the most common analyzers include:
 SolarWinds NetFlow Traffic Analyzer
 ManageEngine OpManager
 Azure Network Watcher
 Wireshark
 tcpdump
This reading will focus exclusively on tcpdump, though you can apply what you
learn here to many of the other network protocol analyzers you'll use as a
cybersecurity analyst to defend against any network intrusions. In an upcoming
activity, you’ll review a tcpdump data traffic log and identify a DoS attack to
practice these skills.
tcpdump
tcpdump is a command-line network protocol analyzer. It is popular,
lightweight (meaning it uses little memory and has low CPU usage), and uses the
open-source libpcap library. tcpdump is text based, meaning all commands in
tcpdump are executed in the terminal. It is preinstalled on many Linux
distributions and can also be installed on other Unix-based operating systems,
such as macOS®.
tcpdump provides a brief packet analysis and converts key information about
network traffic into formats easily read by humans. It prints information about
each packet directly into your terminal. tcpdump also displays the source IP
address, destination IP address, and the port numbers being used in the
communications.
Interpreting output
tcpdump prints the output of the command as the sniffed packets in the
command line, and optionally to a log file, after a command is executed. The
output of a packet capture contains many pieces of important information about
the network traffic.
Some information you receive from a packet capture includes:
 Timestamp: The output begins with the timestamp, formatted as hours,
minutes, seconds, and fractions of a second.
 Source IP: The packet’s origin is provided by its source IP address.
 Source port: This port number is where the packet originated.
 Destination IP: The destination IP address is where the packet is being
transmitted to.
 Destination port: This port number is where the packet is being
transmitted to.
Note: By default, tcpdump will attempt to resolve host addresses to hostnames.
It'll also replace port numbers with commonly associated services that use these
ports.
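As an illustration of these fields, the following sketch parses one line of
typical tcpdump output with Python’s standard re module. The sample line is
invented for demonstration, and real tcpdump output varies with the protocol
and options used:

import re

# A made-up line in the common tcpdump format:
# timestamp, source IP.port > destination IP.port
line = "13:24:32.192571 IP 10.0.0.5.52444 > 198.51.100.12.80: Flags [S]"

pattern = re.compile(
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) IP "
    r"(?P<src_ip>[\d.]+)\.(?P<src_port>\d+) > "
    r"(?P<dst_ip>[\d.]+)\.(?P<dst_port>\d+)"
)

match = pattern.search(line)
if match:
    print(match.group("time"))                              # 13:24:32.192571
    print(match.group("src_ip"), match.group("src_port"))   # 10.0.0.5 52444
    print(match.group("dst_ip"), match.group("dst_port"))   # 198.51.100.12 80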
Common uses
tcpdump and other network protocol analyzers are commonly used to capture
and view network communications and to collect statistics about the network,
such as troubleshooting network performance issues. They can also be used to:
 Establish a baseline for network traffic patterns and network utilization
metrics.
 Detect and identify malicious traffic.
 Create customized alerts to send the right notifications when network
issues or security threats arise.
 Locate unauthorized instant messaging (IM) traffic or wireless access
points.
However, attackers can also use network protocol analyzers maliciously to gain
information about a specific network. For example, attackers can capture data
packets that contain sensitive information, such as account usernames and
passwords. As a cybersecurity analyst, it’s important to understand the purpose
and uses of network protocol analyzers.
Key takeaways
Network protocol analyzers, like tcpdump, are common tools that can be used to
monitor network traffic patterns and investigate suspicious activity. tcpdump is a
command-line network protocol analyzer that is compatible with Linux/Unix and
macOS®. When you run a tcpdump command, the tool will output packet routing
information, like the timestamp, source IP address and port number, and the
destination IP address and port number. Unfortunately, attackers can also use
network protocol analyzers to capture data packets that contain sensitive
information, such as account usernames and passwords.
Overview of interception tactics
In the previous course items, you learned how packet sniffing and IP spoofing are
used in network attacks. Because these attacks intercept data packets as they
travel across the network, they are called interception attacks.
This reading will introduce you to some specific attacks that use packet sniffing
and IP spoofing. You will learn how hackers use these tactics and how security
analysts can counter the threat of interception attacks.
A closer review of packet sniffing
As you learned in a previous video, packet sniffing is the practice of capturing
and inspecting data packets across a network. On a private network, data
packets are directed to the matching destination device on the network.
The device’s Network Interface Card (NIC) is a piece of hardware that connects
the device to a network. The NIC reads the data transmission, and if it contains
the device’s MAC address, it accepts the packet and sends it to the device to
process the information based on the protocol. This occurs in all standard
network operations. However, a NIC can be set to promiscuous mode, which
means that it accepts all traffic on the network, even packets that aren’t
addressed to the NIC’s device. You’ll learn more about NICs later in the program.
Malicious actors might use software like Wireshark to capture the data on a
private network and store it for later use. They can then use the personal
information to their own advantage. Alternatively, they might use the IP and MAC
addresses of authorized users of the private network to perform IP spoofing.
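Security analysts can reproduce this kind of capture, defensively and with
authorization, on their own lab networks. The sketch below uses Scapy, a
widely used third-party Python packet-manipulation library (installed with
pip install scapy). Capturing packets typically requires administrator
privileges:

from scapy.all import sniff  # third-party: pip install scapy

def show(packet):
    # Print a one-line summary of each captured packet: protocol, source,
    # and destination.
    print(packet.summary())

# Capture five packets from the default interface (requires root/admin).
sniff(count=5, prn=show)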
A closer review of IP spoofing
After a malicious actor has sniffed packets on the network, they can impersonate
the IP and MAC addresses of authorized devices to perform an IP spoofing
attack. Firewalls can help prevent IP spoofing attacks when they are configured
to reject unauthorized IP packets and suspicious traffic. Next, you’ll examine a few
common IP spoofing attacks that are important to be familiar with as a security
analyst.
On-path attack
An on-path attack happens when a hacker intercepts the communication
between two devices or servers that have a trusted relationship. The
transmission between these two trusted network devices could contain valuable
information like usernames and passwords that the malicious actor can collect.
An on-path attack is sometimes referred to as a meddler-in-the-middle attack
because the hacker is hiding in the middle of communications between two
trusted parties.
Or, the intercepted transmission might contain a DNS lookup.
You’ll recall from an earlier video that a DNS server translates website domain
names into IP addresses. If a malicious actor intercepts a transmission containing
a DNS lookup, they could spoof the DNS response from the server and redirect a
domain name to a different IP address, perhaps one that contains malicious code
or other threats. The most important way to protect against an on-path attack is
to encrypt your data in transit, for example, by using TLS (Transport Layer Security).
Smurf attack
A smurf attack is a network attack that is performed when an attacker sniffs an
authorized user’s IP address and floods it with packets. Once the spoofed packet
reaches the broadcast address, it is sent to all of the devices and servers on the
network.
In a smurf attack, IP spoofing is combined with another denial of service (DoS)
technique to flood the network with unwanted traffic. For example, the spoofed
packet could include an Internet Control Message Protocol (ICMP) ping. As you
learned earlier, ICMP is used to troubleshoot a network. But if too many ICMP
messages are transmitted, the ICMP echo responses overwhelm the servers on
the network and they shut down. This creates a denial of service and can bring
an organization’s operations to a halt.
An important way to protect against a smurf attack is to use an advanced firewall
that can monitor any unusual traffic on the network. Most next generation
firewalls (NGFW) include features that detect network anomalies to ensure that
oversized broadcasts are detected before they have a chance to bring down the
network.
DoS attack
As you’ve learned, once the malicious actor has sniffed the network traffic, they
can impersonate an authorized user. A Denial of Service attack is a class of
attacks where the attacker prevents the compromised system from performing
legitimate activity or responding to legitimate traffic. Unlike IP spoofing,
however, the attacker will not receive a response from the targeted host.
Everything about the data packet is authorized including the IP address in the
header of the packet. In IP spoofing attacks, the malicious actor uses IP packets
containing fake IP addresses. The attackers keep sending IP packets containing
fake IP addresses until the network server crashes.
Pro Tip: Remember the principle of defense-in-depth. There isn’t one perfect
strategy for stopping each kind of attack. You can layer your defense by using
multiple strategies. In this case, using industry standard encryption will
strengthen your security and help you defend from DoS attacks on more than
one level.
Key takeaways
This reading covered several types of common IP spoofing attacks. You learned
about how packet sniffing is performed and how gathering information from
intercepting data transmissions can give malicious actors opportunities for IP
spoofing. Whether it is an on-path attack, IP spoofing attack, or a smurf attack,
analysts need to ensure that mitigation strategies are in place to limit the threat
and prevent security breaches.
Brute force attacks and OS hardening
In this reading, you’ll learn about brute force attacks. You’ll consider how
vulnerabilities can be assessed using virtual machines and sandboxes, and learn
ways to prevent brute force attacks using a combination of authentication
measures. Implementing various OS hardening tasks can help prevent brute
force attacks. An attacker can use a brute force attack to gain access and
compromise a network.
Usernames and passwords are among the most common and important security
controls in place today. They are used and enforced on everything that stores or
accesses sensitive or private information, like personal phones, computers, and
restricted applications within an organization. However, a major issue with
relying on login credentials as a critical line of defense is that they’re vulnerable
to being stolen and guessed by malicious actors.
Brute force attacks
A brute force attack is a trial-and-error process of discovering private
information. There are different types of brute force attacks that malicious actors
use to guess passwords, including:
 Simple brute force attacks. When attackers try to guess a user's login
credentials, it’s considered a simple brute force attack. They might do this
by entering any combination of usernames and passwords that they can
think of until they find the one that works.
 Dictionary attacks use a similar technique. In dictionary attacks, attackers
use a list of commonly used passwords and stolen credentials from
previous breaches to access a system. These are called “dictionary”
attacks because attackers originally used a list of words from the
dictionary to guess the passwords, before complex password rules became
a common security practice.
Using brute force to access a system can be a tedious and time-consuming
process, especially when it’s done manually. Attackers use a range of tools to
conduct their attacks.
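To see why these attacks succeed against weak, unsalted password storage,
consider this minimal Python sketch of an offline dictionary check. The
wordlist and the "leaked" hash are invented for demonstration; real attacks
use the same idea at much larger scale:

import hashlib

# An attacker with a leaked, fast, unsalted hash can test a wordlist offline.
wordlist = ["123456", "password", "letmein", "qwerty"]
leaked_hash = hashlib.sha256(b"letmein").hexdigest()

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == leaked_hash:
        print(f"Password recovered: {candidate}")
        break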
Assessing vulnerabilities
Before a brute force attack or other cybersecurity incident occurs, companies
can run a series of tests on their network or web applications to assess
vulnerabilities. Analysts can use virtual machines and sandboxes to test
suspicious files, check for vulnerabilities before an event occurs, or to simulate a
cybersecurity incident.
Virtual machines (VMs)
Virtual machines (VMs) are software versions of physical computers. VMs provide
an additional layer of security for an organization because they can be used to
run code in an isolated environment, preventing malicious code from affecting
the rest of the computer or system. VMs can also be deleted and replaced by a
pristine image after testing malware.
VMs are useful when investigating potentially infected machines or running
malware in a constrained environment. Using a VM may prevent damage to your
system in the event its tools are used improperly. VMs also give you the ability to
revert to a previous state. However, there are still some risks involved with VMs.
There’s still a small risk that a malicious program can escape virtualization and
access the host machine.
You can test and explore applications easily with VMs, and it’s easy to switch
between different VMs from your computer. This can also help in streamlining
many security tasks.
Sandbox environments
A sandbox is a type of testing environment that allows you to execute software
or programs separate from your network. They are commonly used for testing
patches, identifying and addressing bugs, or detecting cybersecurity
vulnerabilities. Sandboxes can also be used to evaluate suspicious software,
evaluate files containing malicious code, and simulate attack scenarios.
Sandboxes can be stand-alone physical computers that are not connected to a
network; however, it is often more time- and cost-effective to use software or
cloud-based virtual machines as sandbox environments. Note that some malware
authors know how to write code to detect if the malware is executed in a VM or
sandbox environment. Attackers can program their malware to behave as
harmless software when run inside these types of testing environments.
Prevention measures
Some common measures organizations use to prevent brute force attacks and
similar attacks from occurring include:
 Salting and hashing: Hashing converts information into a unique value
that can then be used to determine its integrity. It is a one-way function,
meaning the original text cannot feasibly be recovered from the hash value.
Salting adds random characters to hashed passwords. This increases the length
and complexity of hash values, making them more secure. (A short sketch of
salting and hashing follows this list.)
 Multi-factor authentication (MFA) and two-factor authentication
(2FA): MFA is a security measure which requires a user to verify their
identity in two or more ways to access a system or network. This
verification happens using a combination of authentication factors: a
username and password, fingerprints, facial recognition, or a one-time
password (OTP) sent to a phone number or email. 2FA is similar to MFA,
except it uses only two forms of verification.
 CAPTCHA and reCAPTCHA: CAPTCHA stands for Completely Automated
Public Turing test to tell Computers and Humans Apart. It asks users to
complete a simple test that proves they are human. This helps prevent
software from trying to brute force a password. reCAPTCHA is a free
CAPTCHA service from Google that helps protect websites from bots and
malicious software.
 Password policies: Organizations use password policies to standardize
good password practices throughout the business. Policies can include
guidelines on how complex a password should be, how often users need to
update passwords, whether passwords can be reused or not, and if there
are limits to how many times a user can attempt to log in before their
account is suspended.
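To make salting and hashing concrete, here is a minimal Python sketch using
the standard hashlib and secrets modules. The salt length and iteration count
are illustrative choices for this example, not recommendations from this
course:

import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt ensures identical passwords produce different hashes.
    salt = secrets.token_bytes(16)
    # PBKDF2 applies the hash many times, slowing offline guessing attacks.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False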
Key takeaways
Brute force attacks are a trial-and-error process of guessing passwords. Attacks
can be launched manually or through software tools. Methods include simple
brute force attacks and dictionary attacks. To protect against brute force attacks,
cybersecurity analysts can use sandboxes to test suspicious files, check for
vulnerabilities, or to simulate real attacks and virtual machines to conduct
vulnerability tests. Some common measures to prevent brute force attacks
include: hashing and salting, MFA and/or 2FA, CAPTCHA and reCAPTCHA, and
password policies.
Network security applications
This section of the course covers the topic of network hardening and monitoring.
Each device, tool, or security strategy put in place by security analysts further
protects—or hardens—the network until the network owner is satisfied with the
level of security. This approach of adding layers of security to a network is
referred to as defense in depth.
In this reading, you are going to learn about the role of four devices used to
secure a network—firewalls, intrusion detection systems, intrusion prevention
systems, and security incident and event management tools. Network security
professionals have the choice to use any or all of these devices and tools
depending on the level of security that they hope to achieve.
This reading will discuss the benefits of layered security. Each tool mentioned is
an additional layer of defense that can incrementally harden a network, starting
with the minimum level of security (provided by just a firewall), to the highest
level of security (provided by combining a firewall, an intrusion detection and
prevention device, and security event monitoring).
Firewall
So far in this course, you learned about stateless firewalls, stateful firewalls, and
next-generation firewalls (NGFWs), and the security advantages of each of them.
Most firewalls are similar in their basic functions. Firewalls allow or block traffic
based on a set of rules. As data packets enter a network, the packet header is
inspected and allowed or denied based on its port number. NGFWs are also able
to inspect packet payloads. Each system should have its own firewall, regardless
of the network firewall.
Intrusion Detection System
An intrusion detection system (IDS) is an application that monitors system
activity and alerts on possible intrusions. An IDS alerts administrators based on
the signature of malicious traffic.
The IDS is configured to detect known attacks. IDS systems often sniff data
packets as they move across the network and analyze them for the
characteristics of known attacks. Some IDS systems review not only for
signatures of known attacks, but also for anomalies that could be the sign of
malicious activity. When the IDS discovers an anomaly, it sends an alert to the
network administrator who can then investigate further.
The limitations of IDS systems are that they can only scan for known attacks or
obvious anomalies. New and sophisticated attacks might not be caught. The
other limitation is that the IDS doesn’t actually stop the incoming traffic if it
detects something awry. It’s up to the network administrator to catch the
malicious activity before it does anything damaging to the network.
When combined with a firewall, an IDS adds another layer of defense. The IDS is
placed behind the firewall, before traffic enters the LAN, which allows the IDS to
analyze data streams after network traffic that is disallowed by the firewall has
been filtered out. This reduces noise in IDS alerts, also referred to as
false positives.
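As a toy illustration of signature-based detection, the sketch below scans
packet payloads for byte patterns associated with known attacks. The
signatures and payload are invented for demonstration; production IDS tools
such as Snort or Suricata use far richer rule languages:

# Hypothetical byte-pattern signatures mapped to alert descriptions.
SIGNATURES = {
    b"/etc/passwd": "Possible path traversal attempt",
    b"' OR '1'='1": "Possible SQL injection attempt",
}

def inspect(payload: bytes) -> list[str]:
    # Return an alert for every known signature found in the payload.
    return [alert for sig, alert in SIGNATURES.items() if sig in payload]

packet_payload = b"GET /download?file=../../etc/passwd HTTP/1.1"
for alert in inspect(packet_payload):
    print("ALERT:", alert)  # ALERT: Possible path traversal attempt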
Intrusion Prevention System
An intrusion prevention system (IPS) is an application that monitors system
activity for intrusive activity and takes action to stop the activity. It offers even
more protection than an IDS because it actively stops anomalies when they are
detected, unlike the IDS that simply reports the anomaly to a network
administrator.
An IPS searches for signatures of known attacks and data anomalies. An IPS
reports the anomaly to security analysts and blocks a specific sender or drops
network packets that seem suspect.
The IPS (like an IDS) sits behind the firewall in the network architecture. This
offers a high level of security because risky data streams are disrupted before
they even reach sensitive parts of the network. However, one potential limitation
is that it is inline: If it breaks, the connection between the private network and
the internet breaks. Another limitation of IPS is the possibility of false positives,
which can result in legitimate traffic getting dropped.
Full packet capture devices
Full packet capture devices can be incredibly useful for network administrators
and security professionals. These devices allow you to record and analyze all of
the data that is transmitted over your network. They also aid in investigating
alerts created by an IDS.
Security Information and Event Management
A security information and event management system (SIEM) is an
application that collects and analyzes log data to monitor critical activities in an
organization. SIEM tools work in real time to report suspicious activity in a
centralized dashboard. SIEM tools additionally analyze network log data sourced
from IDSs, IPSs, firewalls, VPNs, proxies, and DNS logs. SIEM tools are a way to
aggregate security event data so that it all appears in one place for security
analysts to analyze. This is referred to as a single pane of glass.
One example of a SIEM tool is Google Cloud’s Chronicle. Chronicle is a
cloud-native tool designed to retain, analyze, and search data.
Splunk is another common SIEM tool. Splunk offers different SIEM tool options:
Splunk Enterprise and Splunk Cloud. Both options include detailed dashboards
which help security professionals to review and analyze an organization's data.
There are also other similar SIEM tools available, and it's important for security
professionals to research the different tools to determine which one is most
beneficial to the organization.
A SIEM tool doesn’t replace the expertise of security analysts, or the network-
and system-hardening activities covered in this course; instead, it’s used in
combination with other security methods. Security analysts often work in a
Security Operations Center (SOC), where they can monitor the activity across the
network. They can then use their expertise and experience to determine how to
respond to the information on the dashboard and decide when the events meet
the criteria to be escalated to oversight.
The advantages and disadvantages of each device or tool are summarized below.

Firewall
 Advantage: Allows or blocks traffic based on a set of rules.
 Disadvantage: Only able to filter packets based on information provided in
the header of the packets.

Intrusion Detection System (IDS)
 Advantage: Detects and alerts admins about possible intrusions, attacks,
and other malicious traffic.
 Disadvantage: Can only scan for known attacks or obvious anomalies; new
and sophisticated attacks might not be caught. It doesn’t actually stop the
incoming traffic.

Intrusion Prevention System (IPS)
 Advantage: Monitors system activity for intrusions and anomalies and takes
action to stop them.
 Disadvantage: Is an inline appliance; if it fails, the connection between the
private network and the internet breaks. It might detect false positives and
block legitimate traffic.

Security Information and Event Management (SIEM)
 Advantage: Collects and analyzes log data from multiple network machines.
It aggregates security events for monitoring in a central dashboard.
 Disadvantage: Only reports on possible security issues. It does not take any
actions to stop or prevent suspicious events.

Key takeaways
Each of these devices or tools costs money to purchase, install, and maintain. An
organization might need to hire additional personnel to monitor the security
tools, as in the case of a SIEM. Decision-makers are tasked with selecting the
appropriate level of security based on cost and risk to the organization. You will
learn more about choosing levels of security later in the course.
Secure the cloud
Earlier in this course, you were introduced to cloud computing. Cloud
computing is a model for allowing convenient and on-demand network access
to a shared pool of configurable computing resources. These resources can be
configured and released with minimal management effort or interaction with the
service provider.
Just like any other IT infrastructure, a cloud infrastructure needs to be secured.
This reading will address some main security considerations that are unique to
the cloud and introduce you to the shared responsibility model used for security
in the cloud. Many organizations that use cloud resources and infrastructure
express concerns about the privacy of their data and resources. This concern is
addressed through cryptography and other additional security measures, which
will be discussed later in this course.
Cloud security considerations
Many organizations choose to use cloud services because of the ease of
deployment, speed of deployment, cost savings, and scalability of these options.
Cloud computing presents unique security challenges that cybersecurity analysts
need to be aware of.
Identity access management
Identity access management (IAM) is a collection of processes and
technologies that helps organizations manage digital identities in their
environment. This service also authorizes how users can use different cloud
resources. A common problem that organizations face when using the cloud is
the loose configuration of cloud user roles. An improperly configured user role
increases risk by allowing unauthorized users to have access to critical cloud
operations.
Configuration
The number of available cloud services adds complexity to the network. Each
service must be carefully configured to meet security and compliance
requirements. This presents a particular challenge when organizations perform
an initial migration into the cloud. When this change occurs on their network,
they must ensure that every process moved into the cloud has been configured
correctly. If network administrators and architects are not meticulous in correctly
configuring the organization’s cloud services, they could leave the network open
to compromise. Misconfigured cloud services are a common source of cloud
security issues.
Attack surface
Cloud service providers (CSPs) offer numerous applications and services for
organizations at a low cost.
Every service or application on a network carries its own set of risks and
vulnerabilities and increases an organization’s overall attack surface. An
increased attack surface must be compensated for with increased security
measures.
Cloud networks that utilize many services can introduce numerous entry points
into an organization’s network, and these entry points can be used to introduce
malware onto the network and pose other security vulnerabilities. However, if
the network is designed correctly, utilizing several services does not have to
introduce more entry points into an organization’s network design. It is
important to note that CSPs often default to more secure options, and their
infrastructure has undergone more scrutiny than a typical traditional
on-premises network.
Zero-day attacks
Zero-day attacks are an important security consideration for organizations using
cloud or traditional on-premise network solutions. A zero-day attack is an exploit
that was previously unknown. CSPs are more likely to know about a zero-day
attack occurring before a traditional IT organization does. CSPs have ways of
patching hypervisors and migrating workloads to other virtual machines. These
methods ensure the customers are not impacted by the attack. There are also
several tools available for patching at the operating system level that
organizations can use.
Visibility and tracking
Network administrators have access to every data packet crossing the network
with both on-premise and cloud networks. They can sniff and inspect data
packets to learn about network performance or to check for possible threats and
attacks.
This kind of visibility is also offered in the cloud through flow logs and tools, such
as packet mirroring. CSPs take responsibility for securing their infrastructure,
but they do not allow the organizations that use it to monitor traffic on the
CSP’s servers. Many CSPs offer strong security measures to protect their
infrastructure. Still, this situation might be a concern for organizations that are
accustomed to having full access to their network and operations. CSPs pay for
third-party audits to verify how secure a cloud network is and identify potential
vulnerabilities. The audits can help organizations identify whether any
vulnerabilities originate from on-premise infrastructure and if there are any
compliance lapses from their CSP.
Things change fast in the cloud
CSPs are large organizations that work hard to stay up-to-date with technology
advancements. For organizations that are used to controlling any adjustments
made to their network, keeping up with this pace of change can be a challenge.
Cloud service updates can affect security considerations for the
organizations using them. For example, connection configurations might need to
be changed based on the CSP’s updates.
Organizations that use CSPs usually have to update their IT processes. It is
possible for organizations to continue following established best practices for
changes, configurations, and other security considerations. However, an
organization might have to adopt a different approach in a way that aligns with
changes made by the CSP.
Cloud networking offers various options that might appear attractive to a small
company—options that they could never afford to build on their own premises.
However, it is important to consider that each service adds complexity to the
security profile of the organization, and they will need security personnel to
monitor all of the cloud services.
Shared responsibility model
A commonly accepted cloud security principle is the shared responsibility model.
The shared responsibility model states that the CSP must take responsibility
for security involving the cloud infrastructure, including physical data centers,
hypervisors, and host operating systems. The company using the cloud service is
responsible for the assets and processes that they store or operate in the cloud.
The shared responsibility model ensures that both the CSP and the users agree
about where their responsibility for security begins and ends. A problem occurs
when organizations assume that the CSP is taking care of security that they have
not taken responsibility for. One example of this is cloud applications and
configurations. The CSP takes responsibility for securing the cloud, but it is the
organization’s responsibility to ensure that services are configured properly
according to the security requirements of their organization.
Key takeaways
It is essential to know the security considerations that are unique to the cloud
and to understand the shared responsibility model for cloud security.
Organizations are responsible for correctly configuring and maintaining best
security practices for their cloud services. The shared responsibility model
ensures that both the CSP and users agree about what the organization is
responsible for and what the CSP is responsible for when securing the cloud
infrastructure.
Cryptography and cloud security
Earlier in this course, you were introduced to the concepts of the shared
responsibility model and identity and access management (IAM). Similar to on-
premise networks, cloud networks also need to be secured through a mixture of
security hardening practices and cryptography.
This reading will address common cloud security hardening practices, what to
consider when implementing cloud security measures, and the fundamentals of
cryptography. Since cloud infrastructure is becoming increasingly common, it’s
important to understand how cloud networks operate and how to secure them.
Cloud security hardening
There are various techniques and tools that can be used to secure cloud network
infrastructure and resources. Some common cloud security hardening techniques
include incorporating IAM, hypervisors, baselining, cryptography, and
cryptographic erasure.
Identity access management (IAM)
Identity access management (IAM) is a collection of processes and
technologies that helps organizations manage digital identities in their
environment. This service also authorizes how users can leverage different cloud
resources.
Hypervisors
A hypervisor abstracts the host’s hardware from the operating software
environment. There are two types of hypervisors. Type one hypervisors run on
the hardware of the host computer. An example of a type one hypervisor is
VMware®'s ESXi. Type two hypervisors operate on the software of the host
computer. An example of a type two hypervisor is VirtualBox. Cloud service
providers (CSPs) commonly use type one hypervisors. CSPs are responsible for
managing the hypervisor and other virtualization components. The CSP ensures
that cloud resources and cloud environments are available, and it provides
regular patches and updates. Vulnerabilities in hypervisors or misconfigurations
can lead to virtual machine escapes (VM escapes). A VM escape is an exploit in
which a malicious actor gains access to the primary hypervisor and, potentially,
the host computer and other VMs. As a CSP customer, you will rarely deal with
hypervisors directly.
Baselining
Baselining for cloud networks and operations covers how the cloud environment is
configured and set up. A baseline is a fixed reference point. This reference point
can be used to compare changes made to a cloud environment. Proper
configuration and setup can greatly improve the security and performance of a
cloud environment. Examples of establishing a baseline in a cloud environment
include: restricting access to the admin portal of the cloud environment, enabling
password management, enabling file encryption, and enabling threat detection
services for SQL databases.
Cryptography in the cloud
Cryptography can be applied to secure data that is processed and stored in a
cloud environment. Cryptography uses encryption and secure key management
systems to provide data integrity and confidentiality. Cryptographic encryption is
one of the key ways to secure sensitive data and information in the cloud.
Encryption is the process of scrambling information into ciphertext, which is not
readable to anyone without the encryption key. Encryption originated from
manually encoding messages and information, using an algorithm to convert
any given letter or number to a new value. Modern encryption relies on the
secrecy of a key, rather than the secrecy of an algorithm. Cryptography is an
important tool that helps secure cloud networks and data at rest to prevent
unauthorized access. You’ll learn more about cryptography in-depth in an
upcoming course.
Cryptographic erasure
Cryptographic erasure is a method of erasing the encryption key for the
encrypted data. When destroying data in the cloud, more traditional methods of
data destruction are not as effective. Crypto-shredding is a newer technique
where the cryptographic keys used for decrypting the data are destroyed. This
makes the data undecipherable and prevents anyone from decrypting the data.
When crypto-shredding, all copies of the key need to be destroyed so no one has
any opportunity to access the data in the future.
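As a simple illustration of both encryption and cryptographic erasure, the
sketch below encrypts data with Fernet, a symmetric scheme from the
third-party cryptography package (pip install cryptography), and then
"shreds" it by discarding the key. This is a conceptual demo under those
assumptions, not a complete data-destruction procedure:

from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

key = Fernet.generate_key()   # the secret everything depends on
cipher = Fernet(key)
token = cipher.encrypt(b"customer record: account 1234")

print(cipher.decrypt(token))  # with the key, plaintext is recoverable

# Cryptographic erasure: destroy every copy of the key.
del cipher, key

# Without the key, the ciphertext is undecipherable; a different key fails.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("Data is unrecoverable once the key is destroyed.")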
Key Management
Modern encryption relies on keeping the encryption keys secure. Below are the
measures you can take to further protect your data when using cloud
applications:
 Trusted platform module (TPM). TPM is a computer chip that can securely
store passwords, certificates, and encryption keys.
 Cloud hardware security module (CloudHSM). CloudHSM is a computing
device that provides secure storage for cryptographic keys and processes
cryptographic operations, such as encryption and decryption.
Organizations and customers do not have access to the cloud service provider
(CSP) directly, but they can request audits and security reports by contacting the
CSP. Customers typically do not have access to the specific encryption keys that
CSPs use to encrypt the customers’ data. However, almost all CSPs allow
customers to provide their own encryption keys, depending on the service the
customer is accessing. In turn, the customer is responsible for their encryption
keys and ensuring the keys remain confidential. The CSP is limited in how they
can help the customer if the customer’s keys are compromised or destroyed. One
key benefit of the shared responsibility model is that the customer is not entirely
responsible for maintenance of the cryptographic infrastructure. Organizations
can assess and monitor the risk involved with allowing the CSP to manage the
infrastructure by reviewing a CSPs audit and security controls. For federal
contractors, FEDRAMP provides a list of verified CSPs.
Key takeaways
Cloud security hardening is a critical component to consider when assessing the
security of various public cloud environments and improving the security within
your organization. Identity access management (IAM), correctly configuring a
baseline for the cloud environment, securing hypervisors, cryptography, and
cryptographic erasure are all methods to use to further secure cloud
infrastructure.
Linux & SQL
Requests to the operating system
Operating systems are a critical component of a computer. They make
connections between applications and hardware to allow users to perform tasks.
In this reading, you’ll explore this complex process further and consider it using a
new analogy and a new example.
Booting the computer
When you boot, or turn on, your computer, either a BIOS or UEFI microchip is
activated. The Basic Input/Output System (BIOS) is a microchip that contains
loading instructions for the computer and is prevalent in older systems. The
Unified Extensible Firmware Interface (UEFI) is a microchip that contains
loading instructions for the computer and replaces BIOS on more modern
systems.
The BIOS and UEFI chips both perform the same function for booting the
computer. BIOS was the standard chip until 2007, when UEFI chips increased in
use. Now, most new computers include a UEFI chip. UEFI provides enhanced
security features.
The BIOS or UEFI microchips contain a variety of loading instructions for the
computer to follow. For example, one of the loading instructions is to verify the
health of the computer’s hardware.
The last instruction from the BIOS or UEFI activates the bootloader. The
bootloader is a software program that boots the operating system. Once the
operating system has finished booting, your computer is ready for use.
Completing a task
As previously discussed, operating systems help us use computers more
efficiently. Once a computer has gone through the booting process, completing a
task on a computer is a four-part process.
User
The first part of the process is the user. The user initiates the process by having
something they want to accomplish on the computer. Right now, you’re a user!
You’ve initiated the process of accessing this reading.
Application
The application is the software program that users interact with to complete a
task. For example, if you want to calculate something, you would use the
calculator application. If you want to write a report, you would use a word
processing application. This is the second part of the process.
Operating system
The operating system receives the user’s request from the application. It’s the
operating system’s job to interpret the request and direct its flow. In order to
complete the task, the operating system sends it on to applicable components of
the hardware.
Hardware
The hardware is where all the processing is done to complete the tasks initiated
by the user. For example, when a user wants to calculate a number, the CPU
figures out the answer. As another example, when a user wants to save a file,
another component of the hardware, the hard drive, handles this task.
After the work is done by the hardware, it sends the output back through the
operating system to the application so that it can display the results to the user.
The OS at work behind the scenes
Consider once again how a computer is similar to a car. There are processes that
someone won’t directly observe when operating a car, but they do feel it move
forward when they press the gas pedal. It’s the same with a computer. Important
work happens inside a computer that you don’t experience directly. This work
involves the operating system.
You can explore this through another analogy. The process of using an operating
system is also similar to ordering at a restaurant. At a restaurant you place an
order and get your food, but you don’t see what’s happening in the kitchen when
the cooks prepare the food.
Ordering food is similar to using an application on a computer. When you order
your food, you make a specific request like “a small soup, very hot.” When you
use an application, you also make specific requests like “print three double-sided
copies of this document.”
You can compare the food you receive to what happens when the hardware
sends output. You receive the food that you ordered. You receive the document
that you wanted to print.
Finally, the kitchen is like the OS. You don’t know what happens in the kitchen,
but it’s critical in interpreting the request and ensuring you receive what you
ordered. Similarly, though the work of the OS is not directly transparent to you,
it’s critical in completing your tasks.
An example: Downloading a file from an internet browser
Previously, you explored how operating systems, applications, and hardware
work together by examining a task involving a calculation. You can expand this
understanding by exploring how the OS completes another task, downloading a
file from an internet browser:
 First, the user decides they want to download a file that they found online,
so they click on a download button near the file in the internet browser
application.
 Then, the internet browser communicates this action to the OS.
 The OS sends the request to download the file to the appropriate hardware
for processing.
 The hardware begins downloading the file, and the OS sends this
information to the internet browser application. The internet browser then
informs the user when the file has been downloaded.
Virtualization technology
You've explored a lot about operating systems. One more aspect to consider is
that operating systems can run on virtual machines. In this reading, you’ll learn
about virtual machines and the general concept of virtualization. You’ll explore
how virtual machines work and the benefits of using them.
What is a virtual machine?
A virtual machine (VM) is a virtual version of a physical computer. Virtual
machines are one example of virtualization. Virtualization is the process of using
software to create virtual representations of various physical machines. The term
“virtual” refers to machines that don’t exist physically, but operate like they do
because their software simulates physical hardware. Virtual systems don’t use
dedicated physical hardware. Instead, they use software-defined versions of the
physical hardware. This means that a single virtual machine has a virtual CPU,
virtual storage, and other virtual hardware. Virtual systems are just code.
You can run multiple virtual machines using the physical hardware of a single
computer. This involves dividing the resources of the host computer to be shared
across all physical and virtual components. For example, Random Access
Memory (RAM) is a hardware component used for short-term memory. If a
computer has 16GB of RAM, it can host three virtual machines so that the
physical computer and virtual machines each have 4GB of RAM. Also, each of
these virtual machines would have their own operating system and function
similarly to a typical computer.
Benefits of virtual machines
Security professionals commonly use virtualization and virtual machines.
Virtualization can increase security for many tasks and can also increase
efficiency.
Security
One benefit is that virtualization can provide an isolated environment, or a
sandbox, on the physical host machine. When a computer has multiple virtual
machines, these virtual machines are “guests” of the computer. Specifically, they
are isolated from the host computer and other guest virtual machines. This
provides a layer of security, because virtual machines can be kept separate from
the other systems. For example, if an individual virtual machine becomes
infected with malware, it can be dealt with more securely because it’s isolated
from the other machines. A security professional could also intentionally place
malware on a virtual machine to examine it in a more secure environment.
Note: Although using virtual machines is useful when investigating potentially
infected machines or running malware in a constrained environment, there are
still some risks. For example, a malicious program can escape virtualization and
access the host machine. This is why you should never completely trust
virtualized systems.
Efficiency
Using virtual machines can also be an efficient and convenient way to perform
security tasks. You can open multiple virtual machines at once and switch easily
between them. This allows you to streamline security tasks, such as testing and
exploring various applications.
You can compare the efficiency of a virtual machine to a city bus. A single city
bus has a lot of room and is an efficient way to transport many people
simultaneously. If city buses didn’t exist, then everyone on the bus would have to
drive their own cars. This uses more gas, cars, and other resources than riding
the city bus.
Similar to how many people can ride one bus, many virtual machines can be
hosted on the same physical machine. That way, separate physical machines
aren't needed to perform certain tasks.
Managing virtual machines
Virtual machines can be managed with software called a hypervisor. Hypervisors
help users manage multiple virtual machines and connect the virtual hardware to
the physical hardware. Hypervisors also help with allocating the shared
resources of the physical host machine to one or more virtual machines.
One hypervisor that is useful for you to be familiar with is the Kernel-based
Virtual Machine (KVM). KVM is an open-source hypervisor that is supported by
most major Linux distributions. It is built into the Linux kernel, which means it
can be used to create virtual machines on any machine running a Linux
operating system without the need for additional software.
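If you'd like to check whether a Linux machine can use KVM, the following command sketch is one common approach; it assumes a shell on the host, and the output varies by CPU vendor and system configuration:
grep -c -E 'vmx|svm' /proc/cpuinfo   # counts CPU virtualization flags; a result greater than 0 suggests hardware support
lsmod | grep kvm                     # checks whether the KVM kernel modules are loaded
The vmx flag appears on Intel processors and svm on AMD processors, which is why the search covers both.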
Other forms of virtualization
In addition to virtual machines, there are other forms of virtualization. Some of
these virtualization technologies do not use operating systems. For example,
multiple virtual servers can be created from a single physical server. Virtual
networks can also be created to more efficiently use the hardware of a physical
network.
Key takeaways
Virtual machines are virtual versions of physical computers and are one example
of virtualization. Virtualization is a key technology in the security industry, and
it’s important for security analysts to understand the basics. There are many
benefits to using virtual machines, such as isolation of malware and other
security risks. However, it’s important to remember that there’s still a risk of
malicious software escaping its virtualized environment.
Linux architecture explained
Understanding the Linux architecture is important for a security analyst. When
you understand how a system is organized, it makes it easier to understand how
it functions. In this reading, you’ll learn more about the individual components in
the Linux architecture. A request to complete a task starts with the user and then
flows through applications, the shell, the Filesystem Hierarchy Standard, the
kernel, and the hardware.
User
The user is the person interacting with a computer. They initiate and manage
computer tasks. Linux is a multi-user system, which means that multiple users
can use the same resources at the same time.
Applications
An application is a program that performs a specific task. There are many
different applications on your computer. Some applications typically come pre-
installed on your computer, such as calculators or calendars. Other applications
might have to be installed, such as some web browsers or email clients. In Linux,
you'll often use a package manager to install applications. A package manager
is a tool that helps users install, manage, and remove packages or applications.
A package is a piece of software that can be combined with other packages to
form an application.
Shell
The shell is the command-line interpreter. Everything entered into the shell is
text based. The shell allows users to give commands to the kernel and receive
responses from it. You can think of the shell as a translator between you and
your computer. The shell translates the commands you enter so that the
computer can perform the tasks you want.
Filesystem Hierarchy Standard (FHS)
The Filesystem Hierarchy Standard (FHS) is the component of the Linux OS
that organizes data. It specifies the location where data is stored in the operating
system.
A directory is a file that organizes where other files are stored. Directories are
sometimes called “folders,” and they can contain files or other directories. The
FHS defines how directories, directory contents, and other storage is organized
so the operating system knows where to find specific data.
Kernel
The kernel is the component of the Linux OS that manages processes and
memory. It communicates with the applications to route commands. The Linux
kernel is unique to the Linux OS and is critical for allocating resources in the
system. The kernel controls all major functions of the hardware, which helps
the system complete tasks quickly and efficiently.
Hardware
The hardware is the physical components of a computer. You might be familiar
with some hardware components, such as hard drives or CPUs. Hardware is
categorized as either peripheral or internal.
Peripheral devices
Peripheral devices are hardware components that are attached and controlled
by the computer system. They are not core components needed to run the
computer system. Peripheral devices can be added or removed freely. Examples
of peripheral devices include monitors, printers, the keyboard, and the mouse.
Internal hardware
Internal hardware consists of the components required to run the computer. Internal
hardware includes a main circuit board and all components attached to it. This
main circuit board is also called the motherboard. Internal hardware includes the
following:
 The Central Processing Unit (CPU) is a computer’s main processor,
which is used to perform general computing tasks on a computer. The CPU
executes the instructions provided by programs, which enables these
programs to run.
 Random Access Memory (RAM) is a hardware component used for
short-term memory. It’s where data is stored temporarily as you perform
tasks on your computer. For example, if you’re writing a report on your
computer, the data needed for this is stored in RAM. After you’ve finished
writing the report and closed down that program, this data is deleted from
RAM. Information in RAM cannot be accessed once the computer has been
turned off. The CPU takes the data from RAM to run programs.
 The hard drive is a hardware component used for long-term memory. It’s
where programs and files are stored for the computer to access later.
Information on the hard drive can be accessed even after a computer has
been turned off and on again. A computer can have multiple hard drives.
Key takeaways
It’s important for security analysts to understand the Linux architecture and how
these components are organized. The components of the Linux architecture are
the user, applications, shell, Filesystem Hierarchy Standard, kernel, and
hardware. Each of these components is important in how Linux functions.
More Linux distributions
Previously, you were introduced to the different distributions of Linux. This
included KALI LINUX ™. (KALI LINUX ™ is a trademark of OffSec.) In addition to
KALI LINUX ™, there are multiple other Linux distributions that security analysts
should be familiar with. In this reading, you’ll learn about additional Linux
distributions.
KALI LINUX ™
KALI LINUX ™ is an open-source distribution of Linux that is widely used in the
security industry. This is because KALI LINUX ™, which is Debian-based, comes
pre-installed with many useful tools for penetration testing and digital forensics. A
penetration test is a simulated attack that helps identify vulnerabilities in
systems, networks, websites, applications, and processes. Digital forensics is
the practice of collecting and analyzing data to determine what has happened
after an attack. These are key activities in the security industry.
However, KALI LINUX ™ is not the only Linux distribution that is used in
cybersecurity.
Ubuntu
Ubuntu is an open-source, user-friendly distribution that is widely used in
security and other industries. It has both a command-line interface (CLI) and a
graphical user interface (GUI). Ubuntu is also Debian-derived and includes
common applications by default. Users can also download many more
applications from a package manager, including security-focused tools. Because
of its wide use, Ubuntu has an especially large number of community resources
to support users.
Ubuntu is also widely used for cloud computing. As organizations migrate to
cloud servers, cybersecurity work may more regularly involve Ubuntu
derivatives.
Parrot
Parrot is an open-source distribution that is commonly used for security. Similar
to KALI LINUX ™, Parrot comes with pre-installed tools related to penetration
testing and digital forensics. Like both KALI LINUX ™ and Ubuntu, it is based on
Debian.
Parrot is also considered a user-friendly Linux distribution because, in
addition to its CLI, it has a GUI that many find easy to navigate.
Red Hat® Enterprise Linux®
Red Hat Enterprise Linux is a subscription-based distribution of Linux built for
enterprise use. Red Hat is not free, which is a major difference from the
previously mentioned distributions. Because it’s built and supported for
enterprise use, Red Hat also offers a dedicated support team for customers to
call about issues.
CentOS
CentOS is an open-source distribution that is closely related to Red Hat. It uses
source code published by Red Hat to provide a similar platform. However,
CentOS does not offer the same enterprise support that Red Hat provides and is
supported through the community.

Package managers for installing applications
Previously, you learned about Linux distributions and that different distributions
derive from different sources, such as Debian or Red Hat Enterprise Linux.
You were also introduced to package managers and learned that
Linux applications are commonly distributed through package managers. In this
reading, you’ll apply this knowledge to learn more about package managers.
Introduction to package managers
A package is a piece of software that can be combined with other packages to
form an application. Some packages may be large enough to form applications
on their own.
Packages contain the files necessary for an application to be installed. These files
include dependencies, which are supplemental files used to run an application.
Package managers can help resolve any issues with dependencies and perform
other management tasks. A package manager is a tool that helps users install,
manage, and remove packages or applications. Linux uses multiple package
managers.
Note: It’s important to use the most recent version of a package when possible.
The most recent version has the most up-to-date bug fixes and security patches.
These help keep your system more secure.
Types of package managers
Many commonly used Linux distributions are derived from the same parent
distribution. For example, KALI LINUX ™, Ubuntu, and Parrot all come from
Debian. CentOS comes from Red Hat.
This knowledge is useful when installing applications because certain package
managers work with certain distributions. For example, the Red Hat Package
Manager (RPM) can be used for Linux distributions derived from Red Hat, and
package managers such as dpkg can be used for Linux distributions derived from
Debian.
Different package managers typically use different file extensions. For example,
Red Hat Package Manager (RPM) has files which use the .rpm file extension,
such as Package-Version-Release_Architecture.rpm. Package managers for
Debian-derived Linux distributions, such as dpkg, have files which use the .deb
file extension, such as Package_Version-Release_Architecture.deb.
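To sketch how these extensions appear in practice, the commands below show how a locally downloaded package file could be installed directly with each package manager. The file names are the generic placeholders from above, not real packages:
sudo dpkg -i Package_Version-Release_Architecture.deb   # install a local .deb on a Debian-derived distribution
sudo rpm -i Package-Version-Release_Architecture.rpm    # install a local .rpm on a Red Hat-derived distribution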
Package management tools
In addition to package managers like RPM and dpkg, there are also package
management tools that allow you to easily work with packages through the shell.
Package management tools are sometimes utilized instead of package managers
because they allow users to more easily perform basic tasks, such as installing a
new package. Two notable tools are the Advanced Package Tool (APT) and
Yellowdog Updater Modified (YUM).
Advanced Package Tool (APT)
APT is a tool used with Debian-derived distributions. It is run from the command-
line interface to manage, search, and install packages.
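As a brief sketch of what working with APT can look like, the commands below update the package index and then search for, install, and remove a package. The package name nmap is used only as an example of a security-focused tool:
sudo apt update          # refresh the list of available packages
apt search nmap          # search the package index
sudo apt install nmap    # install the package and its dependencies
sudo apt remove nmap     # remove the package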
Yellowdog Updater Modified (YUM)
YUM is a tool used with Red Hat-derived distributions. It is run from the
command-line interface to manage, search, and install packages. YUM works
with .rpm files.
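Working with YUM follows a similar pattern. Again, nmap is only an illustrative package name:
yum search nmap          # search for a package
sudo yum install nmap    # install the package and its dependencies
sudo yum remove nmap     # remove the package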
Key takeaways
A package is a piece of software that can be combined with other packages to
form an application. Packages can be managed using a package manager. There
are multiple package managers and package management tools for different
Linux distributions. Package management tools allow users to easily work with
packages through the shell. Debian-derived Linux distributions use package
managers like dpkg as well as package management tools like Advanced
Package Tool (APT). Red Hat-derived distributions use the Red Hat Package
Manager (RPM) or tools like Yellowdog Updater Modified (YUM).
Different types of shells
Knowing how to work with Linux shells is an important skill for cybersecurity
professionals. Shells can be used for many common tasks. Previously, you were
introduced to shells and their functions. This reading will review shells and
introduce you to different types, including the one that you'll use in this course.
Communicate through a shell
As you explored previously, the shell is the command-line interpreter. You can
think of a shell as a translator between you and the computer system. Shells
allow you to give commands to the computer and receive responses from it.
When you enter a command into a shell, the shell executes many internal
processes to interpret your command, send it to the kernel, and return your
results.
Types of shells
The many different types of Linux shells include the following:
 Bourne-Again Shell (bash)
 C Shell (csh)
 Korn Shell (ksh)
 Enhanced C shell (tcsh)
 Z Shell (zsh)
All Linux shells use common Linux commands, but they can differ in other
features. For example, ksh and bash use the dollar sign ($) to indicate where
users type in their commands. Other shells, such as zsh, use the percent sign
(%) for this purpose.
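If you're not sure which shell is your default, these standard commands are one way to check; the /etc/shells file exists on most distributions:
echo $SHELL       # prints your default login shell, such as /bin/bash
cat /etc/shells   # lists the shells installed on the system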
Bash
Bash is the default shell in most Linux distributions. It’s considered a user-
friendly shell. You can use bash for basic Linux commands as well as larger
projects.
Bash is also the most popular shell in the cybersecurity profession. You’ll use
bash throughout this course as you learn and practice Linux commands.
Key takeaways
Shells are a fundamental part of the Linux operating system. Shells allow you to
give commands to the computer and receive responses from it. They can be
thought of as a translator between you and your computer system. There are
many different types of shells, but the bash shell is the most commonly used
shell in the cybersecurity profession. You’ll learn how to enter Linux commands
through the bash shell later in this course.

Navigate Linux and read file content
In this reading, you’ll review how to navigate the file system using Linux
commands in Bash. You’ll further explore the organization of the Linux Filesystem
Hierarchy Standard, review several common Linux commands for navigation and
reading file content, and learn a couple of new commands.
Filesystem Hierarchy Standard (FHS)
Previously, you learned that the Filesystem Hierarchy Standard (FHS) is the
component of Linux that organizes data. The FHS is important because it defines
how directories, directory contents, and other storage is organized in the
operating system.
This diagram illustrates the hierarchy of relationships under the FHS:
[Diagram: the root directory (/) at the top; standard FHS directories such as home, bin, and etc below it; and user subdirectories such as analyst and analyst2 under home.]
Under the FHS, a file’s location can be described by a file path. A file path is the
location of a file or directory. In the file path, the different levels of the hierarchy
are separated by a forward slash (/).
Root directory
The root directory is the highest-level directory in Linux, and it’s always
represented with a forward slash (/). All subdirectories branch off the root
directory. Subdirectories can continue branching out to as many levels as
necessary.
Standard FHS directories
Directly below the root directory, you’ll find standard FHS directories. In the
diagram, home, bin, and etc are standard FHS directories. Here are a few
examples of what standard directories contain:
 /home: Each user in the system gets their own home directory.
 /bin: This directory stands for “binary” and contains binary files and other
executables. Executables are files that contain a series of commands a
computer needs to follow to run programs and perform other functions.
 /etc: This directory stores the system’s configuration files.
 /tmp: This directory stores many temporary files. The /tmp directory is
commonly used by attackers because anyone in the system can modify
data in these files.
 /mnt: This directory stands for “mount” and stores media, such as USB
drives and hard drives.
Pro Tip: You can use the man hier command to learn more about the FHS and
its standard directories.
User-specific subdirectories
Under home are subdirectories for specific users. In the diagram, these users are
analyst and analyst2. Each user has their own personal subdirectories, such as
projects, logs, or reports.
Note: When the path leads to a subdirectory below the user’s home directory,
the user’s home directory can be represented as the tilde (~). For example,
/home/analyst/logs can also be represented as ~/logs.
You can navigate to specific subdirectories using their absolute or relative file
paths. The absolute file path is the full file path, which starts from the root. For
example, /home/analyst/projects is an absolute file path. The relative file
path is the file path that starts from a user's current directory.
Note: Relative file paths can use a dot (.) to represent the current directory, or
two dots (..) to represent the parent of the current directory. An example of a
relative file path could be ../projects.
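To make the difference concrete, here are equivalent ways to reach the projects directory using the cd command reviewed below, assuming the user analyst starts in /home/analyst/logs:
cd /home/analyst/projects   # absolute file path, starting from the root
cd ../projects              # relative file path, starting from the current directory
cd ~/projects               # the tilde expands to /home/analyst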
Key commands for navigating the file system
The following Linux commands can be used to navigate the file system: pwd, ls,
and cd.
pwd
The pwd command prints the working directory to the screen. In other words,
it returns the directory that you’re currently in.
The output gives you the absolute path to this directory. For example, if you’re in
your home directory and your username is analyst, entering pwd returns
/home/analyst.
Pro Tip: To learn what your username is, use the whoami command. The
whoami command returns the username of the current user. For example, if
your username is analyst, entering whoami returns analyst.
ls
The ls command displays the names of the files and directories in the current
working directory. For example, in the video, ls returned directories such as logs,
and a file called updates.txt.
Note: If you want to return the contents of a directory that’s not your current
working directory, you can add an argument after ls with the absolute or relative
file path to the desired directory. For example, if you’re in the /home/analyst
directory but want to list the contents of its projects subdirectory, you can enter
ls /home/analyst/projects or just ls projects.
cd
The cd command navigates between directories. When you need to change
directories, you should use this command.
To navigate to a subdirectory of the current directory, you can add an argument
after cd with the subdirectory name. For example, if you’re in the
/home/analyst directory and want to navigate to its projects subdirectory, you
can enter cd projects.
You can also navigate to any specific directory by entering the absolute file path.
For example, if you’re in /home/analyst/projects, entering cd
/home/analyst/logs changes your current directory to /home/analyst/logs.
Pro Tip: You can use the relative file path and enter cd .. to go up one level in
the file structure. For example, if the current directory is
/home/analyst/projects, entering cd .. would change your working directory to
/home/analyst.
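Putting these navigation commands together, a short session might look like the following, assuming the user analyst starts in their home directory:
pwd           # returns /home/analyst
ls            # lists contents, such as projects and logs
cd projects   # relative path into the projects subdirectory
pwd           # now returns /home/analyst/projects
cd ..         # moves up one level, back to /home/analyst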
Common commands for reading file content
The following Linux commands are useful for reading file content: cat, head,
tail, and less.
cat
The cat command displays the content of a file. For example, entering cat
updates.txt returns everything in the updates.txt file.
head
The head command displays just the beginning of a file, by default 10 lines. The
head command can be useful when you want to know the basic contents of a file
but don’t need the full contents. Entering head updates.txt returns only the
first 10 lines of the updates.txt file.
Pro Tip: If you want to change the number of lines returned by head, you can
specify the number of lines by including -n. For example, if you only want to
display the first five lines of the updates.txt file, enter head -n 5 updates.txt.
tail
The tail command does the opposite of head. This command can be used to
display just the end of a file, by default 10 lines. Entering tail updates.txt
returns only the last 10 lines of the updates.txt file.
Pro Tip: You can use tail to read the most recent information in a log file.
less
The less command returns the content of a file one page at a time. For example,
entering less updates.txt changes the terminal window to display the contents
of updates.txt one page at a time. This allows you to easily move forward and
backward through the content.
Once you’ve accessed your content with the less command, you can use several
keyboard controls to move through the file:
 Space bar: Move forward one page
 b: Move back one page
 Down arrow: Move forward one line
 Up arrow: Move back one line
 q: Quit and return to the previous terminal window
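As a combined sketch, these commands show different ways to read the same updates.txt file from the examples above:
cat updates.txt         # displays the entire file at once
head updates.txt        # displays the first 10 lines
head -n 5 updates.txt   # displays only the first five lines
tail -n 5 updates.txt   # displays only the last five lines
less updates.txt        # displays one page at a time; press q to quit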
Key takeaways
It’s important for security analysts to be able to navigate Linux and the file
system of the FHS. Some key commands for navigating the file system include
pwd, ls, and cd. Reading file content is also an important skill in the security
profession. This can be done with commands such as cat, head, tail, and less.
Filter content in Linux
Filtering for information
You previously explored how filtering for information is an important skill for
security analysts. Filtering is selecting data that match a certain condition. For
example, if you had a virus in your system that only affected the .txt files, you
could use filtering to find these files quickly. Filtering allows you to search based
on specific criteria, such as file extension or a string of text.
grep
The grep command searches a specified file and returns all lines in the file
containing a specified string or text. The grep command commonly takes two
arguments: a specific string to search for and a specific file to search through.
For example, entering grep OS updates.txt returns all lines containing OS in
the updates.txt file. In this example, OS is the specific string to search for, and
updates.txt is the specific file to search through.
Let’s look at another example: grep error time_logs.txt. Here, error is the
string to search for, and time_logs.txt is the file to search through. When you
run this command, grep scans the time_logs.txt file and prints only the lines
containing the word error.
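The following lines summarize this syntax. The -i option in the last line is a standard grep option not covered above; it's included only to illustrate how options can modify the search, in this case making it case-insensitive:
grep OS updates.txt           # lines in updates.txt containing "OS"
grep error time_logs.txt      # lines in time_logs.txt containing "error"
grep -i error time_logs.txt   # the same search, ignoring case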
Piping
Piping is performed using the pipe character (|). Piping sends the
standard output of one command as standard input to another command for
further processing. As a reminder, standard output is information returned by
the OS through the shell, and standard input is information received by the OS
via the command line.
The pipe character (|) is located in various places on a keyboard. On many
keyboards, it’s located on the same key as the backslash character (\). On some
keyboards, the | can look different and have a small space through the middle of
the line. If you can’t find the |, search online for its location on your particular
keyboard.
When used with grep, the pipe can help you find directories and files containing
a specific word in their names. For example, ls /home/analyst/reports | grep
users returns the file and directory names in the reports directory that contain
users. Before the pipe, ls indicates to list the names of the files and directories
in reports. Then, it sends this output to the command after the pipe. In this
case, grep users returns all of the file or directory names containing users from
the input it received.
Note: Piping is a general form of redirection in Linux and can be used for
multiple tasks other than filtering. You can think of piping as a general tool that
you can use whenever you want the output of one command to become the
input of another command.
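For example, piping can chain together commands you've already seen in this reading:
ls /home/analyst/reports | grep users   # filters a directory listing through grep
cat updates.txt | grep OS               # filters file contents through grep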
find
The find command searches for directories and files that meet specified criteria.
There’s a wide range of criteria that can be specified with find. For example, you
can search for files and directories that
 Contain a specific string in the name,
 Are a certain file size, or
 Were last modified within a certain time frame.
When using find, the first argument after find indicates where to start searching.
For example, entering find /home/analyst/projects searches for everything
starting at the projects directory.
After this first argument, you need to indicate your criteria for the search. If you
don’t include specific search criteria with your second argument, your search
will likely return a lot of directories and files.
Specifying criteria involves options. Options modify the behavior of a command
and commonly begin with a hyphen (-).
-name and -iname
One key criteria analysts might use with find is to find file or directory names
that contain a specific string. The specific string you’re searching for must be
entered in quotes after the -name or -iname options. The difference between
these two options is that -name is case-sensitive, and -iname is not.
For example, you might want to find all files in the projects directory that
contain the word “log” in the file name. To do this, you’d enter find
/home/analyst/projects -name "*log*". You could also enter find
/home/analyst/projects -iname "*log*".
In these examples, the output would be all files in the projects directory that
contain log surrounded by zero or more characters. The "*log*" portion of the
command is the search criteria that indicates to search for the string “log”. When
-name is the option, files with names that include Log or LOG, for example,
wouldn’t be returned because this option is case-sensitive. However, they would
be returned when -iname is the option.
Note: An asterisk (*) is used as a wildcard to represent zero or more unknown
characters.
-mtime
Security analysts might also use find to find files or directories last modified
within a certain time frame. The -mtime option can be used for this search. For
example, entering find /home/analyst/projects -mtime -3 returns all files and
directories in the projects directory that have been modified within the past
three days.
The -mtime option search is based on days, so entering -mtime +1 indicates all
files or directories last modified more than one day ago, and entering -mtime -1
indicates all files or directories last modified less than one day ago.
Note: The option -mmin can be used instead of -mtime if you want to base the
search on minutes rather than days.
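Combining the options above, these find commands illustrate each kind of criteria, all starting the search at the projects directory; the 60-minute value in the last line is an arbitrary illustration:
find /home/analyst/projects -name "*log*"    # names containing "log", case-sensitive
find /home/analyst/projects -iname "*log*"   # names containing "log" in any casing
find /home/analyst/projects -mtime -3        # modified within the past three days
find /home/analyst/projects -mmin -60        # modified within the past 60 minutes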
Key takeaways
Filtering for information using Linux commands is an important skill for security
analysts so that they can customize data to fit their needs. Three key Linux
commands for this are grep, piping (|), and find. These commands can be used
to navigate and filter for information in the file system.

Manage directories and files
Previously, you explored how to manage the file system using Linux commands.
The following commands were introduced: mkdir, rmdir, touch, rm, mv, and
cp. In this reading, you’ll review these commands, the nano text editor, and
learn another way to write to files.
Creating and modifying directories
mkdir
The mkdir command creates a new directory. Like all of the commands
presented in this reading, you can either provide the new directory as the
absolute file path, which starts from the root, or as a relative file path, which
starts from your current directory.
For example, if you want to create a new directory called network in your
/home/analyst/logs directory, you can enter mkdir
/home/analyst/logs/network to create this new directory. If you’re already in
the /home/analyst/logs directory, you can also create this new directory by
entering mkdir network.
Pro Tip: You can use the ls command to confirm the new directory was added.
rmdir
The rmdir command removes, or deletes, a directory. For example, entering
rmdir /home/analyst/logs/network would remove this empty directory from
the file system.
Note: The rmdir command cannot delete directories with files or subdirectories
inside. For example, entering rmdir /home/analyst returns an error message.
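A short sequence ties these directory commands together, using the network directory example from above:
mkdir /home/analyst/logs/network   # creates the new directory
ls /home/analyst/logs              # confirms that network now appears
rmdir /home/analyst/logs/network   # deletes the directory again; it must be empty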
Creating and modifying files
touch and rm
The touch command creates a new file. This file won’t have any content inside.
If your current directory is /home/analyst/reports, entering touch
permissions.txt creates a new file in the reports subdirectory called
permissions.txt.
The rm command removes, or deletes, a file. This command should be used
carefully because it’s not easy to recover files deleted with rm. To remove the
permissions file you just created, enter rm permissions.txt.
Pro Tip: You can verify that permissions.txt was successfully created or
removed by entering ls.
mv and cp
You can also use mv and cp when working with files. The mv command moves a
file or directory to a new location, and the cp command copies a file or directory
into a new location. The first argument after mv or cp is the file or directory you
want to move or copy, and the second argument is the location you want to
move or copy it to.
To move permissions.txt into the logs subdirectory, enter mv
permissions.txt /home/analyst/logs. Moving a file removes the file from its
original location. However, copying a file doesn’t remove it from its original
location. To copy permissions.txt into the logs subdirectory while also keeping
it in its original location, enter cp permissions.txt /home/analyst/logs.
Note: The mv command can also be used to rename files. To rename a file, pass
the new name in as the second argument instead of the new location. For
example, entering mv permissions.txt perm.txt renames the
permissions.txt file to perm.txt.
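Together, these file commands might be used as follows, assuming the current directory is /home/analyst/reports:
touch permissions.txt                   # creates an empty file
cp permissions.txt /home/analyst/logs   # copies it into the logs subdirectory
mv permissions.txt perm.txt             # renames the original copy
rm perm.txt                             # deletes the renamed file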
nano text editor
nano is a command-line file editor that is available by default in many Linux
distributions. Many beginners find it easy to use, and it’s widely used in the
security profession. You can perform multiple basic tasks in nano, such as
creating new files and modifying file contents.
To open an existing file in nano from the directory that contains it, enter nano
followed by the file name. For example, entering nano permissions.txt from
the /home/analyst/reports directory opens a new nano editing window with the
permissions.txt file open for editing. You can also provide the absolute file path
to the file if you’re not in the directory that contains it.
You can also create a new file in nano by entering nano followed by a new file
name. For example, entering nano authorized_users.txt from the
/home/analyst/reports directory creates the authorized_users.txt file within
that directory and opens it in a new nano editing window.
Since there isn't an auto-saving feature in nano, it’s important to save your work
before exiting. To save a file in nano, use the keyboard shortcut Ctrl + O. You’ll
be prompted to confirm the file name before saving. To exit out of nano, use the
keyboard shortcut Ctrl + X.
Note: Vim and Emacs are also popular command-line text editors.
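A typical nano session for the files in this reading might look like the following; the keyboard shortcuts are pressed inside the editor rather than typed at the command line:
nano permissions.txt   # opens permissions.txt for editing, or creates it if it doesn't exist
                       # inside nano: edit, then Ctrl + O and Enter to save, Ctrl + X to exit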
Standard output redirection
There’s an additional way you can write to files. Previously, you learned about
standard input and standard output. Standard input is information received by
the OS via the command line, and standard output is information returned by
the OS through the shell.
You’ve also learned about piping. Piping sends the standard output of one
command as standard input to another command for further processing. It uses
the pipe character (|).
In addition to the pipe (|), you can also use the right angle bracket (>) and
double right angle bracket (>>) operators to redirect standard output.
When used with echo, the > and >> operators can be used to send the output
of echo to a specified file rather than the screen. The difference between the two
is that > overwrites your existing file, and >> adds your content to the end of
the existing file instead of overwriting it. The > operator should be used
carefully, because it’s not easy to recover overwritten files.
When you’re inside the directory containing the permissions.txt file, entering
echo "last updated date" >> permissions.txt adds the string “last updated
date” to the file contents. Entering echo "time" > permissions.txt after this
command overwrites the entire file contents of permissions.txt with the string
“time”.
Note: Both the > and >> operators will create a new file if one doesn’t already
exist with your specified name.
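Running the examples above in sequence shows the difference between the two operators:
echo "last updated date" >> permissions.txt   # appends the string to the file
echo "time" > permissions.txt                 # overwrites the entire file contents
cat permissions.txt                           # now returns only: time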
Key takeaways
Knowing how to manage the file system in Linux is an important skill for security
analysts. Useful commands for this include: mkdir, rmdir, touch, rm, mv, and
cp. When security analysts need to write to files, they can use the nano text
editor, or the > and >> operators.
