Cybersecurity Course Note
Controls
Controls are used alongside frameworks to reduce the possibility and impact of a
security threat, risk, or vulnerability. Controls can be physical, technical, and
administrative and are typically used to prevent, detect, or correct security
issues.
Examples of physical controls:
Gates, fences, and locks
Security guards
Closed-circuit television (CCTV), surveillance cameras, and motion
detectors
Access cards or badges to enter office spaces
Examples of technical controls:
Firewalls
Multi-factor authentication (MFA)
Antivirus software
Examples of administrative controls:
Separation of duties
Authorization
Asset classification
Security principles
In the workplace, security principles are embedded in your daily tasks. Whether
you are analyzing logs, monitoring a security information and event
management (SIEM) dashboard, or using a vulnerability scanner, you will use
these principles in some way.
Previously, you were introduced to several OWASP security principles. These
included:
Minimize attack surface area: Attack surface refers to all the potential
vulnerabilities a threat actor could exploit.
Principle of least privilege: Users have the least amount of access
required to perform their everyday tasks.
Defense in depth: Organizations should have varying security controls
that mitigate risks and threats.
Separation of duties: Critical actions should rely on multiple people,
each of whom follow the principle of least privilege.
Keep security simple: Avoid unnecessarily complicated solutions.
Complexity makes security difficult.
Fix security issues correctly: When security incidents occur, identify
the root cause, contain the impact, identify vulnerabilities, and conduct
tests to ensure that remediation is successful.
Audit checklist
It’s necessary to create an audit checklist before conducting an audit. A checklist
is generally made up of the following areas of focus:
Identify the scope of the audit
The audit should:
o List assets that will be assessed (e.g., firewalls are configured
correctly, PII is secure, physical assets are locked, etc.)
o Note how the audit will help the organization achieve its desired
goals
o Indicate how often an audit should be performed
Splunk
Splunk offers different SIEM tool options: Splunk® Enterprise and Splunk® Cloud.
Both allow you to review an organization's data on dashboards. This helps
security professionals manage an organization's internal infrastructure by
collecting, searching, monitoring, and analyzing log data from multiple sources to
obtain full visibility into an organization’s everyday operations.
Review the following Splunk dashboards and their purposes:
Security posture dashboard
The security posture dashboard is designed for security operations centers
(SOCs). It displays the last 24 hours of an organization’s notable security-related
events and trends and allows security professionals to determine if security
infrastructure and policies are performing as designed. Security analysts can use
this dashboard to monitor and investigate potential threats in real time, such as
suspicious network activity originating from a specific IP address.
Executive summary dashboard
The executive summary dashboard analyzes and monitors the overall health of
the organization over time. This helps security teams improve security measures
that reduce risk. Security analysts might use this dashboard to provide high-level
insights to stakeholders, such as generating a summary of security incidents and
trends over a specific period of time.
Incident review dashboard
The incident review dashboard allows analysts to identify suspicious patterns
that can occur in the event of an incident. It assists by highlighting higher risk
items that need immediate review by an analyst. This dashboard can be very
helpful because it provides a visual timeline of the events leading up to an
incident.
Risk analysis dashboard
The risk analysis dashboard helps analysts identify risk for each risk object (e.g.,
a specific user, a computer, or an IP address). It shows changes in risk-related
activity or behavior, such as a user logging in outside of normal working hours or
unusually high network traffic from a specific computer. A security analyst might
use this dashboard to analyze the potential impact of vulnerabilities in critical
assets, which helps analysts prioritize their risk mitigation efforts.
Chronicle
Chronicle is a cloud-native SIEM tool from Google that retains, analyzes, and
searches log data to identify potential security threats, risks, and vulnerabilities.
Chronicle allows you to collect and analyze log data according to:
A specific asset
A domain name
A user
An IP address
Chronicle provides multiple dashboards that help analysts monitor an
organization’s logs, create filters and alerts, and track suspicious domain names.
Review the following Chronicle dashboards and their purposes:
Enterprise insights dashboard
The enterprise insights dashboard highlights recent alerts. It identifies suspicious
domain names in logs, known as indicators of compromise (IOCs). Each result is
labeled with a confidence score to indicate the likelihood of a threat. It also
provides a severity level that indicates the significance of each threat to the
organization. A security analyst might use this dashboard to monitor login or
data access attempts related to a critical asset—like an application or system—
from unusual locations or devices.
Data ingestion and health dashboard
The data ingestion and health dashboard shows the number of event logs, log
sources, and success rates of data being processed into Chronicle. A security
analyst might use this dashboard to ensure that log sources are correctly
configured and that logs are received without error. This helps ensure that log-related issues are addressed so that the security team has access to the log data
they need.
IOC matches dashboard
The IOC matches dashboard indicates the top threats, risks, and vulnerabilities to
the organization. Security professionals use this dashboard to observe domain
names, IP addresses, and device IOCs over time in order to identify trends. This
information is then used to direct the security team’s focus to the highest priority
threats. For example, security analysts can use this dashboard to search for
additional activity associated with an alert, such as a suspicious user login from
an unusual geographic location.
Main dashboard
The main dashboard displays a high-level summary of information related to the
organization’s data ingestion, alerting, and event activity over time. Security
professionals can use this dashboard to access a timeline of security events—
such as a spike in failed login attempts— to identify threat trends across log
sources, devices, IP addresses, and physical locations.
Rule detections dashboard
The rule detections dashboard provides statistics related to incidents with the
highest occurrences, severities, and detections over time. Security analysts can
use this dashboard to access a list of all the alerts triggered by a specific
detection rule, such as a rule designed to alert whenever a user opens a known
malicious attachment from an email. Analysts then use those statistics to help
manage recurring incidents and establish mitigation tactics to reduce an
organization's level of risk.
User sign in overview dashboard
The user sign in overview dashboard provides information about user access
behavior across the organization. Security analysts can use this dashboard to
access a list of all user sign-in events to identify unusual user activity, such as a
user signing in from multiple locations at the same time. This information is then
used to help mitigate threats, risks, and vulnerabilities to user accounts and the
organization’s applications.
Playbook overview
A playbook is a manual that provides details about any operational action.
Essentially, a playbook provides a predefined and up-to-date list of steps to
perform when responding to an incident.
Playbooks are accompanied by a strategy. The strategy outlines expectations of
team members who are assigned a task, and some playbooks also list the
individuals responsible. The outlined expectations are accompanied by a plan.
The plan dictates how the specific task outlined in the playbook must be
completed.
Playbooks should be treated as living documents, which means that they are
frequently updated by security team members to address industry changes and
new threats. Playbooks are generally managed as a collaborative effort, since
security team members have different levels of expertise.
Updates are often made if:
A failure is identified, such as an oversight in the outlined policies and
procedures, or in the playbook itself.
There is a change in industry standards, such as changes in laws or
regulatory compliance.
The cybersecurity landscape changes due to evolving threat actor tactics
and techniques.
Types of playbooks
Playbooks sometimes cover specific incidents and vulnerabilities. These might
include ransomware, vishing, business email compromise (BEC), and other
attacks previously discussed. Incident and vulnerability response playbooks are
very common, but they are not the only types of playbooks organizations
develop.
Each organization has a different set of playbook tools, methodologies, protocols,
and procedures that they adhere to, and different individuals are involved at
each step of the response process, depending on the country they are in. For
example, incident notification requirements from government-imposed laws and
regulations, along with compliance standards, affect the content in the
playbooks. These requirements are subject to change based on where the
incident originated and the type of data affected.
Incident and vulnerability response playbooks
Incident and vulnerability response playbooks are commonly used by entry-level
cybersecurity professionals. They are developed based on the goals outlined in
an organization’s business continuity plan. A business continuity plan is an
established path forward allowing a business to recover and continue to operate
as normal, despite a disruption like a security breach.
These two types of playbooks are similar in that they both contain predefined
and up-to-date lists of steps to perform when responding to an incident.
Following these steps is necessary to ensure that you, as a security professional,
are adhering to legal and organizational standards and protocols. These
playbooks also help minimize errors and ensure that important actions are
performed within a specific timeframe.
When an incident, threat, or vulnerability occurs or is identified, the level of risk
to the organization depends on the potential damage to its assets. A basic
formula for determining the level of risk is that risk equals the likelihood of a threat multiplied by its potential impact on assets. For this reason, a sense of urgency is essential. Following the steps outlined in playbooks is also important when any forensic task is being carried out: mishandling evidence can easily compromise forensic data, rendering it unusable.
Common steps included in incident and vulnerability playbooks include:
Preparation
Detection
Analysis
Containment
Eradication
Recovery from an incident
Additional steps include performing post-incident activities and coordinating efforts throughout the investigation and the incident and vulnerability response stages.
Networks
The TCP/IP model
The TCP/IP model is a framework used to visualize how data is organized and
transmitted across a network. This model helps network engineers and network
security analysts conceptualize processes on the network and communicate
where disruptions or security threats occur.
The TCP/IP model has four layers: the network access layer, internet layer,
transport layer, and application layer. When troubleshooting issues on the
network, security professionals can analyze which layers were impacted by an
attack based on what processes were involved in an incident.
There are some important security differences between IPv4 and IPv6. IPv6 offers
more efficient routing and eliminates private address collisions that can occur on
IPv4 when two devices on the same network are attempting to use the same
address.
Ports
When data packets are sent and received across a network, they are assigned a
port.
Within the operating system of a network device, a port is a software-based
location that organizes the sending and receiving of data between devices on a
network.
Ports divide network traffic into segments based on the service they will
perform between two devices.
The computers sending and receiving these data segments know how
to prioritize and process these segments based on their port number. Data
packets include instructions that tell the receiving device what to do with the
information. These instructions come in the form of a port number.
Port numbers allow computers to split the network traffic and prioritize the
operations they will perform with the data.
Some common port numbers are: port 25, which is used for email (SMTP); port 443, which is used for secure internet communication (HTTPS); and port 20, which is used for large file transfers (FTP data).
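To make this concrete, one common way to see which ports a Linux host is listening on is the ss utility, which ships with most modern Linux distributions. This is a minimal sketch; the exact output varies by system.

# List listening TCP sockets with numeric port numbers
ss -tln

A line of output containing 0.0.0.0:443, for example, would indicate a service accepting connections on port 443.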
Firewall
So far in this course, you learned about stateless firewalls, stateful firewalls, and
next-generation firewalls (NGFWs), and the security advantages of each of them.
Most firewalls are similar in their basic functions. Firewalls allow or block traffic
based on a set of rules. As data packets enter a network, the packet header is
inspected and allowed or denied based on its port number. NGFWs are also able
to inspect packet payloads. Each system should have its own firewall, regardless
of the network firewall.
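To illustrate rule-based filtering, the following is a minimal sketch using UFW (Uncomplicated Firewall), one common firewall front end on Linux. It assumes UFW is installed and that you have administrative privileges.

# Block all incoming traffic by default
sudo ufw default deny incoming
# Allow inbound HTTPS traffic on port 443
sudo ufw allow 443/tcp
# Turn the firewall on
sudo ufw enable

With these rules, a packet arriving on port 443 is allowed, while packets destined for other ports are denied.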
Intrusion Detection System
An intrusion detection system (IDS) is an application that monitors system
activity and alerts on possible intrusions. An IDS alerts administrators based on
the signature of malicious traffic.
The IDS is configured to detect known attacks. IDS systems often sniff data
packets as they move across the network and analyze them for the
characteristics of known attacks. Some IDS systems review not only for
signatures of known attacks, but also for anomalies that could be the sign of
malicious activity. When the IDS discovers an anomaly, it sends an alert to the
network administrator who can then investigate further.
The limitations of IDS systems are that they can only scan for known attacks or obvious anomalies; new and sophisticated attacks might not be caught. Another limitation is that the IDS doesn’t actually stop the incoming traffic if it detects something awry. It’s up to the network administrator to catch the malicious activity before it does anything damaging to the network.
When combined with a firewall, an IDS adds another layer of defense. The IDS is
placed behind the firewall, before traffic enters the LAN, which allows the IDS to analyze data streams after traffic disallowed by the firewall has been filtered out. This reduces noise in IDS alerts, also referred to as false positives.
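As a rough illustration of signature-style matching (not a real IDS; dedicated tools such as Snort or Suricata use their own rule languages), the following tcpdump filter watches for TCP connection attempts to port 23 (telnet), a pattern an IDS signature might flag. The interface name eth0 is an assumption for illustration.

# Print TCP SYN packets destined for port 23, a crude signature-style match
sudo tcpdump -i eth0 'tcp dst port 23 and tcp[tcpflags] & tcp-syn != 0'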
Intrusion Prevention System
An intrusion prevention system (IPS) is an application that monitors system
activity for intrusive activity and takes action to stop the activity. It offers even
more protection than an IDS because it actively stops anomalies when they are
detected, unlike the IDS that simply reports the anomaly to a network
administrator.
An IPS searches for signatures of known attacks and data anomalies. An IPS
reports the anomaly to security analysts and blocks a specific sender or drops
network packets that seem suspect.
The IPS (like an IDS) sits behind the firewall in the network architecture. This
offers a high level of security because risky data streams are disrupted before
they even reach sensitive parts of the network. However, one potential limitation
is that it is inline: If it breaks, the connection between the private network and
the internet breaks. Another limitation of IPS is the possibility of false positives,
which can result in legitimate traffic getting dropped.
Full packet capture devices
Full packet capture devices can be incredibly useful for network administrators
and security professionals. These devices allow you to record and analyze all of
the data that is transmitted over your network. They also aid in investigating
alerts created by an IDS.
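A minimal sketch of full packet capture using tcpdump, which is widely available on Linux; the interface name and file name here are assumptions for illustration.

# Record all traffic on interface eth0 to a capture file
sudo tcpdump -i eth0 -w network_capture.pcap
# Later, read the capture back for analysis
tcpdump -r network_capture.pcap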
Security Information and Event Management
A security information and event management system (SIEM) is an
application that collects and analyzes log data to monitor critical activities in an
organization. SIEM tools work in real time to report suspicious activity in a
centralized dashboard. SIEM tools additionally analyze network log data sourced
from IDSs, IPSs, firewalls, VPNs, proxies, and DNS logs. SIEM tools are a way to
aggregate security event data so that it all appears in one place for security
analysts to analyze. This is referred to as a single pane of glass.
One example is Google Cloud’s SIEM tool, Chronicle. Chronicle is a cloud-native tool designed to retain, analyze, and search data.
Splunk is another common SIEM tool. Splunk offers different SIEM tool options:
Splunk Enterprise and Splunk Cloud. Both options include detailed dashboards
which help security professionals to review and analyze an organization's data.
There are also other similar SIEM tools available, and it's important for security
professionals to research the different tools to determine which one is most
beneficial to the organization.
A SIEM tool doesn’t replace the expertise of security analysts or the network- and system-hardening activities covered in this course; instead, it’s used in combination with other security methods. Security analysts often work in a Security Operations Center (SOC) where they can monitor the activity across the network. They can then use their expertise and experience to determine how to respond to the information on the dashboard and decide when the events meet the criteria to be escalated for oversight.
The following summarizes these devices and tools, their functions, and their limitations:

Firewall
Function: Allows or blocks traffic based on a set of rules.
Limitations: A traditional firewall is only able to filter packets based on information provided in the header of the packets.

Intrusion Detection System (IDS)
Function: Detects and alerts admins about possible intrusions, attacks, and other malicious traffic.
Limitations: An IDS can only scan for known attacks or obvious anomalies; new and sophisticated attacks might not be caught. It doesn’t actually stop the incoming traffic.

Intrusion Prevention System (IPS)
Function: Monitors system activity for intrusive activity and takes action to stop it, such as blocking a specific sender or dropping suspect network packets.
Limitations: An IPS is inline: if it breaks, the connection between the private network and the internet breaks. False positives can result in legitimate traffic being dropped.

Security Information and Event Management (SIEM)
Function: Collects and analyzes log data from multiple network machines. It aggregates security events for monitoring in a central dashboard.
Limitations: A SIEM tool only reports on possible security issues. It does not take any actions to stop or prevent suspicious events.
Key takeaways
Each of these devices or tools costs money to purchase, install, and maintain. An
organization might need to hire additional personnel to monitor the security
tools, as in the case of a SIEM. Decision-makers are tasked with selecting the
appropriate level of security based on cost and risk to the organization. You will
learn more about choosing levels of security later in the course.
Secure the cloud
Earlier in this course, you were introduced to cloud computing. Cloud
computing is a model for allowing convenient and on-demand network access
to a shared pool of configurable computing resources. These resources can be
configured and released with minimal management effort or interaction with the
service provider.
Just like any other IT infrastructure, a cloud infrastructure needs to be secured.
This reading will address some main security considerations that are unique to
the cloud and introduce you to the shared responsibility model used for security
in the cloud. Many organizations that use cloud resources and infrastructure
express concerns about the privacy of their data and resources. This concern is
addressed through cryptography and other additional security measures, which
will be discussed later in this course.
Cloud security considerations
Many organizations choose to use cloud services because of the ease of
deployment, speed of deployment, cost savings, and scalability of these options.
Cloud computing presents unique security challenges that cybersecurity analysts
need to be aware of.
Identity access management
Identity access management (IAM) is a collection of processes and
technologies that helps organizations manage digital identities in their
environment. This service also authorizes how users can use different cloud
resources. A common problem that organizations face when using the cloud is
the loose configuration of cloud user roles. An improperly configured user role
increases risk by allowing unauthorized users to have access to critical cloud
operations.
Configuration
The number of available cloud services adds complexity to the network. Each
service must be carefully configured to meet security and compliance
requirements. This presents a particular challenge when organizations perform
an initial migration into the cloud. When this change occurs on their network,
they must ensure that every process moved into the cloud has been configured
correctly. If network administrators and architects are not meticulous in correctly
configuring the organization’s cloud services, they could leave the network open
to compromise. Misconfigured cloud services are a common source of cloud
security issues.
Attack surface
Cloud service providers (CSPs) offer numerous applications and services for
organizations at a low cost.
Every service or application on a network carries its own set of risks and
vulnerabilities and increases an organization’s overall attack surface. An
increased attack surface must be compensated for with increased security
measures.
Cloud networks that utilize many services can introduce lots of entry points into an organization’s network, and these entry points can be used to introduce malware onto the network or pose other security vulnerabilities. However, if the network is designed correctly, utilizing several services does not have to introduce more entry points into an organization’s network design. It is also important to note that CSPs often default to more secure options and have undergone more scrutiny than a traditional on-premises network.
Zero-day attacks
Zero-day attacks are an important security consideration for organizations using
cloud or traditional on-premise network solutions. A zero-day attack is an exploit that was previously unknown. CSPs are more likely to know about a zero-day attack occurring before a traditional IT organization does. CSPs have ways of
patching hypervisors and migrating workloads to other virtual machines. These
methods ensure the customers are not impacted by the attack. There are also
several tools available for patching at the operating system level that
organizations can use.
Visibility and tracking
Network administrators have access to every data packet crossing the network
with both on-premise and cloud networks. They can sniff and inspect data
packets to learn about network performance or to check for possible threats and
attacks.
This kind of visibility is also offered in the cloud through flow logs and tools, such
as packet mirroring. CSPs take responsibility for security in the cloud, but they do
not allow the organizations that use their infrastructure to monitor traffic on the
CSP’s servers. Many CSPs offer strong security measures to protect their
infrastructure. Still, this situation might be a concern for organizations that are
accustomed to having full access to their network and operations. CSPs pay for
third-party audits to verify how secure a cloud network is and identify potential
vulnerabilities. The audits can help organizations identify whether any
vulnerabilities originate from on-premise infrastructure and if there are any
compliance lapses from their CSP.
Things change fast in the cloud
CSPs are large organizations that work hard to stay up-to-date with technology
advancements. For organizations that are used to being in control of any
adjustments made to their network, this can be a potential challenge to keep up
with. Cloud service updates can affect security considerations for the
organizations using them. For example, connection configurations might need to
be changed based on the CSP’s updates.
Organizations that use CSPs usually have to update their IT processes. It is
possible for organizations to continue following established best practices for
changes, configurations, and other security considerations. However, an
organization might have to adopt a different approach in a way that aligns with
changes made by the CSP.
Cloud networking offers various options that might appear attractive to a small
company—options that they could never afford to build on their own premises.
However, it is important to consider that each service adds complexity to the
security profile of the organization, and they will need security personnel to
monitor all of the cloud services.
Shared responsibility model
A commonly accepted cloud security principle is the shared responsibility model.
The shared responsibility model states that the CSP must take responsibility
for security involving the cloud infrastructure, including physical data centers,
hypervisors, and host operating systems. The company using the cloud service is
responsible for the assets and processes that they store or operate in the cloud.
The shared responsibility model ensures that both the CSP and the users agree
about where their responsibility for security begins and ends. A problem occurs
when organizations assume that the CSP is taking care of security that they have
not taken responsibility for. One example of this is cloud applications and
configurations. The CSP takes responsibility for securing the cloud, but it is the
organization’s responsibility to ensure that services are configured properly
according to the security requirements of their organization.
Key takeaways
It is essential to know the security considerations that are unique to the cloud and to understand the shared responsibility model for cloud security.
Organizations are responsible for correctly configuring and maintaining best
security practices for their cloud services. The shared responsibility model
ensures that both the CSP and users agree about what the organization is
responsible for and what the CSP is responsible for when securing the cloud
infrastructure.
Cryptography and cloud security
Earlier in this course, you were introduced to the concepts of the shared
responsibility model and identity and access management (IAM). Similar to on-
premise networks, cloud networks also need to be secured through a mixture of
security hardening practices and cryptography.
This reading will address common cloud security hardening practices, what to
consider when implementing cloud security measures, and the fundamentals of
cryptography. Since cloud infrastructure is becoming increasingly common, it’s
important to understand how cloud networks operate and how to secure them.
Cloud security hardening
There are various techniques and tools that can be used to secure cloud network
infrastructure and resources. Some common cloud security hardening techniques
include incorporating IAM, hypervisors, baselining, cryptography, and
cryptographic erasure.
Identity access management (IAM)
Identity access management (IAM) is a collection of processes and
technologies that helps organizations manage digital identities in their
environment. This service also authorizes how users can leverage different cloud
resources.
Hypervisors
A hypervisor abstracts the host’s hardware from the operating software
environment. There are two types of hypervisors. Type one hypervisors run on
the hardware of the host computer. An example of a type one hypervisor is
VMware®'s ESXi. Type two hypervisors operate on the software of the host
computer. An example of a type two hypervisor is VirtualBox. Cloud service
providers (CSPs) commonly use type one hypervisors. CSPs are responsible for
managing the hypervisor and other virtualization components. The CSP ensures
that cloud resources and cloud environments are available, and it provides
regular patches and updates. Vulnerabilities in hypervisors or misconfigurations
can lead to virtual machine escapes (VM escapes). A VM escape is an exploit
where a malicious actor gains access to the primary hypervisor, and potentially to the host computer and other VMs. As a CSP customer, you will rarely deal with
hypervisors directly.
Baselining
Baselining for cloud networks and operations covers how the cloud environment is
configured and set up. A baseline is a fixed reference point. This reference point
can be used to compare changes made to a cloud environment. Proper
configuration and setup can greatly improve the security and performance of a
cloud environment. Examples of establishing a baseline in a cloud environment
include: restricting access to the admin portal of the cloud environment, enabling
password management, enabling file encryption, and enabling threat detection
services for SQL databases.
Cryptography in the cloud
Cryptography can be applied to secure data that is processed and stored in a
cloud environment. Cryptography uses encryption and secure key management
systems to provide data integrity and confidentiality. Cryptographic encryption is
one of the key ways to secure sensitive data and information in the cloud.
Encryption is the process of scrambling information into ciphertext, which is not
readable to anyone without the encryption key. Encryption primarily originated
from manually encoding messages and information using an algorithm to convert
any given letter or number to a new value. Modern encryption relies on the
secrecy of a key, rather than the secrecy of an algorithm. Cryptography is an
important tool that helps secure cloud networks and data at rest to prevent
unauthorized access. You’ll learn more about cryptography in-depth in an
upcoming course.
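To illustrate encryption and decryption with a key, here is a minimal sketch using the openssl command-line tool, assuming it is installed; the file names are hypothetical.

# Encrypt a file with AES-256; openssl prompts for a passphrase used to derive the key
openssl enc -aes-256-cbc -pbkdf2 -in customer_data.txt -out customer_data.txt.enc
# Decrypt it again; without the passphrase, the ciphertext is unreadable
openssl enc -d -aes-256-cbc -pbkdf2 -in customer_data.txt.enc -out customer_data.txt.dec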
Cryptographic erasure
Cryptographic erasure is a method of erasing the encryption key for the
encrypted data. When destroying data in the cloud, more traditional methods of
data destruction are not as effective. Crypto-shredding is a newer technique
where the cryptographic keys used for decrypting the data are destroyed. This
makes the data undecipherable and prevents anyone from decrypting the data.
When crypto-shredding, all copies of the key need to be destroyed so no one has
any opportunity to access the data in the future.
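At its simplest, crypto-shredding comes down to destroying every copy of the key. The sketch below assumes the key is stored in a local file, which is rarely the case with a managed cloud key service, so treat it purely as an illustration of the concept.

# Overwrite the key material and then delete the file; once every copy of
# the key is destroyed, the encrypted data can no longer be decrypted
shred -u encryption_key.bin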
Key Management
Modern encryption relies on keeping the encryption keys secure. Below are the
measures you can take to further protect your data when using cloud
applications:
Trusted platform module (TPM). TPM is a computer chip that can securely
store passwords, certificates, and encryption keys.
Cloud hardware security module (CloudHSM). CloudHSM is a computing
device that provides secure storage for cryptographic keys and processes
cryptographic operations, such as encryption and decryption.
Organizations and customers do not have access to the cloud service provider
(CSP) directly, but they can request audits and security reports by contacting the
CSP. Customers typically do not have access to the specific encryption keys that
CSPs use to encrypt the customers’ data. However, almost all CSPs allow
customers to provide their own encryption keys, depending on the service the
customer is accessing. In turn, the customer is responsible for their encryption
keys and ensuring the keys remain confidential. The CSP is limited in how they
can help the customer if the customer’s keys are compromised or destroyed. One
key benefit of the shared responsibility model is that the customer is not entirely
responsible for maintenance of the cryptographic infrastructure. Organizations
can assess and monitor the risk involved with allowing the CSP to manage the
infrastructure by reviewing a CSP's audits and security controls. For federal contractors, FedRAMP provides a list of verified CSPs.
Key takeaways
Cloud security hardening is a critical component to consider when assessing the
security of various public cloud environments and improving the security within
your organization. Identity access management (IAM), correctly configuring a
baseline for the cloud environment, securing hypervisors, cryptography, and
cryptographic erasure are all methods to use to further secure cloud
infrastructure.
Linux & SQL
Requests to the operating system
Operating systems are a critical component of a computer. They make
connections between applications and hardware to allow users to perform tasks.
In this reading, you’ll explore this complex process further and consider it using a
new analogy and a new example.
Booting the computer
When you boot, or turn on, your computer, either a BIOS or UEFI microchip is
activated. The Basic Input/Output System (BIOS) is a microchip that contains
loading instructions for the computer and is prevalent in older systems. The
Unified Extensible Firmware Interface (UEFI) is a microchip that contains
loading instructions for the computer and replaces BIOS on more modern
systems.
The BIOS and UEFI chips both perform the same function for booting the
computer. BIOS was the standard chip until 2007, when UEFI chips increased in
use. Now, most new computers include a UEFI chip. UEFI provides enhanced
security features.
The BIOS or UEFI microchips contain a variety of loading instructions for the
computer to follow. For example, one of the loading instructions is to verify the
health of the computer’s hardware.
The last instruction from the BIOS or UEFI activates the bootloader. The
bootloader is a software program that boots the operating system. Once the
operating system has finished booting, your computer is ready for use.
Completing a task
As previously discussed, operating systems help us use computers more
efficiently. Once a computer has gone through the booting process, completing a
task on a computer is a four-part process.
User
The first part of the process is the user. The user initiates the process by having
something they want to accomplish on the computer. Right now, you’re a user!
You’ve initiated the process of accessing this reading.
Application
The application is the software program that users interact with to complete a
task. For example, if you want to calculate something, you would use the
calculator application. If you want to write a report, you would use a word
processing application. This is the second part of the process.
Operating system
The operating system receives the user’s request from the application. It’s the
operating system’s job to interpret the request and direct its flow. In order to
complete the task, the operating system sends it on to applicable components of
the hardware.
Hardware
The hardware is where all the processing is done to complete the tasks initiated
by the user. For example, when a user wants to calculate a number, the CPU
figures out the answer. As another example, when a user wants to save a file,
another component of the hardware, the hard drive, handles this task.
After the work is done by the hardware, it sends the output back through the
operating system to the application so that it can display the results to the user.
The OS at work behind the scenes
Consider once again how a computer is similar to a car. There are processes that
someone won’t directly observe when operating a car, but they do feel it move
forward when they press the gas pedal. It’s the same with a computer. Important
work happens inside a computer that you don’t experience directly. This work
involves the operating system.
You can explore this through another analogy. The process of using an operating
system is also similar to ordering at a restaurant. At a restaurant you place an
order and get your food, but you don’t see what’s happening in the kitchen when
the cooks prepare the food.
Ordering food is similar to using an application on a computer. When you order
your food, you make a specific request like “a small soup, very hot.” When you
use an application, you also make specific requests like “print three double-sided
copies of this document.”
You can compare the food you receive to what happens when the hardware
sends output. You receive the food that you ordered. You receive the document
that you wanted to print.
Finally, the kitchen is like the OS. You don’t know what happens in the kitchen,
but it’s critical in interpreting the request and ensuring you receive what you
ordered. Similarly, though the work of the OS is not directly visible to you,
it’s critical in completing your tasks.
An example: Downloading a file from an internet browser
Previously, you explored how operating systems, applications, and hardware
work together by examining a task involving a calculation. You can expand this
understanding by exploring how the OS completes another task, downloading a
file from an internet browser:
First, the user decides they want to download a file that they found online,
so they click on a download button near the file in the internet browser
application.
Then, the internet browser communicates this action to the OS.
The OS sends the request to download the file to the appropriate hardware
for processing.
The hardware begins downloading the file, and the OS sends this
information to the internet browser application. The internet browser then
informs the user when the file has been downloaded.
Virtualization technology
You've explored a lot about operating systems. One more aspect to consider is
that operating systems can run on virtual machines. In this reading, you’ll learn
about virtual machines and the general concept of virtualization. You’ll explore
how virtual machines work and the benefits of using them.
What is a virtual machine?
A virtual machine (VM) is a virtual version of a physical computer. Virtual
machines are one example of virtualization. Virtualization is the process of using
software to create virtual representations of various physical machines. The term
“virtual” refers to machines that don’t exist physically, but operate like they do
because their software simulates physical hardware. Virtual systems don’t use
dedicated physical hardware. Instead, they use software-defined versions of the
physical hardware. This means that a single virtual machine has a virtual CPU,
virtual storage, and other virtual hardware. Virtual systems are just code.
You can run multiple virtual machines using the physical hardware of a single
computer. This involves dividing the resources of the host computer to be shared
across all physical and virtual components. For example, Random Access
Memory (RAM) is a hardware component used for short-term memory. If a
computer has 16GB of RAM, it can host three virtual machines so that the
physical computer and virtual machines each have 4GB of RAM. Also, each of
these virtual machines would have their own operating system and function
similarly to a typical computer.
Benefits of virtual machines
Security professionals commonly use virtualization and virtual machines.
Virtualization can increase security for many tasks and can also increase
efficiency.
Security
One benefit is that virtualization can provide an isolated environment, or a
sandbox, on the physical host machine. When a computer has multiple virtual
machines, these virtual machines are “guests” of the computer. Specifically, they
are isolated from the host computer and other guest virtual machines. This
provides a layer of security, because virtual machines can be kept separate from
the other systems. For example, if an individual virtual machine becomes
infected with malware, it can be dealt with more securely because it’s isolated
from the other machines. A security professional could also intentionally place
malware on a virtual machine to examine it in a more secure environment.
Note: Although using virtual machines is useful when investigating potentially
infected machines or running malware in a constrained environment, there are
still some risks. For example, a malicious program can escape virtualization and
access the host machine. This is why you should never completely trust
virtualized systems.
Efficiency
Using virtual machines can also be an efficient and convenient way to perform
security tasks. You can open multiple virtual machines at once and switch easily
between them. This allows you to streamline security tasks, such as testing and
exploring various applications.
You can compare the efficiency of a virtual machine to a city bus. A single city
bus has a lot of room and is an efficient way to transport many people
simultaneously. If city buses didn’t exist, then everyone on the bus would have to
drive their own cars. This uses more gas, cars, and other resources than riding
the city bus.
Similar to how many people can ride one bus, many virtual machines can be
hosted on the same physical machine. That way, separate physical machines
aren't needed to perform certain tasks.
Managing virtual machines
Virtual machines can be managed with a software called a hypervisor.
Hypervisors help users manage multiple virtual machines and connect the virtual
and physical hardware. Hypervisors also help with allocating the shared
resources of the physical host machine to one or more virtual machines.
One hypervisor that is useful for you to be familiar with is the Kernel-based
Virtual Machine (KVM). KVM is an open-source hypervisor that is supported by
most major Linux distributions. It is built into the Linux kernel, which means it
can be used to create virtual machines on any machine running a Linux
operating system without the need for additional software.
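As a quick sketch, here is how you might check whether a Linux machine can use KVM; the exact steps for creating virtual machines vary by distribution.

# A nonzero count means the CPU advertises hardware virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# Check whether the KVM kernel modules are loaded
lsmod | grep kvm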
Other forms of virtualization
In addition to virtual machines, there are other forms of virtualization. Some of
these virtualization technologies do not use operating systems. For example,
multiple virtual servers can be created from a single physical server. Virtual
networks can also be created to more efficiently use the hardware of a physical
network.
Key takeaways
Virtual machines are virtual versions of physical computers and are one example
of virtualization. Virtualization is a key technology in the security industry, and
it’s important for security analysts to understand the basics. There are many
benefits to using virtual machines, such as isolation of malware and other
security risks. However, it’s important to remember there’s still a risk of malicious software escaping its virtualized environment.
Linux architecture explained
Understanding the Linux architecture is important for a security analyst. When
you understand how a system is organized, it makes it easier to understand how
it functions. In this reading, you’ll learn more about the individual components in
the Linux architecture. A request to complete a task starts with the user and then
flows through applications, the shell, the Filesystem Hierarchy Standard, the
kernel, and the hardware.
User
The user is the person interacting with a computer. They initiate and manage
computer tasks. Linux is a multi-user system, which means that multiple users
can use the same resources at the same time.
Applications
An application is a program that performs a specific task. There are many
different applications on your computer. Some applications typically come pre-
installed on your computer, such as calculators or calendars. Other applications
might have to be installed, such as some web browsers or email clients. In Linux,
you'll often use a package manager to install applications. A package manager
is a tool that helps users install, manage, and remove packages or applications.
A package is a piece of software that can be combined with other packages to
form an application.
Shell
The shell is the command-line interpreter. Everything entered into the shell is
text based. The shell allows users to give commands to the kernel and receive
responses from it. You can think of the shell as a translator between you and
your computer. The shell translates the commands you enter so that the
computer can perform the tasks you want.
Filesystem Hierarchy Standard (FHS)
The Filesystem Hierarchy Standard (FHS) is the component of the Linux OS
that organizes data. It specifies the location where data is stored in the operating
system.
A directory is a file that organizes where other files are stored. Directories are
sometimes called “folders,” and they can contain files or other directories. The
FHS defines how directories, directory contents, and other storage is organized
so the operating system knows where to find specific data.
Kernel
The kernel is the component of the Linux OS that manages processes and
memory. It communicates with the applications to route commands. The Linux
kernel is unique to the Linux OS and is critical for allocating resources in the
system. The kernel controls all major functions of the hardware, which helps expedite tasks.
Hardware
The hardware is the physical components of a computer. You might be familiar
with some hardware components, such as hard drives or CPUs. Hardware is
categorized as either peripheral or internal.
Peripheral devices
Peripheral devices are hardware components that are attached and controlled
by the computer system. They are not core components needed to run the
computer system. Peripheral devices can be added or removed freely. Examples
of peripheral devices include monitors, printers, the keyboard, and the mouse.
Internal hardware
Internal hardware consists of the components required to run the computer. Internal
hardware includes a main circuit board and all components attached to it. This
main circuit board is also called the motherboard. Internal hardware includes the
following:
The Central Processing Unit (CPU) is a computer’s main processor,
which is used to perform general computing tasks on a computer. The CPU
executes the instructions provided by programs, which enables these
programs to run.
Random Access Memory (RAM) is a hardware component used for
short-term memory. It’s where data is stored temporarily as you perform
tasks on your computer. For example, if you’re writing a report on your
computer, the data needed for this is stored in RAM. After you’ve finished
writing the report and closed down that program, this data is deleted from
RAM. Information in RAM cannot be accessed once the computer has been
turned off. The CPU takes the data from RAM to run programs.
The hard drive is a hardware component used for long-term memory. It’s
where programs and files are stored for the computer to access later.
Information on the hard drive can be accessed even after a computer has
been turned off and on again. A computer can have multiple hard drives.
Key takeaways
It’s important for security analysts to understand the Linux architecture and how
these components are organized. The components of the Linux architecture are
the user, applications, shell, Filesystem Hierarchy Standard, kernel, and
hardware. Each of these components is important in how Linux functions.
More Linux distributions
Previously, you were introduced to the different distributions of Linux. This
included KALI LINUX ™. (KALI LINUX ™ is a trademark of OffSec.) In addition to
KALI LINUX ™, there are multiple other Linux distributions that security analysts
should be familiar with. In this reading, you’ll learn about additional Linux
distributions.
KALI LINUX ™
KALI LINUX ™ is an open-source distribution of Linux that is widely used in the
security industry. This is because KALI LINUX ™, which is Debian-based, is pre-
installed with many useful tools for penetration testing and digital forensics. A
penetration test is a simulated attack that helps identify vulnerabilities in
systems, networks, websites, applications, and processes. Digital forensics is
the practice of collecting and analyzing data to determine what has happened
after an attack. These are key activities in the security industry.
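As one example of the kind of tool that comes pre-installed, nmap is commonly used during penetration tests to discover open ports and running services. The target address below is hypothetical, and you should only scan systems you are explicitly authorized to test.

# Probe a (hypothetical) target and attempt to identify service versions
nmap -sV 10.0.0.5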
However, KALI LINUX ™ is not the only Linux distribution that is used in
cybersecurity.
Ubuntu
Ubuntu is an open-source, user-friendly distribution that is widely used in
security and other industries. It has both a command-line interface (CLI) and a
graphical user interface (GUI). Ubuntu is also Debian-derived and includes
common applications by default. Users can also download many more
applications from a package manager, including security-focused tools. Because
of its wide use, Ubuntu has an especially large number of community resources
to support users.
Ubuntu is also widely used for cloud computing. As organizations migrate to
cloud servers, cybersecurity work may more regularly involve Ubuntu
derivatives.
Parrot
Parrot is an open-source distribution that is commonly used for security. Similar
to KALI LINUX ™, Parrot comes with pre-installed tools related to penetration
testing and digital forensics. Like both KALI LINUX ™ and Ubuntu, it is based on
Debian.
Parrot is also considered to be a user-friendly Linux distribution. This is because it
has a GUI that many find easy to navigate. This is in addition to Parrot’s CLI.
Red Hat® Enterprise Linux®
Red Hat Enterprise Linux is a subscription-based distribution of Linux built for
enterprise use. Red Hat is not free, which is a major difference from the
previously mentioned distributions. Because it’s built and supported for
enterprise use, Red Hat also offers a dedicated support team for customers to
call about issues.
CentOS
CentOS is an open-source distribution that is closely related to Red Hat. It uses
source code published by Red Hat to provide a similar platform. However,
CentOS does not offer the same enterprise support that Red Hat provides and is
supported through the community.
Navigate the file system
Under the FHS, a file’s location can be described by a file path. A file path is the location of a file or directory. In the file path, the different levels of the hierarchy are separated by a forward slash (/).
Root directory
The root directory is the highest-level directory in Linux, and it’s always
represented with a forward slash (/). All subdirectories branch off the root
directory. Subdirectories can continue branching out to as many levels as
necessary.
Standard FHS directories
Directly below the root directory, you’ll find standard FHS directories, such as home, bin, and etc. Here are a few examples of what standard directories contain:
/home: Each user in the system gets their own home directory.
/bin: This directory stands for “binary” and contains binary files and other
executables. Executables are files that contain a series of commands a
computer needs to follow to run programs and perform other functions.
/etc: This directory stores the system’s configuration files.
/tmp: This directory stores many temporary files. The /tmp directory is
commonly used by attackers because anyone in the system can modify
data in these files.
/mnt: This directory stands for “mount” and stores media, such as USB
drives and hard drives.
Pro Tip: You can use the man hier command to learn more about the FHS and
its standard directories.
User-specific subdirectories
Under home are subdirectories for specific users, such as analyst and analyst2. Each user has their own personal subdirectories, such as projects, logs, or reports.
Note: When the path leads to a subdirectory below the user’s home directory,
the user’s home directory can be represented as the tilde (~). For example,
/home/analyst/logs can also be represented as ~/logs.
You can navigate to specific subdirectories using their absolute or relative file
paths. The absolute file path is the full file path, which starts from the root. For
example, /home/analyst/projects is an absolute file path. The relative file
path is the file path that starts from a user's current directory.
Note: Relative file paths can use a dot (.) to represent the current directory, or
two dots (..) to represent the parent of the current directory. An example of a
relative file path could be ../projects.
Key commands for navigating the file system
The following Linux commands can be used to navigate the file system: pwd, ls,
and cd.
pwd
The pwd command prints the working directory to the screen. In other words, it returns the directory that you’re currently in.
The output gives you the absolute path to this directory. For example, if you’re in
your home directory and your username is analyst, entering pwd returns
/home/analyst.
Pro Tip: To learn what your username is, use the whoami command. The
whoami command returns the username of the current user. For example, if
your username is analyst, entering whoami returns analyst.
ls
The ls command displays the names of the files and directories in the current working directory. For example, ls might return directories such as logs and a file called updates.txt.
Note: If you want to return the contents of a directory that’s not your current
working directory, you can add an argument after ls with the absolute or relative
file path to the desired directory. For example, if you’re in the /home/analyst
directory but want to list the contents of its projects subdirectory, you can enter
ls /home/analyst/projects or just ls projects.
cd
The cd command navigates between directories. When you need to change
directories, you should use this command.
To navigate to a subdirectory of the current directory, you can add an argument
after cd with the subdirectory name. For example, if you’re in the
/home/analyst directory and want to navigate to its projects subdirectory, you
can enter cd projects.
You can also navigate to any specific directory by entering the absolute file path.
For example, if you’re in /home/analyst/projects, entering cd
/home/analyst/logs changes your current directory to /home/analyst/logs.
Pro Tip: You can use the relative file path and enter cd .. to go up one level in
the file structure. For example, if the current directory is
/home/analyst/projects, entering cd .. would change your working directory to
/home/analyst.
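Putting pwd, ls, and cd together, a short terminal session might look like the following, assuming the example analyst user and directories described in this reading. The lines beginning with # show illustrative output.

pwd
# /home/analyst
ls
# logs  projects  reports  updates.txt
cd projects
pwd
# /home/analyst/projects
cd ..
pwd
# /home/analyst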
Common commands for reading file content
The following Linux commands are useful for reading file content: cat, head,
tail, and less.
cat
The cat command displays the content of a file. For example, entering cat
updates.txt returns everything in the updates.txt file.
head
The head command displays just the beginning of a file, by default 10 lines. The
head command can be useful when you want to know the basic contents of a file
but don’t need the full contents. Entering head updates.txt returns only the
first 10 lines of the updates.txt file.
Pro Tip: If you want to change the number of lines returned by head, you can
specify the number of lines by including -n. For example, if you only want to
display the first five lines of the updates.txt file, enter head -n 5 updates.txt.
tail
The tail command does the opposite of head. This command can be used to
display just the end of a file, by default 10 lines. Entering tail updates.txt
returns only the last 10 lines of the updates.txt file.
Pro Tip: You can use tail to read the most recent information in a log file.
less
The less command returns the content of a file one page at a time. For example,
entering less updates.txt changes the terminal window to display the contents
of updates.txt one page at a time. This allows you to easily move forward and
backward through the content.
Once you’ve accessed your content with the less command, you can use several
keyboard controls to move through the file:
Space bar: Move forward one page
b: Move back one page
Down arrow: Move forward one line
Up arrow: Move back one line
q: Quit and return to the previous terminal window
Key takeaways
It’s important for security analysts to be able to navigate Linux and the file
system of the FHS. Some key commands for navigating the file system include
pwd, ls, and cd. Reading file content is also an important skill in the security
profession. This can be done with commands such as cat, head, tail, and less.
Filter content in Linux
Filtering for information
You previously explored how filtering for information is an important skill for
security analysts. Filtering is selecting data that match a certain condition. For
example, if you had a virus in your system that only affected the .txt files, you
could use filtering to find these files quickly. Filtering allows you to search based
on specific criteria, such as file extension or a string of text.
grep
The grep command searches a specified file and returns all lines in the file
containing a specified string or text. The grep command commonly takes two
arguments: a specific string to search for and a specific file to search through.
For example, entering grep OS updates.txt returns all lines containing OS in
the updates.txt file. In this example, OS is the specific string to search for, and
updates.txt is the specific file to search through.
Let’s look at another example: grep error time_logs.txt. Here, error is the text pattern you are searching for in the time_logs.txt file. When you run this command, grep scans the time_logs.txt file and prints only the lines containing the word error.
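For illustration, here is what that might look like with hypothetical contents for time_logs.txt. The lines beginning with # show illustrative output.

cat time_logs.txt
# 09:00 system start
# 09:15 error: failed login
# 09:30 backup complete
grep error time_logs.txt
# 09:15 error: failed login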
Piping
The pipe command is accessed using the pipe character (|). Piping sends the
standard output of one command as standard input to another command for
further processing. As a reminder, standard output is information returned by
the OS through the shell, and standard input is information received by the OS
via the command line.
The pipe character (|) is located in various places on a keyboard. On many
keyboards, it’s located on the same key as the backslash character (\). On some
keyboards, the | can look different and have a small space through the middle of
the line. If you can’t find the |, search online for its location on your particular
keyboard.
When used with grep, the pipe can help you find directories and files containing
a specific word in their names. For example, ls /home/analyst/reports | grep
users returns the file and directory names in the reports directory that contain
users. Before the pipe, ls indicates to list the names of the files and directories
in reports. Then, it sends this output to the command after the pipe. In this
case, grep users returns all of the file or directory names containing users from
the input it received.
Note: Piping is a general form of redirection in Linux and can be used for
multiple tasks other than filtering. You can think of piping as a general tool that
you can use whenever you want the output of one command to become the
input of another command.
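Because piping is general-purpose, the output of one pipe can feed into another. A small sketch, building on the example above:

# List the reports directory, keep only names containing "users",
# then keep only the first line of that filtered output
ls /home/analyst/reports | grep users | head -n 1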
find
The find command searches for directories and files that meet specified criteria.
There’s a wide range of criteria that can be specified with find. For example, you
can search for files and directories that
Contain a specific string in the name,
Are a certain file size, or
Were last modified within a certain time frame.
When using find, the first argument after find indicates where to start searching.
For example, entering find /home/analyst/projects searches for everything
starting at the projects directory.
After this first argument, you need to indicate your criteria for the search. If you
don’t include a specific search criteria with your second argument, your search
will likely return a lot of directories and files.
Specifying criteria involves options. Options modify the behavior of a command
and commonly begin with a hyphen (-).
-name and -iname
One key criteria analysts might use with find is to find file or directory names
that contain a specific string. The specific string you’re searching for must be
entered in quotes after the -name or -iname options. The difference between
these two options is that -name is case-sensitive, and -iname is not.
For example, you might want to find all files in the projects directory that
contain the word “log” in the file name. To do this, you’d enter find
/home/analyst/projects -name "*log*". You could also enter find
/home/analyst/projects -iname "*log*".
In these examples, the output would be all files in the projects directory that
contain log surrounded by zero or more characters. The "*log*" portion of the
command is the search criteria that indicates to search for the string “log”. When
-name is the option, files with names that include Log or LOG, for example,
wouldn’t be returned because this option is case-sensitive. However, they would
be returned when -iname is the option.
Note: An asterisk (*) is used as a wildcard to represent zero or more unknown
characters.
-mtime
Security analysts might also use find to find files or directories last modified
within a certain time frame. The -mtime option can be used for this search. For
example, entering find /home/analyst/projects -mtime -3 returns all files and
directories in the projects directory that have been modified within the past
three days.
The -mtime option search is based on days, so entering -mtime +1 indicates all
files or directories last modified more than one day ago, and entering -mtime -1
indicates all files or directories last modified less than one day ago.
Note: The option -mmin can be used instead of -mtime if you want to base the
search on minutes rather than days.
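Combining these options, a couple of example searches might look like the following; the paths and time frames are illustrative.

# Files or directories under projects with "log" in the name,
# ignoring case, modified within the past day
find /home/analyst/projects -iname "*log*" -mtime -1
# The same name search, but based on the past 30 minutes
find /home/analyst/projects -iname "*log*" -mmin -30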
Key takeaways
Filtering for information using Linux commands is an important skill for security
analysts so that they can customize data to fit their needs. Three key Linux
commands for this are grep, piping (|), and find. These commands can be used
to navigate and filter for information in the file system.