Cybersecurity Notes
As you’ve learned, cybersecurity (also known as security) is the practice of ensuring confidentiality, integrity, and
availability of information by protecting networks, devices, people, and data from unauthorized access or criminal
exploitation. In this reading, you’ll be introduced to some key terms used in the cybersecurity profession. Then, you’ll
be provided with a resource that’s useful for staying informed about changes to cybersecurity terminology.
There are many terms and concepts that are important for security professionals to know. Being familiar with them can
help you better identify the threats that can harm organizations and people alike. A security analyst or cybersecurity
analyst focuses on monitoring networks for breaches. They also help develop strategies to secure an organization and
research information technology (IT) security trends to remain alert and informed about potential threats. Additionally,
an analyst works to prevent incidents. In order for analysts to effectively do these types of tasks, they need to develop
knowledge of the following key concepts.
Compliance is the process of adhering to internal standards and external regulations and enables organizations to
avoid fines and security breaches.
Security frameworks are guidelines used for building plans to help mitigate risks and threats to data and privacy.
Security controls are safeguards designed to reduce specific security risks. They are used with security frameworks
to establish a strong security posture.
Security posture is an organization’s ability to manage its defense of critical assets and data and react to change. A
strong security posture leads to lower risk for the organization.
A threat actor, or malicious attacker, is any person or group who presents a security risk. This risk can relate to
computers, applications, networks, and data.
An internal threat can be a current or former employee, an external vendor, or a trusted partner who poses a security
risk. At times, an internal threat is accidental. For example, an employee who accidentally clicks on a malicious email
link would be considered an accidental threat. Other times, the internal threat actor intentionally engages in risky
activities, such as unauthorized data access.
Network security is the practice of keeping an organization's network infrastructure secure from unauthorized
access. This includes data, services, systems, and devices that are stored in an organization’s network.
Cloud security is the process of ensuring that assets stored in the cloud are properly configured, or set up correctly,
and access to those assets is limited to authorized users. The cloud is a network made up of a collection of servers or
computers that store resources and data in remote physical locations known as data centers that can be accessed via
the internet. Cloud security is a growing subfield of cybersecurity that specifically focuses on the protection of data,
applications, and infrastructure in the cloud.
Programming is a process that can be used to create a specific set of instructions for a computer to execute tasks. These tasks can include searching data to identify potential threats, and organizing and analyzing information to identify patterns related to security issues.
Previously, you learned that cybersecurity analysts need to develop certain core skills to be successful at work.
Transferable skills are skills from other areas of study or practice that can apply to different careers. Technical
skills may apply to several professions, as well; however, they typically require knowledge of specific tools,
procedures, and policies. In this reading, you’ll explore both transferable skills and technical skills further.
Transferable skills
You have probably developed many transferable skills through life experiences; some of those skills will help you
thrive as a cybersecurity professional. These include:
• Communication: As a cybersecurity analyst, you will need to communicate and collaborate with others.
Understanding others’ questions or concerns and communicating information clearly to individuals with
technical and non-technical knowledge will help you mitigate security issues quickly.
• Problem-solving: One of your main tasks as a cybersecurity analyst will be to proactively identify and solve
problems. You can do this by recognizing attack patterns, then determining the most efficient solution to
minimize risk. Don't be afraid to take risks and try new things. Also, understand that it's rare to find a perfect solution to a problem; you'll likely need to compromise.
• Time management: Having a heightened sense of urgency and prioritizing tasks appropriately is essential in the cybersecurity field. Effective time management will help you minimize potential damage and risk to critical assets and data, so it is important to prioritize tasks and stay focused on the most urgent issue.
• Growth mindset: This is an evolving industry, so an important transferable skill is a willingness to learn.
Technology moves fast, and that's a great thing! It doesn't mean you will need to learn it all, but it does mean
that you’ll need to continue to learn throughout your career. Fortunately, you will be able to apply much of
what you learn in this program to your ongoing professional development.
• Diverse perspectives: The only way to go far is together. By respecting each other and encouraging diverse perspectives, you'll undoubtedly find multiple and better solutions to security problems.
Technical skills
There are many technical skills that will help you be successful in the cybersecurity field. You’ll learn and practice
these skills as you progress through the certificate program. Some of the tools and concepts you’ll need to use and be
able to understand include:
• Programming languages: By understanding how to use programming languages, cybersecurity analysts can automate tasks that would otherwise be very time consuming. Examples of tasks that programming can be used for include searching data to identify potential threats or organizing and analyzing information to identify patterns related to security issues (a brief sketch follows this list).
• Security information and event management (SIEM) tools: SIEM tools collect and analyze log data, or
records of events such as unusual login behavior, and support analysts’ ability to monitor critical activities in
an organization. This helps cybersecurity professionals identify and analyze potential security threats, risks,
and vulnerabilities more efficiently.
• Intrusion detection systems (IDSs): Cybersecurity analysts use IDSs to monitor system activity and alerts
for possible intrusions. It’s important to become familiar with IDSs because they’re a key tool that every
organization uses to protect assets and data. For example, you might use an IDS to monitor networks for
signs of malicious activity, like unauthorized access to a network.
• Threat landscape knowledge: Being aware of current trends related to threat actors, malware, or threat
methodologies is vital. This knowledge allows security teams to build stronger defenses against threat actor
tactics and techniques. By staying up to date on attack trends and patterns, security professionals are better
able to recognize when new types of threats emerge, such as a new ransomware variant.
• Incident response: Cybersecurity analysts need to be able to follow established policies and procedures to
respond to incidents appropriately. For example, a security analyst might receive an alert about a possible
malware attack, then follow the organization’s outlined procedures to start the incident response process. This
could involve conducting an investigation to identify the root issue and establishing ways to remediate it.
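To make the programming and SIEM entries above concrete, here is a minimal sketch, not part of the certificate materials, of the kind of automation an analyst might write: it scans fabricated login records and flags off-hours activity. The "user,host,HH:MM" log format and the business hours are assumptions for the example.
```python
# Minimal sketch: scan fabricated login records and flag off-hours activity.
# The "user,host,HH:MM" log format and business hours are assumptions.
from datetime import time

WORK_START, WORK_END = time(8, 0), time(18, 0)  # assumed business hours

logins = [
    "amal,webserver01,09:14",
    "jorge,webserver01,02:47",  # off-hours login
    "lee,mailserver02,17:59",
]

for record in logins:
    user, host, stamp = record.split(",")
    hour, minute = map(int, stamp.split(":"))
    if not WORK_START <= time(hour, minute) <= WORK_END:
        print(f"ALERT: {user} logged in to {host} at {stamp} (off-hours)")
```
A real SIEM applies far richer correlation rules, but the core pattern is the same: parse event records, test them against a policy, and raise alerts.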
Previously, you learned about past and present attacks that helped shape the cybersecurity industry. These included
the LoveLetter attack, also called the ILOVEYOU virus, and the Morris worm. One outcome was the establishment of
response teams, which are now commonly referred to as computer security incident response teams (CSIRTs). In this
reading, you will learn more about common methods of attack. Becoming familiar with different attack methods, and
the evolving tactics and techniques threat actors use, will help you better protect organizations and people.
Phishing
Phishing is the use of digital communications to trick people into revealing sensitive data or deploying malicious
software.
• Business Email Compromise (BEC): A threat actor sends an email message that seems to be from a
known source to make a seemingly legitimate request for information, in order to obtain a financial advantage.
• Spear phishing: A malicious email attack that targets a specific user or group of users. The email seems to
originate from a trusted source.
• Whaling: A form of spear phishing. Threat actors target company executives to gain access to sensitive data.
• Vishing: The exploitation of electronic voice communication to obtain sensitive information or to impersonate
a known source.
• Smishing: The use of text messages to trick users, in order to obtain sensitive information or to impersonate
a known source.
Malware
Malware is software designed to harm devices or networks. There are many types of malware. The primary purpose
of malware is to obtain money, or in some cases, an intelligence advantage that can be used against a person, an
organization, or a territory.
• Viruses: Malicious code written to interfere with computer operations and cause damage to data and software. A virus is transmitted by a threat actor via a malicious attachment or file download, and it needs to be initiated by a user. When someone opens the malicious attachment or download, the virus hides itself in other files in the now infected system. When the infected files are opened, the virus inserts its own code to damage and/or destroy data in the system.
• Worms: Malware that can duplicate and spread itself across systems on its own. In contrast to a virus, a
worm does not need to be downloaded by a user. Instead, it self-replicates and spreads from an already
infected computer to other devices on the same network.
• Ransomware: A malicious attack where threat actors encrypt an organization's data and demand payment to
restore access.
• Spyware: Malware that’s used to gather and sell information without consent. Spyware can be used to access
devices. This allows threat actors to collect personal data, such as private emails, texts, voice and image
recordings, and locations.
Social Engineering
Social engineering is a manipulation technique that exploits human error to gain private information, access, or
valuables. Human error is usually a result of trusting someone without question. It’s the mission of a threat actor,
acting as a social engineer, to create an environment of false trust and lies to exploit as many people as possible.
Some of the most common types of social engineering attacks today include:
• Social media phishing: A threat actor collects detailed information about their target from social media sites.
Then, they initiate an attack.
• Watering hole attack: A threat actor attacks a website frequently visited by a specific group of users.
• USB baiting: A threat actor strategically leaves a malware USB stick for an employee to find and install, to
unknowingly infect a network.
• Physical social engineering: A threat actor impersonates an employee, customer, or vendor to obtain
unauthorized access to a physical location.
Social engineering principles
Social engineering is incredibly effective. This is because people are generally trusting and conditioned to respect
authority. The number of social engineering attacks is increasing with every new social media application that allows
public access to people's data. Although sharing personal data, such as your location or photos, can be convenient, it's also a risk. Reasons why social engineering attacks are so effective include:
• Authority: Threat actors impersonate individuals with power. This is because people, in general, have been
conditioned to respect and follow authority figures.
• Intimidation: Threat actors use bullying tactics. This includes persuading and intimidating victims into doing
what they’re told.
• Consensus/Social proof: Because people sometimes do things that they believe many others are doing,
threat actors use others’ trust to pretend they are legitimate. For example, a threat actor might try to gain
access to private data by telling an employee that other people at the company have given them access to
that data in the past.
• Scarcity: A tactic used to imply that goods or services are in limited supply.
• Familiarity: Threat actors establish a fake emotional connection with users that can be exploited.
• Trust: Threat actors establish an emotional relationship with users that can be exploited over time. They use
this relationship to develop trust and gain personal information.
• Urgency: A threat actor persuades others to respond quickly and without questioning.
Previously, you learned about the eight Certified Information Systems Security Professional (CISSP) security
domains. The domains can help you better understand how a security analyst's job duties can be organized into
categories. Additionally, the domains can help establish an understanding of how to manage risk. In this reading, you
will learn about additional methods of attack. You’ll also be able to recognize the types of risk these attacks present.
Attack types
Password attack
A password attack is an attempt to access password-secured devices, systems, networks, or data. Some forms of
password attacks that you’ll learn about later in the certificate program are:
• Brute force
• Rainbow table
Password attacks fall under the communication and network security domain.
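As a rough illustration of what makes brute-force attacks against short passwords feasible, here is a back-of-the-envelope sketch; the attacker guess rate is an assumed figure for illustration, not a benchmark.
```python
# Back-of-the-envelope brute-force estimate; the guess rate is assumed.
GUESSES_PER_SECOND = 1_000_000_000  # hypothetical attacker speed

def worst_case_years(charset_size: int, length: int) -> float:
    """Worst-case time to enumerate every password of a given length."""
    seconds = charset_size ** length / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

for length in (6, 10, 14):
    years = worst_case_years(charset_size=62, length=length)  # a-z, A-Z, 0-9
    print(f"length {length}: {years:.2e} years worst case")
```
The keyspace grows exponentially with length, which is why password length matters more than any single clever substitution.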
Social engineering attack
Social engineering is a manipulation technique that exploits human error to gain private information, access, or
valuables. Some forms of social engineering attacks that you will continue to learn about throughout the program are:
• Phishing
• Smishing
• Vishing
• Spear phishing
• Whaling
• Social media phishing
• Business Email Compromise (BEC)
• Watering hole attack
• USB (Universal Serial Bus) baiting
• Physical social engineering
Social engineering attacks are related to the security and risk management domain.
Physical attack
A physical attack is a security incident that affects not only digital but also physical environments where the incident
is deployed. Some forms of physical attacks are:
• Malicious USB cable
• Malicious flash drive
• Card cloning and skimming
Adversarial artificial intelligence
Adversarial artificial intelligence is a technique that manipulates artificial intelligence and machine learning
technology to conduct attacks more efficiently. Adversarial artificial intelligence falls under both the communication
and network security and the identity and access management domains.
Supply-chain attack
A supply-chain attack targets systems, applications, hardware, and/or software to locate a vulnerability where
malware can be deployed. Because every item sold undergoes a process that involves third parties, a security breach can occur at any point in the supply chain. These attacks are costly because they can affect
multiple organizations and the individuals who work for them. Supply-chain attacks can fall under several domains,
including but not limited to the security and risk management, security architecture and engineering, and security
operations domains.
Cryptographic attack
A cryptographic attack affects secure forms of communication between a sender and intended recipient. Some
forms of cryptographic attacks are:
• Birthday
• Collision
• Downgrade
Cryptographic attacks fall under the communication and network security domain.
Previously, you were introduced to security frameworks and how they provide a structured approach to implementing
a security lifecycle. As a reminder, a security lifecycle is a constantly evolving set of policies and standards. In this
reading, you will learn more about how security frameworks, controls, and compliance regulations—or laws—are used
together to manage security and make sure everyone does their part to minimize risk.
The confidentiality, integrity, and availability (CIA) triad is a model that helps inform how organizations consider
risk when setting up systems and security policies.
Confidentiality, integrity, and availability are the three foundational principles used by cybersecurity professionals to establish appropriate controls that mitigate threats, risks, and vulnerabilities.
As you may recall, security controls are safeguards designed to reduce specific security risks. They are used alongside frameworks to ensure that security goals and processes are implemented correctly and that organizations meet regulatory compliance requirements.
Security frameworks are guidelines used for building plans to help mitigate risks and threats to data and privacy.
They have four core components:
• Identifying and documenting security goals
• Setting guidelines to achieve security goals
• Implementing strong security processes
• Monitoring and communicating results
As an analyst, you can explore various areas of cybersecurity that interest you. One way to explore those areas is by
understanding different security domains and how they’re used to organize the work of security professionals. In this
reading, you will learn more about CISSP's eight security domains and how they relate to the work you'll do as a
security analyst.
Domain one: Security and risk management
All organizations must develop their security posture. Security posture is an organization's ability to manage its defense of critical assets and data and react to change. Elements of the security and risk management domain that impact an organization's security posture include:
• Security goals and objectives
• Risk mitigation processes
• Compliance
• Business continuity plans
• Legal regulations
• Professional and organizational ethics
Information security, or InfoSec, is also related to this domain and refers to a set of processes established to secure
information. An organization may use playbooks and implement training as a part of their security and risk
management program, based on their needs and perceived risk. There are many InfoSec design processes, such as:
• Incident response
• Vulnerability management
• Application security
• Cloud security
• Infrastructure security
Domain two: Asset security
Asset security involves managing the cybersecurity processes of organizational assets, including the storage,
maintenance, retention, and destruction of physical and virtual data. Because the loss or theft of assets can expose an
organization and increase the level of risk, keeping track of assets and the data they hold is essential. Conducting a
security impact analysis, establishing a recovery plan, and managing data exposure will depend on the level of risk
associated with each asset. Security analysts may need to store, maintain, and retain data by creating backups to
ensure they are able to restore the environment if a security incident places the organization’s data at risk.
Domain three: Security architecture and engineering
This domain focuses on managing data security. Ensuring effective tools, systems, and processes are in place helps
protect an organization’s assets and data. Security architects and engineers create these processes.
One important aspect of this domain is the concept of shared responsibility. Shared responsibility means all
individuals involved take an active role in lowering risk during the design of a security system. Additional design
principles related to this domain, which are discussed later in the program, include:
• Threat modeling
• Least privilege
• Defense in depth
• Fail securely
• Separation of duties
• Keep it simple
• Zero trust
• Trust but verify
An example of managing data is the use of a security information and event management (SIEM) tool to monitor for
flags related to unusual login or user activity that could indicate a threat actor is attempting to access private data.
Domain four: Communication and network security
This domain focuses on managing and securing physical networks and wireless communications. This includes on-
site, remote, and cloud communications.
Organizations with remote, hybrid, and on-site work environments must ensure data remains secure, but managing
external connections to make certain that remote workers are securely accessing an organization’s networks is a
challenge. Designing network security controls—such as restricted network access—can help protect users and
ensure an organization’s network remains secure when employees travel or work outside of the main office.
Domain five: Identity and access management
The identity and access management (IAM) domain focuses on keeping data secure. It does this by ensuring user
identities are trusted and authenticated and that access to physical and logical assets is authorized. This helps
prevent unauthorized users, while allowing authorized users to perform their tasks.
Essentially, IAM uses what is referred to as the principle of least privilege, which is the concept of granting only the
minimal access and authorization required to complete a task. As an example, a cybersecurity analyst might be asked
to ensure that customer service representatives can only view the private data of a customer, such as their phone
number, while working to resolve the customer's issue; then remove access when the customer's issue is resolved.
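A toy sketch of the least privilege idea from the customer service example above; the role names and data fields are invented for illustration.
```python
# Toy least-privilege check; roles and data fields are hypothetical.
PERMISSIONS = {
    "customer_service": {"phone_number"},         # only what the task needs
    "billing": {"phone_number", "bank_account"},
}

def can_view(role: str, field: str) -> bool:
    """Allow access only if the role was explicitly granted the field."""
    return field in PERMISSIONS.get(role, set())

print(can_view("customer_service", "phone_number"))  # True
print(can_view("customer_service", "bank_account"))  # False: least privilege
```
Real IAM systems add authentication, time-limited grants, and audit logging on top of this basic allow/deny decision.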
Domain six: Security assessment and testing
The security assessment and testing domain focuses on identifying and mitigating risks, threats, and vulnerabilities.
Security assessments help organizations determine whether their internal systems are secure or at risk. Organizations
might employ penetration testers, often referred to as “pen testers,” to find vulnerabilities that could be exploited by a
threat actor.
This domain suggests that organizations conduct security control testing, as well as collect and analyze data.
Additionally, it emphasizes the importance of conducting security audits to monitor for and reduce the probability of a
data breach. To contribute to these types of tasks, cybersecurity professionals may be tasked with auditing user
permissions to validate that users have the correct levels of access to internal systems.
Domain seven: Security operations
The security operations domain focuses on the investigation of a potential data breach and the implementation of
preventative measures after a security incident has occurred. This includes using strategies, processes, and tools
such as:
• Training and awareness
• Reporting and documentation
• Intrusion detection and prevention
• SIEM tools
• Log management
• Incident management
• Playbooks
• Post-breach forensics
The cybersecurity professionals involved in this domain work as a team to manage, prevent, and investigate threats,
risks, and vulnerabilities. These individuals are trained to handle active attacks, such as large amounts of data being
accessed from an organization's internal network, outside of normal working hours. Once a threat is identified, the
team works diligently to keep private data and information safe from threat actors.
Domain eight: Software development security
The software development security domain is focused on using secure programming practices and guidelines to
create secure applications. Having secure applications helps deliver secure and reliable services, which helps protect
organizations and their users.
Security must be incorporated into each element of the software development life cycle, from design and development
to testing and release. To achieve security, the software development process must have security in mind at each
step. Security cannot be an afterthought.
Performing application security tests can help ensure vulnerabilities are identified and mitigated accordingly. Having a
system in place to test the programming conventions, software executables, and security measures embedded in the
software is necessary. Having quality assurance and pen tester professionals ensure the software has met security
and performance standards is also an essential part of the software development process. For example, an entry-level
analyst working for a pharmaceutical company might be asked to make sure encryption is properly configured for a
new medical device that will store private patient data.
Previously, you learned that security involves protecting organizations and people from threats, risks, and
vulnerabilities. Understanding the current threat landscapes gives organizations the ability to create policies and
processes designed to help prevent and mitigate these types of security issues. In this reading, you will further explore
how to manage risk and some common threat actor tactics and techniques, so you are better prepared to protect
organizations and the people they serve when you enter the cybersecurity field.
Risk management
A primary goal of organizations is to protect assets. An asset is an item perceived as having value to an organization.
Assets can be digital or physical. Examples of digital assets include the personal information of employees, clients, or
vendors, such as:
• Social Security Numbers (SSNs), or unique national identification numbers assigned to individuals
• Dates of birth
• Bank account numbers
• Mailing addresses
Examples of physical assets include:
• Payment kiosks
• Servers
• Desktop computers
Some common strategies used to manage risks include:
• Acceptance: Accepting a risk to avoid disrupting business continuity
• Avoidance: Creating a plan to avoid the risk altogether
• Transference: Transferring the risk to a third party to manage
• Mitigation: Lessening the impact of a known risk
A threat is any circumstance or event that can negatively impact assets. As an entry-level security analyst, your job is
to help defend the organization’s assets from inside and outside threats. Therefore, understanding common types of
threats is important to an analyst’s daily work. As a reminder, common threats include:
• Insider threats: Staff members or vendors abuse their authorized access to obtain data that may harm an
organization.
• Advanced persistent threats (APTs): A threat actor maintains unauthorized access to a system for an
extended period of time.
A risk is anything that can impact the confidentiality, integrity, or availability of an asset. A basic formula for determining the level of risk is that risk equals the likelihood of a threat occurring. One way to think about this is that the risk is being late to work, and the threats are traffic, an accident, a flat tire, and so on.
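As a worked example, the sketch below scores that commute scenario. It extends the basic formula with an impact score, a common refinement that is not part of this reading; the 1-to-5 scales and the scores themselves are arbitrary.
```python
# Toy risk score: likelihood x impact, both on assumed 1-5 scales.
threats = {
    "traffic": {"likelihood": 4, "impact": 2},
    "accident": {"likelihood": 2, "impact": 5},
    "flat tire": {"likelihood": 1, "impact": 4},
}

for name, t in threats.items():
    score = t["likelihood"] * t["impact"]
    print(f"{name}: risk score {score}")
```
Scoring like this is how organizations decide which risks to accept, avoid, transfer, or mitigate first.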
There are different factors that can affect the likelihood of a risk to an organization’s assets, including:
• External risk: Anything outside the organization that has the potential to harm organizational assets, such as
threat actors attempting to gain access to private information
• Internal risk: A current or former employee, vendor, or trusted partner who poses a security risk
• Legacy systems: Old systems that might not be accounted for or updated, but can still impact assets, such
as workstations or old mainframe systems. For example, an organization might have an old vending machine
that takes credit card payments or a workstation that is still connected to the legacy accounting system.
• Multiparty risk: Outsourcing work to third-party vendors can give them access to intellectual property, such
as trade secrets, software designs, and inventions.
• Software compliance/licensing: Software that is not updated or in compliance, or patches that are not
installed in a timely manner
A vulnerability is a weakness that can be exploited by a threat. Therefore, organizations need to regularly inspect for
vulnerabilities within their systems. Some vulnerabilities include:
• ProxyLogon: A pre-authenticated vulnerability that affects the Microsoft Exchange server. This means a
threat actor can complete a user authentication process to deploy malicious code from a remote location.
• ZeroLogon: A vulnerability in Microsoft’s Netlogon authentication protocol. An authentication protocol is a
way to verify a person's identity. Netlogon is a service that ensures a user’s identity before allowing access to
a website's location.
• Log4Shell: Allows attackers to run Java code on someone else’s computer or leak sensitive information. It
does this by enabling a remote attacker to take control of devices connected to the internet and run malicious
code.
• PetitPotam: Affects Windows New Technology Local Area Network (LAN) Manager (NTLM). It is a theft
technique that allows a LAN-based attacker to initiate an authentication request.
• Security logging and monitoring failures: Insufficient logging and monitoring capabilities that result in
attackers exploiting vulnerabilities without the organization knowing it
• Server-side request forgery: Allows attackers to manipulate a server-side application into accessing and
updating backend resources. It can also allow threat actors to steal data.
The CIA triad is a model that helps inform how organizations consider risk when setting up systems and security
policies. It is made up of three elements that cybersecurity analysts and organizations work toward upholding:
confidentiality, integrity, and availability. Maintaining an acceptable level of risk and ensuring systems and policies are
designed with these elements in mind helps establish a successful security posture, which refers to an organization’s
ability to manage its defense of critical assets and data and react to change.
Confidentiality is the idea that only authorized users can access specific assets or data. In an organization,
confidentiality can be enhanced through the implementation of design principles, such as the principle of least
privilege. The principle of least privilege limits users' access to only the information they need to complete work-
related tasks. Limiting access is one way of maintaining the confidentiality and security of private data.
Integrity is the idea that the data is verifiably correct, authentic, and reliable. Having protocols in place to verify the
authenticity of data is essential. One way to verify data integrity is through cryptography, which is used to transform
data so unauthorized parties cannot read or tamper with it (NIST, 2022). Another example of how an organization
might implement integrity is by enabling encryption, which is the process of converting data from a readable format to
an encoded format. Encryption can be used to prevent access and ensure data, such as messages on an
organization's internal chat platform, cannot be tampered with.
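To make the integrity idea concrete, here is a minimal sketch that detects tampering by comparing SHA-256 hashes, one standard integrity-checking technique; the messages are fabricated for the example.
```python
# Detect tampering by comparing SHA-256 hashes of sent vs. received data.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"Transfer $100 to account 42"
received = b"Transfer $900 to account 42"  # altered in transit

if digest(original) != digest(received):
    print("Integrity check failed: the data was modified")
```
Any change to the input, even a single character, produces a completely different hash, which is what makes this check effective.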
Availability is the idea that data is accessible to those who are authorized to use it. When a system adheres to both
availability and confidentiality principles, data can be used when needed. In the workplace, this could mean that the
organization allows remote employees to access its internal network to perform their jobs. It’s worth noting that access
to data on the internal network is still limited, depending on what type of access employees need to do their jobs. If, for
example, an employee works in the organization’s accounting department, they might need access to corporate
accounts but not data related to ongoing development projects.
Security principles
In the workplace, security principles are embedded in your daily tasks. Whether you are analyzing logs, monitoring a
security information and event management (SIEM) dashboard, or using a vulnerability scanner, you will use these
principles in some way.
Previously, you were introduced to several OWASP security principles. These included:
• Minimize attack surface area: Attack surface refers to all the potential vulnerabilities a threat actor could
exploit.
• Principle of least privilege: Users have the least amount of access required to perform their everyday tasks.
• Defense in depth: Organizations should have varying security controls that mitigate risks and threats.
• Separation of duties: Critical actions should rely on multiple people, each of whom follow the principle of
least privilege.
• Keep security simple: Avoid unnecessarily complicated solutions. Complexity makes security difficult.
• Fix security issues correctly: When security incidents occur, identify the root cause, contain the impact,
identify vulnerabilities, and conduct tests to ensure that remediation is successful.
Security audits
A security audit is a review of an organization's security controls, policies, and procedures against a set of
expectations. Audits are independent reviews that evaluate whether an organization is meeting internal and external
criteria. Internal criteria include outlined policies, procedures, and best practices. External criteria include regulatory
compliance, laws, and federal regulations.
Additionally, a security audit can be used to assess an organization's established security controls. As a reminder,
security controls are safeguards designed to reduce specific security risks.
Audits help ensure that security checks are made (i.e., daily monitoring of security information and event management
dashboards), to identify threats, risks, and vulnerabilities. This helps maintain an organization’s security posture. And,
if there are security issues, a remediation process must be in place.
The goal of an audit is to ensure an organization's information technology (IT) practices are meeting industry and
organizational standards. The objective is to identify and address areas of remediation and growth. Audits provide
direction and clarity by identifying what the current failures are and developing a plan to correct them.
Security audits must be performed to safeguard data and avoid penalties and fines from governmental agencies. The
frequency of audits is dependent on local laws and federal compliance regulations.
Factors that affect audits
• Industry type
• Organization size
• Ties to the applicable government regulations
• A business’s geographical location
• A business decision to adhere to a specific regulatory compliance
Along with compliance, it’s important to mention the role of frameworks and controls in security audits. Frameworks
such as the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) and the
international standard for information security (ISO 27000) series are designed to help organizations prepare for
regulatory compliance security audits. By adhering to these and other relevant frameworks, organizations can save
time when conducting external and internal audits. Additionally, frameworks, when used alongside controls, can
support organizations' ability to align with regulatory compliance requirements and standards.
Audit checklist
It’s necessary to create an audit checklist before conducting an audit. A checklist is generally made up of the following
areas of focus:
• A risk assessment is used to evaluate identified organizational risks related to budget, controls, internal
processes, and external standards (i.e., regulations).
• When conducting an internal audit, you will assess the security of the identified assets listed in the audit
scope.
• A mitigation plan is a strategy established to lower the level of risk and potential costs, penalties, or other
issues that can negatively affect the organization’s security posture.
• The end result of this process is providing a detailed report of findings, suggested improvements needed to
lower the organization's level of risk, and compliance regulations and standards the organization needs to
adhere to.
A playbook is a manual that provides details about any operational action. Playbooks are accompanied by a strategy. The strategy outlines expectations of team members who are assigned a
task, and some playbooks also list the individuals responsible. The outlined expectations are accompanied by a plan.
The plan dictates how the specific task outlined in the playbook must be completed.
Playbooks should be treated as living documents, which means that they are frequently updated by security team
members to address industry changes and new threats. Playbooks are generally managed as a collaborative effort,
since security team members have different levels of expertise.
Updates are often made if:
• A failure is identified, such as an oversight in the outlined policies and procedures, or in the playbook itself.
• There is a change in industry standards, such as changes in laws or regulatory compliance.
• The cybersecurity landscape changes due to evolving threat actor tactics and techniques.
Types of playbooks
Playbooks sometimes cover specific incidents and vulnerabilities. These might include ransomware, vishing, business
email compromise (BEC), and other attacks previously discussed. Incident and vulnerability response playbooks are
very common, but they are not the only types of playbooks organizations develop.
Each organization has a different set of playbook tools, methodologies, protocols, and procedures that they adhere to,
and different individuals are involved at each step of the response process, depending on the country they are in. For
example, incident notification requirements from government-imposed laws and regulations, along with compliance
standards, affect the content in the playbooks. These requirements are subject to change based on where the incident
originated and the type of data affected.
Incident and vulnerability response playbooks are commonly used by entry-level cybersecurity professionals. They are
developed based on the goals outlined in an organization’s business continuity plan. A business continuity plan is an
established path forward allowing a business to recover and continue to operate as normal, despite a disruption like a
security breach.
These two types of playbooks are similar in that they both contain predefined and up-to-date lists of steps to perform
when responding to an incident. Following these steps is necessary to ensure that you, as a security professional, are
adhering to legal and organizational standards and protocols. These playbooks also help minimize errors and ensure
that important actions are performed within a specific timeframe.
When an incident, threat, or vulnerability occurs or is identified, the level of risk to the organization depends on the
potential damage to its assets. A basic formula for determining the level of risk is that risk equals the likelihood of a threat occurring. For this reason, a sense of urgency is essential. Following the steps outlined in playbooks is also important if any forensic task is being carried out: mishandling evidence can easily compromise forensic data, rendering it unusable.
The six phases of an incident response playbook are:
• Preparation
• Detection
• Analysis
• Containment
• Eradication
• Recovery from an incident
Additional steps include performing post-incident activities and coordinating efforts throughout the investigation and the incident and vulnerability response stages.
Devices on a network
Network devices are the devices that maintain information and services for users of a network. These devices connect
over wired and wireless connections. After establishing a connection to the network, the devices send data packets.
The data packets provide information about the source and the destination of the data.
Devices and desktop computers
Most internet users are familiar with everyday devices, such as personal computers, laptops, mobile phones, and
tablets. Each device and desktop computer has a unique MAC address and IP address, which identify it on the
network, and a network interface that sends and receives data packets. These devices can connect to the network via
a hard wire or a wireless connection.
Firewalls
A firewall is a network security device that monitors traffic to or from your network. Firewalls can also restrict specific
incoming and outgoing network traffic. The organization configures the security rules. Firewalls often reside between
the secured and controlled internal network and the untrusted network resources outside the organization, such as the
internet.
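A highly simplified sketch of the traffic filtering a firewall performs. Real firewalls match on many more fields (source, destination, protocol, connection state); the rules and the default-deny policy here are assumptions for illustration.
```python
# Toy packet filter: first matching rule wins; unmatched traffic is denied
# (a default-deny policy, assumed here). Rules are invented for illustration.
RULES = [
    {"port": 443, "action": "allow"},  # HTTPS
    {"port": 23, "action": "deny"},    # Telnet
]

def filter_packet(dest_port: int) -> str:
    for rule in RULES:
        if rule["port"] == dest_port:
            return rule["action"]
    return "deny"  # default deny

print(filter_packet(443))   # allow
print(filter_packet(8080))  # deny: no rule matches
```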
Servers
Servers provide a service for other devices on the network. The devices that connect to a server are called clients.
This model is called the client-server model. In this model, clients send requests
to the server for information and services. The server performs the requests for the clients. Common examples include
DNS servers that perform domain name lookups for internet sites, file servers that store and retrieve files from a
database, and corporate mail servers that organize mail for a company.
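To illustrate the client-server model, here is a minimal sketch using Python's standard socket library; the port number and messages are arbitrary choices for the example.
```python
# Minimal client-server exchange on localhost (port 5050 is arbitrary).
import socket
import threading
import time

def server():
    with socket.socket() as s:  # TCP socket by default
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", 5050))
        s.listen()
        conn, _ = s.accept()
        with conn:
            request = conn.recv(1024)               # client sends a request
            conn.sendall(b"response to: " + request)  # server performs it

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket() as client:
    client.connect(("127.0.0.1", 5050))
    client.sendall(b"lookup example.com")
    print(client.recv(1024).decode())
```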
Hubs and switches
Hubs and switches both direct traffic on a local network. A hub is a device that provides a common point of connection
for all devices directly connected to it. Hubs additionally repeat all information out to all ports. From a security
perspective, this makes hubs vulnerable to eavesdropping. For this reason, hubs are not used as often on modern
networks; most organizations use switches instead.
A switch forwards packets between devices directly connected to it. It maintains a MAC address table that matches
MAC addresses of devices on the network to port numbers on the switch and forwards incoming data packets
according to the destination MAC address.
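A toy model of the MAC-learning behavior just described; the MAC addresses and port numbers are made up for illustration.
```python
# Toy switch: learn which port each source MAC arrives on, then forward
# by destination MAC (flooding when the destination is unknown).
mac_table = {}  # MAC address -> switch port

def handle_frame(src_mac: str, dst_mac: str, in_port: int) -> str:
    mac_table[src_mac] = in_port  # learn the sender's port
    if dst_mac in mac_table:
        return f"forward out port {mac_table[dst_mac]}"
    return "flood to all ports"  # destination not yet learned

print(handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=1))
print(handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2))
```
This learned table is why switches, unlike hubs, avoid repeating every frame to every port and are less exposed to eavesdropping.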
Routers
Routers sit between networks and direct traffic, based on the IP address of the destination network. The IP address of
the destination network is contained in the IP header. The router reads the header information and forwards the
packet to the next router on the path to the destination. This continues until the packet reaches the destination
network. Routers can also include a firewall feature that allows or blocks incoming traffic based on information in the
transmission. This stops malicious traffic from entering the private network and damaging the local area network.
Modems
Modems usually interface with an internet service provider (ISP). ISPs provide internet connectivity via telephone lines
or coaxial cables. Modems receive transmissions from the internet and translate them into digital signals that can be
understood by the devices on the network. Usually, modems connect to a router that takes the decoded transmissions
and sends them on to the local network.
Note: Enterprise networks used by large organizations to connect their users and devices often use other broadband
technologies to handle high-volume traffic, instead of using a modem.
Wireless access point
A wireless access point sends and receives digital signals over radio waves, creating a wireless network. Devices with wireless adapters connect to the access point using Wi-Fi. Wi-Fi refers to a set of standards that are used by network devices to communicate wirelessly. Wireless access points and the devices connected to them use Wi-Fi protocols to send data over radio waves to routers and switches, which direct it along the path to its final destination.
Network diagrams allow network administrators and security personnel to visualize the architecture and design of their organization's private network.
Network diagrams are topographical maps that show the devices on the network and how they connect. Network
diagrams use small representative graphics to portray each network device and dotted lines to show how each device
connects to the other. Security analysts use network diagrams to learn about network architecture and how to design
networks.
A cloud service provider (CSP) is a company that offers cloud computing services. These companies own large data
centers in locations around the globe that house millions of servers. Data centers provide technology services, such as storage and compute, at such a large scale that they can sell their services to other companies for a fee.
Companies can pay for the storage and services they need and consume them through the CSP’s application
programming interface (API) or web console.
CSPs typically provide three main categories of services:
• Software as a service (SaaS) refers to software suites operated by the CSP that a company can use remotely without hosting the software.
• Infrastructure as a service (IaaS) refers to the use of virtual computer components offered by the CSP.
These include virtual containers and storage that are configured remotely through the CSP’s API or web
console. Cloud-compute and storage services can be used to operate existing applications and other
technology workloads without significant modifications. Existing applications can be modified to take
advantage of the availability, performance, and security features that are unique to cloud provider services.
• Platform as a service (PaaS) refers to tools that application developers can use to design custom
applications for their company. Custom applications are designed and accessed in the cloud and used for a
company’s specific business needs.
Hybrid cloud environments
When organizations use a CSP’s services in addition to their on-premise computers, networks, and storage, it is
referred to as a hybrid cloud environment. When organizations use more than one CSP, it is called a multi-cloud
environment. The vast majority of organizations use hybrid cloud environments to reduce costs and maintain control
over network resources.
Software-defined networks
CSPs offer networking tools similar to the physical devices that you have learned about in this section of the course.
Next, you’ll review software-defined networking in the cloud. Software-defined networks (SDNs) are made up of
virtual network devices and services. Just like CSPs provide virtual computers, many SDNs also provide virtual
switches, routers, firewalls, and more. Most modern network hardware devices also support network virtualization and
software-defined networking. This means that physical switches and routers use software to perform packet routing. In
the case of cloud networking, the SDN tools are hosted on servers located at the CSP’s data center.
Three of the main reasons that cloud computing is so attractive to businesses are reliability, decreased cost, and
increased scalability.
Reliability
Reliability in cloud computing is based on how available cloud services and resources are, how secure connections
are, and how often the services are effectively running. Cloud computing allows employees and customers to access
the resources they need consistently and with minimal interruption.
Cost
Traditionally, companies have had to provide their own network infrastructure, at least for internet connections. This
meant there could be potentially significant upfront costs for companies. However, because CSPs have such large
data centers, they are able to offer virtual devices and services at a fraction of the cost required for companies to
install, patch, upgrade, and manage the components and software themselves.
Scalability
Another challenge that companies face with traditional computing is scalability. When organizations experience an
increase in their business needs, they might be forced to buy more equipment and software to keep up. But what if
business decreases shortly after? They might no longer have the business to justify the cost incurred by the upgraded
components. CSPs reduce this risk by making it easy to consume services in an elastic utility model as needed. This
means that companies only pay for what they need when they need it.
Changes can be made quickly through the CSP's APIs or web console, much more quickly than if network
technicians had to purchase their own hardware and set it up. For example, if a company needs to protect against a
threat to their network, web application firewalls (WAFs), intrusion detection/prevention systems (IDS/IPS), or L3/L4
firewalls can be configured quickly whenever necessary, leading to better network performance and security.
The TCP/IP model
The TCP/IP model is a framework used to visualize how data is organized and transmitted across a network. This
model helps network engineers and network security analysts conceptualize processes on the network and
communicate where disruptions or security threats occur.
The TCP/IP model has four layers: network access layer, internet layer, transport layer, and application layer. When
troubleshooting issues on the network, security professionals can analyze and deduce in which layer or layers an attack occurred based on what processes were involved in the incident.
Network access layer
The network access layer, sometimes called the data link layer, organizes sending and receiving data frames within
a single network. This layer corresponds to the physical hardware involved in network transmission. Hubs, modems,
cables, and wiring are all considered part of this layer. The address resolution protocol (ARP) is part of the network
access layer. ARP assists IP with directing data packets by mapping IP addresses to MAC addresses on the same physical network.
Internet layer
The internet layer, sometimes referred to as the network layer, is responsible for ensuring the delivery to the
destination host, which potentially resides on a different network. The internet layer determines which protocol is
responsible for delivering the data packets. Here are some of the common protocols that operate at the internet layer:
• Internet Protocol (IP). IP sends the data packets to the correct destination and relies on the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) to deliver them to the corresponding service. IP packets allow communication between two networks. They are routed from the sending network to the receiving network. TCP retransmits any data that is lost or corrupted; UDP does not.
• Internet Control Message Protocol (ICMP). The ICMP shares error information and status updates of data
packets. This is useful for detecting and troubleshooting network errors. The ICMP reports information about
packets that were dropped or that disappeared in transit, issues with network connectivity, and packets
redirected to other routers.
Transport layer
The transport layer is responsible for reliably delivering data between two systems or networks. TCP and UDP are
the two transport protocols that operate at this layer.
TCP ensures that data is reliably transmitted to the destination service. TCP contains the port number of the intended destination service, which resides in the TCP header of a TCP/IP packet.
UDP is used by applications that are not concerned with the reliability of the transmission. Data sent over UDP is not tracked as extensively as data sent using TCP. Because UDP does not establish network connections, it is used mostly for performance-sensitive applications that operate in real time, such as video streaming.
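The contrast is visible in Python's standard socket API: the sketch below sends a UDP datagram with no connection setup and no delivery tracking, which is why UDP suits real-time traffic. The localhost port is arbitrary.
```python
# UDP: no connection setup and no delivery tracking (fire and forget).
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 6060))  # arbitrary localhost port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"video frame 1", ("127.0.0.1", 6060))  # no connect() needed

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)
sender.close()
receiver.close()
```
Compare this with the TCP client-server sketch earlier, which required listen(), accept(), and connect() before any data moved.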
Application layer
The application layer in the TCP/IP model is similar to the application, presentation, and session layers of the OSI
model. The application layer is responsible for making network requests or responding to requests. This layer defines
which internet services and applications any user can access. Some common protocols used on this layer are:
• Hypertext Transfer Protocol (HTTP)
• Simple Mail Transfer Protocol (SMTP)
• Secure Shell (SSH)
• File Transfer Protocol (FTP)
• Domain Name System (DNS)
Application layer protocols rely on underlying layers to transfer the data across the network.
The OSI model visually organizes network protocols into different layers. Network professionals often use this model to
communicate with each other about potential sources of problems or security threats when they occur.
The TCP/IP model combines multiple layers of the OSI model. There are many similarities between the two models.
Both models define standards for networking and divide the network communication process into different layers. The
TCP/IP model is a simplified version of the OSI model.
The TCP/IP model is a framework used to visualize how data is organized and transmitted across a network. This
model helps network engineers and network security analysts design the data network and conceptualize processes
on the network and communicate where disruptions or security threats occur.
The TCP/IP model has four layers: network access layer, internet layer, transport layer, and application layer. When
analyzing network events, security professionals can determine what layer or layers an attack occurred in based on
what processes were involved in the incident.
The OSI model is a standardized concept that describes the seven layers computers use to communicate and send
data over the network. Network and security professionals often use this model to communicate with each other about
potential sources of problems or security threats when they occur.
Layer 7: Application layer
The application layer includes processes that directly involve the everyday user. This layer includes all of the
networking protocols that software applications use to connect a user to the internet. This characteristic is the
identifying feature of the application layer—user connection to the network via applications and requests.
An example of a type of communication that happens at the application layer is using a web browser. The internet
browser uses HTTP or HTTPS to send and receive information from the website server. The email application uses
simple mail transfer protocol (SMTP) to send and receive email information. Also, web browsers use the domain name
system (DNS) protocol to translate website domain names into IP addresses which identify the web server that hosts
the information for the website.
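For example, the DNS lookup step can be observed directly with Python's standard library; this requires internet access, and the printed address may vary.
```python
# Resolve a domain name to an IP address, like a browser's DNS step.
import socket

ip = socket.gethostbyname("example.com")  # asks the system's DNS resolver
print("example.com resolves to", ip)
```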
Layer 6: Presentation layer
Functions at the presentation layer involve data translation and encryption for the network. This layer adds to and
replaces data with formats that can be understood by applications (layer 7) on both sending and receiving systems.
Formats at the user end may be different from those of the receiving system. Processes at the presentation layer
require the use of a standardized format.
Some formatting functions that occur at layer 6 include encryption, compression, and confirmation that the character
code set can be interpreted on the receiving system. One example of encryption that takes place at this layer is SSL (since succeeded by TLS), which encrypts data between web servers and browsers as part of websites with HTTPS.
Layer 5: Session layer
A session describes when a connection is established between two devices. An open session allows the devices to
communicate with each other. Session layer protocols occur to keep the session open while data is being transferred
and terminate the session once the transmission is complete. The session layer is also responsible for activities such
as authentication, reconnection, and setting checkpoints during a data transfer. If a session is interrupted, checkpoints
ensure that the transmission picks up at the last session checkpoint when the connection resumes. Sessions include a
request and response between applications. Functions in the session layer respond to requests for service from
processes in the presentation layer (layer 6) and send requests for services to the transport layer (layer 4).
Layer 4: Transport layer
The transport layer is responsible for delivering data between devices. This layer also handles the speed of data
transfer, flow of the transfer, and breaking data down into smaller segments to make them easier to transport.
Segmentation is the process of dividing up a large data transmission into smaller pieces that can be processed by the
receiving system. These segments need to be reassembled at their destination so they can be processed at the
session layer (layer 5). The speed and rate of the transmission also has to match the connection speed of the
destination system. TCP and UDP are transport layer protocols.
Layer 3: Network layer
The network layer oversees receiving the frames from the data link layer (layer 2) and delivers them to the intended
destination. The intended destination can be found based on the address that resides in the frame of the data packets.
Data packets allow communication between two networks. These packets include IP addresses that tell routers where
to send them. They are routed from the sending network to the receiving network.
Layer 2: Data link layer
The data link layer organizes sending and receiving data packets within a single network. The data link layer is home
to switches on the local network and network interface cards on local devices. Protocols like network control protocol
(NCP), high-level data link control (HDLC), and synchronous data link control protocol (SDLC) are used at the data
link layer.
Layer 1: Physical layer
As the name suggests, the physical layer corresponds to the physical hardware involved in network transmission.
Hubs, modems, and the cables and wiring that connect them are all considered part of the physical layer. To travel
across an ethernet or coaxial cable, a data packet needs to be translated into a stream of 0s and 1s. The stream of 0s
and 1s are sent across the physical wiring and cables, received, and then passed on to higher levels of the OSI
model.
Operations at the network layer
Functions at the network layer organize the addressing and delivery of data packets across the network and internet
from the host device to the destination device. This includes directing the packets from one router to another router
across the internet, based on the internet protocol (IP) address of the destination network. The destination IP address
is contained within the header of each data packet. This address will be stored for future routing purposes in routing
tables along the packet’s path to its destination.
All data packets include an IP address; this is referred to as an IP packet or datagram. A router uses the IP address to
route packets from network to network based on information contained in the IP header of a data packet. Header
information communicates more than just the address of the destination. It also includes information such as the
source IP address, the size of the packet, and which protocol will be used for the data portion of the packet.
Next, you can review the format of an IP version 4 (IPv4) packet and the fields of its header; a short parsing sketch follows the field list below.
An IPv4 packet is made up of two sections, the header and the data:
• The size of the IP header ranges from 20 to 60 bytes. The header includes the IP routing information that
devices use to direct the packet. The format of an IP packet header is determined by the IPv4 protocol.
• The length of the data section of an IPv4 packet can vary greatly in size; however, the maximum possible size
of an IP packet is 65,535 bytes. The data section contains the message being transferred, like website
information or email text.
• Version: The first 4-bit field tells receiving devices what protocol the packet is using; a value of 4 indicates an
IPv4 packet.
• IP Header Length (HLEN): HLEN is the packet’s header length. This value indicates where the packet
header ends and the data segment begins.
• Type of Service (ToS): Routers prioritize packets for delivery to maintain quality of service on the network.
The ToS field provides the router with this information.
• Total Length: This field communicates the total length of the entire IP packet, including the header and data.
The maximum size of an IPv4 packet is 65,535 bytes.
• Identification: When an IPv4 packet is larger than a network link can carry in one piece, it is divided, or
fragmented, into smaller IP packets. The identification field provides a unique identifier for all the fragments of
the original IP packet so that they can be reassembled once they reach their destination.
• Flags: This field provides the routing device with more information about whether the original packet has been
fragmented and if there are more fragments in transit.
• Fragmentation Offset: The fragment offset field tells routing devices where in the original packet the
fragment belongs.
• Time to Live (TTL): TTL prevents data packets from being forwarded by routers indefinitely. It contains a
counter that is set by the source. The counter is decremented by one as it passes through each router along
its path. When the TTL counter reaches zero, the router currently holding the packet will discard the packet
and return an ICMP Time Exceeded error message to the sender.
• Protocol: The protocol field tells the receiving device which protocol will be used for the data portion of the
packet.
• Header Checksum: The header checksum field contains a checksum that can be used to detect corruption of
the IP header in transit. Corrupted packets are discarded.
• Source IP Address: The source IP address is the IPv4 address of the sending device.
• Destination IP Address: The destination IP address is the IPv4 address of the destination device.
• Options: The options field allows for security options to be applied to the packet if the HLEN value is greater
than five. The field communicates these options to the routing devices.
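To make these fields concrete, here is a minimal Python sketch that unpacks the fixed 20-byte portion of an IPv4 header using only the standard library; the sample header bytes are hypothetical values constructed for illustration.

import struct

# Hypothetical 20-byte IPv4 header (no options) built for illustration.
sample_header = bytes([
    0x45, 0x00, 0x00, 0x54,  # version/HLEN, ToS, total length (84)
    0xab, 0xcd, 0x40, 0x00,  # identification, flags/fragment offset
    0x40, 0x01, 0x00, 0x00,  # TTL (64), protocol (1 = ICMP), header checksum
    0xc6, 0x33, 0x64, 0x01,  # source IP: 198.51.100.1
    0xc6, 0x33, 0x64, 0x02,  # destination IP: 198.51.100.2
])

# "!BBHHHBBH4s4s" = network byte order: version/HLEN, ToS, total length,
# identification, flags/offset, TTL, protocol, checksum, source, destination.
fields = struct.unpack("!BBHHHBBH4s4s", sample_header)
version = fields[0] >> 4          # high 4 bits of the first byte
hlen_words = fields[0] & 0x0F     # header length in 32-bit words
ttl, protocol = fields[5], fields[6]
src = ".".join(str(b) for b in fields[8])
dst = ".".join(str(b) for b in fields[9])

print(f"IPv{version}, header {hlen_words * 4} bytes, TTL {ttl}, "
      f"protocol {protocol}, {src} -> {dst}")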
Difference between IPv4 and IPv6
In an earlier part of this course, you learned about the history of IP addressing. As the internet grew, it became clear
that all of the IPv4 addresses would eventually be depleted; this is called IPv4 address exhaustion. At the time, no one
had anticipated how many computing devices would need an IP address in the future. IPv6 was developed to mitigate
IPv4 address exhaustion and other related concerns.
One of the key differences between IPv4 and IPv6 is the length of the addresses. IPv4 addresses are numeric, made
of 4 bytes, and allow for up to 4.3 billion possible addresses. An IPv4 address is written as four numbers separated by
periods, with each number ranging from 0 to 255. An example of an IPv4 address would be: 198.51.100.0. IPv6
addresses are hexadecimal, made up of 16 bytes, and allow for up to 340 undecillion addresses (340 followed by 36
zeros). An example of an IPv6 address would be: 2002:0db8:0000:0000:0000:ff21:0023:1234.
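You can explore both formats with Python's standard ipaddress module; a minimal sketch using the example addresses above:

import ipaddress

v4 = ipaddress.ip_address("198.51.100.0")
v6 = ipaddress.ip_address("2002:0db8:0000:0000:0000:ff21:0023:1234")

print(v4.version, v4.packed.hex())  # 4 c6336400 -> the same 4 bytes in hexadecimal
print(v6.version, v6.compressed)    # 6 2002:db8::ff21:23:1234 (shortened form)
print(f"IPv4 space: {2 ** 32:,} addresses")     # 4,294,967,296 (about 4.3 billion)
print(f"IPv6 space: {2 ** 128:.2e} addresses")  # about 3.40e+38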
There are also some differences in the layout of an IPv6 packet header. The IPv6 header format is much simpler than
IPv4's. For example, the IPv4 header includes the HLEN, Identification, and Flags fields, whereas the IPv6 header
does not. Instead, the IPv6 header introduces fields not included in IPv4 headers, such as the Flow Label and Traffic
Class.
There are some important security differences between IPv4 and IPv6. IPv6 offers more efficient routing and
eliminates private address collisions that can occur on IPv4 when two devices on the same network are attempting to
use the same address.
Network protocols can be divided into three main categories: communication protocols, management protocols, and
security protocols. There are dozens of different network protocols, but you don’t need to memorize all of them for an
entry-level security analyst role. However, it’s important for you to know the ones listed in this reading.
Communication protocols
Communication protocols govern the exchange of information in network transmission. They dictate how the data is
transmitted between devices and the timing of the communication. They also include methods to recover data lost in
transit. Here are a few of them.
• Transmission Control Protocol (TCP) is an internet communication protocol that allows two devices to form
a connection and stream data. TCP uses a three-way handshake process. First, the device sends a
synchronize (SYN) request to a server. Then the server responds with a SYN/ACK packet to acknowledge
receipt of the device's request. Once the server receives the final ACK packet from the device, a TCP
connection is established. In the TCP/IP model, TCP occurs at the transport layer.
• User Datagram Protocol (UDP) is a connectionless protocol that does not establish a connection between
devices before a transmission. This makes it less reliable than TCP, but it also means it works well for
transmissions that need to get to their destination quickly. For example, one use of UDP is for internet gaming
transmissions. In the TCP/IP model, UDP occurs at the transport layer. (A socket-level sketch of TCP and
UDP follows this list.)
• Hypertext Transfer Protocol (HTTP) is an application layer protocol that provides a method of
communication between clients and website servers. HTTP uses port 80. HTTP is considered insecure, so it
is being replaced on most websites by a secure version, called HTTPS. However, there are still many
websites that use the insecure HTTP protocol. In the TCP/IP model, HTTP occurs at the application layer.
• Domain Name System (DNS) is a protocol that translates internet domain names into IP addresses. When a
client computer wishes to access a website domain using its internet browser, a query is sent to a dedicated
DNS server. The DNS server then looks up the IP address that corresponds to the website domain. DNS
normally uses UDP on port 53. However, if the DNS reply to a request is large, it will switch to using the TCP
protocol. In the TCP/IP model, DNS occurs at the application layer.
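To see the TCP handshake and UDP's connectionless behavior from code, here is a minimal sketch using Python's standard socket module; the hostname example.com is a reserved documentation domain, and the UDP address and port are placeholders.

import socket

# TCP: connect() performs the SYN -> SYN/ACK -> ACK handshake before data flows.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.settimeout(5)
    tcp_sock.connect(("example.com", 80))      # three-way handshake happens here
    tcp_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(tcp_sock.recv(64))                   # beginning of the server's reply

# UDP: no handshake; the datagram is sent immediately and may be silently lost.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"hello", ("198.51.100.10", 9999))  # placeholder host and port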
Management Protocols
The next category of network protocols is management protocols. Management protocols are used for monitoring and
managing activity on a network. They include protocols for error reporting and optimizing performance on the network.
• Simple Network Management Protocol (SNMP) is a network protocol used for monitoring and managing
devices on a network. SNMP can reset a password on a network device or change its baseline configuration.
It can also send requests to network devices for a report on how much of the network’s bandwidth is being
used up. In the TCP/IP model, SNMP occurs at the application layer.
• Internet Control Message Protocol (ICMP) is an internet protocol used by devices to tell each other about
data transmission errors across the network. ICMP is used by a receiving device to send a report to the
sending device about the data transmission. ICMP is commonly used as a quick way to troubleshoot network
connectivity and latency by issuing the “ping” command on a Linux operating system. In the TCP/IP model,
ICMP occurs at the internet layer.
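For example, a quick ICMP-based connectivity check can be run from Python by invoking the system's ping command (Linux syntax shown; the target address is a documentation placeholder):

import subprocess

# Send four ICMP echo requests and report whether the host replied.
result = subprocess.run(
    ["ping", "-c", "4", "198.51.100.1"],
    capture_output=True, text=True,
)
print(result.stdout)
print("reachable" if result.returncode == 0 else "no response")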
Security Protocols
Security protocols are network protocols that ensure that data is sent and received securely across a network.
Security protocols use encryption algorithms to protect data in transit. Below are some common security protocols.
• Hypertext Transfer Protocol Secure (HTTPS) is a network protocol that provides a secure method of
communication between clients and website servers. HTTPS is a secure version of HTTP that uses secure
sockets layer/transport layer security (SSL/TLS) encryption on all transmissions so that malicious actors
cannot read the information contained. HTTPS uses port 443. In the TCP/IP model, HTTPS occurs at the
application layer.
• Secure File Transfer Protocol (SFTP) is a secure protocol used to transfer files from one device to another
over a network. SFTP uses secure shell (SSH), typically through TCP port 22. SSH uses Advanced
Encryption Standard (AES) and other types of encryption to ensure that unintended recipients cannot
intercept the transmissions. In the TCP/IP model, SFTP occurs at the application layer. SFTP is often used
with cloud storage: when a user uploads or downloads a file from cloud storage, the file is commonly
transferred using the SFTP protocol.
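As one illustration, the third-party paramiko library can open an SFTP session over SSH; this is a hedged sketch with placeholder hostname, credentials, and file paths, not a production configuration.

import paramiko  # third-party library: pip install paramiko

# Placeholder host and credentials for illustration only.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use; verify host keys in production
client.connect("files.example.com", port=22, username="analyst", password="********")

sftp = client.open_sftp()
sftp.put("report.txt", "/uploads/report.txt")  # the file travels encrypted inside the SSH session
sftp.close()
client.close()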
Note: The encryption protocols mentioned do not conceal the source or destination IP address of network traffic. This
means a malicious actor can still learn some basic information about the network traffic if they intercept it.
The devices on your local home or office network each have a private IP address that they use to communicate
directly with each other. In order for the devices with private IP addresses to communicate with the public internet,
they need to have a public IP address. Otherwise, responses will not be routed correctly. Instead of having a
dedicated public IP address for each of the devices on the local network, the router can replace a private source IP
address with its public IP address and perform the reverse operation for responses. This process is known as Network
Address Translation (NAT) and it generally requires a router or firewall to be specifically configured to perform NAT.
NAT is a part of layer 2 (internet layer) and layer 3 (transport layer) of the TCP/IP model.
Dynamic Host Configuration Protocol (DHCP) is in the management family of network protocols. DHCP is an
application layer protocol used on a network to configure devices. It assigns a unique IP address and provides the
addresses of the appropriate DNS server and default gateway for each device. DHCP servers operate on UDP port 67
while DHCP clients operate on UDP port 68.
By now, you are familiar with IP and MAC addresses. You’ve learned that each device on a network has both an IP
address that identifies it on the network and a MAC address that is unique to that network interface. A device’s IP
address may change over time, but its MAC address is permanent. Address Resolution Protocol (ARP) is an internet
layer protocol in the TCP/IP model used to translate the IP addresses that are found in data packets into the MAC
address of the hardware device.
Each device on the network performs ARP and keeps track of matching IP and MAC addresses in an ARP cache.
ARP does not have a specific port number.
Telnet
Telnet is an application layer protocol that allows a device to communicate with another device or server. Telnet sends
all information in clear text. It uses command line prompts to control another device similar to secure shell (SSH), but
Telnet is not as secure as SSH. Telnet can be used to connect to local or remote devices and uses TCP port 23.
Secure shell
Secure shell protocol (SSH) is used to create a secure connection with a remote system. This application layer
protocol provides an alternative for secure authentication and encrypted communication. SSH operates over TCP port
22 and is a replacement for less secure protocols, such as Telnet.
Post office protocol (POP) is an application layer (layer 4 of the TCP/IP model) protocol used to manage and retrieve
email from a mail server. Many organizations have a dedicated mail server on the network that handles incoming and
outgoing mail for users on the network. User devices will send requests to the remote mail server and download email
messages locally. If you have ever refreshed your email application and had new emails populate in your inbox, you
are experiencing POP and internet message access protocol (IMAP) in action. Unencrypted, plaintext authentication
uses TCP/UDP port 110 and encrypted emails use Secure Sockets Layer/Transport Layer Security (SSL/TLS) over
TCP/UDP port 995. When using POP, mail has to finish downloading on a local device before it can be read and it
does not allow a user to sync emails.
IMAP is used for incoming email. It downloads the headers of emails, but not the content. The content remains on the
email server, which allows users to access their email from multiple devices. IMAP uses TCP port 143 for unencrypted
email and TCP port 993 over the TLS protocol. Using IMAP allows users to partially read email before it is finished
downloading and to sync emails. However, IMAP is slower than POP3.
Simple Mail Transfer Protocol (SMTP) is used to transmit and route email from the sender to the recipient’s address.
SMTP works with Message Transfer Agent (MTA) software, which searches DNS servers to resolve email addresses
to IP addresses, to ensure emails reach their intended destination. SMTP uses TCP/UDP port 25 for unencrypted
emails and TCP/UDP port 587 using TLS for encrypted emails. TCP port 25 is often used by high-volume spam.
SMTP helps to filter out spam by regulating how many emails a source can send at a time.
Remember that port numbers are used by network devices to determine what should be done with the information
contained in each data packet once they reach their destination. Firewalls can filter out unwanted traffic based on port
numbers. For example, an organization may configure a firewall to only allow access to TCP port 995 (POP3) by IP
addresses belonging to the organization.
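Port-based filtering can be pictured as a simple rule lookup. The following toy Python sketch (not any real firewall's API) models the POP3-over-TLS example above:

import ipaddress

# One allow rule: TCP port 995, only from the organization's example range.
ALLOW_RULES = {("tcp", 995): ipaddress.ip_network("198.51.100.0/24")}

def permit(protocol: str, port: int, source_ip: str) -> bool:
    """Return True only if the packet matches an allow rule."""
    allowed_network = ALLOW_RULES.get((protocol, port))
    return (allowed_network is not None
            and ipaddress.ip_address(source_ip) in allowed_network)

print(permit("tcp", 995, "198.51.100.42"))  # True: inside the organization's range
print(permit("tcp", 995, "203.0.113.7"))    # False: outside the range, so blocked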
Introduction to wireless communication protocols
Many people today refer to wireless internet as Wi-Fi. Wi-Fi refers to a set of standards that define communication for
wireless LANs. Wi-Fi is a marketing term commissioned by the Wireless Ethernet Compatibility Alliance (WECA),
which has since renamed itself the Wi-Fi Alliance.
Wi-Fi standards and protocols are based on the 802.11 family of internet communication standards determined by the
Institute of Electrical and Electronics Engineers (IEEE). So, as a security analyst, you might also see Wi-Fi referred to
as IEEE 802.11.
Wi-Fi communications are secured by wireless networking protocols. Wireless security protocols have evolved over
the years, helping to identify and resolve vulnerabilities with more advanced wireless technologies.
In this reading, you will learn about the evolution of wireless security protocols from WEP to WPA, WPA2, and WPA3.
You’ll also learn how the Wireless Application Protocol was used for mobile internet communications.
Wired equivalent privacy (WEP) is a wireless security protocol designed to provide users with the same level of
privacy on wireless network connections as they have on wired network connections. WEP was developed in 1999
and is the oldest of the wireless security standards.
WEP is largely out of use today, but security analysts should still understand WEP in case they encounter it. For
example, a network router might have used WEP as the default security protocol and the network administrator never
changed it. Or, devices on a network might be too old to support newer Wi-Fi security protocols. Nevertheless, a
malicious actor could potentially break the WEP encryption, so it’s now considered a high-risk security protocol.
Wi-Fi Protected Access (WPA) was developed in 2003 to improve upon WEP, address the security issues that it
presented, and replace it. WPA was always intended to be a transitional measure so backwards compatibility could be
established with older hardware.
The flaws with WEP were in the protocol itself and how the encryption was used. WPA addressed this weakness by
using a protocol called Temporal Key Integrity Protocol (TKIP). The WPA encryption algorithm uses larger secret keys
than WEP's, making it more difficult to guess the key by trial and error.
WPA also includes a message integrity check that includes a message authentication tag with each transmission. If a
malicious actor attempts to alter the transmission in any way or resend at another time, WPA’s message integrity
check will identify the attack and reject the transmission.
Despite the security improvements of WPA, it still has vulnerabilities. Malicious actors can use a key reinstallation
attack (or KRACK attack) to decrypt transmissions using WPA. Attackers can insert themselves in the WPA
authentication handshake process and insert a new encryption key instead of the dynamic one assigned by WPA. If
they set the new key to all zeros, it is as if the transmission is not encrypted at all.
Because of this significant vulnerability, WPA was replaced with an updated version of the protocol called WPA2.
WPA2
The second version of Wi-Fi Protected Access, known as WPA2, was released in 2004. WPA2 improves upon WPA
by using the Advanced Encryption Standard (AES). WPA2 also improves upon WPA's use of TKIP: WPA2 uses the
Counter Mode Cipher Block Chain Message Authentication Code Protocol (CCMP), which provides
encapsulation and ensures message authentication and integrity. Because of the strength of WPA2, it is considered
the security standard for all Wi-Fi transmissions today. WPA2, like its predecessor, is vulnerable to KRACK attacks.
This led to the development of WPA3 in 2018.
WPA2 personal mode is best suited for home networks for a variety of reasons. It is easy to implement, and initial
setup takes less time than it does for the enterprise version. The global passphrase for the WPA2 personal version
needs to be applied to each individual computer and access point in a network. This makes it ideal for home
networks, but unmanageable for organizations.
WPA2 enterprise mode works best for business applications. It provides the necessary security for wireless networks
in business settings. The initial setup is more complicated than WPA2 personal mode, but enterprise mode offers
individualized and centralized control over the Wi-Fi access to a business network. This means that network
administrators can grant or remove user access to a network at any time. Users never have access to encryption
keys, which prevents potential attackers from recovering network keys on individual computers.
WPA3 is a secure Wi-Fi protocol and is growing in usage as more WPA3 compatible devices are released. These are
the key differences between WPA2 and WPA3:
• WPA3 addresses the authentication handshake vulnerability to KRACK attacks, which is present in WPA2.
• WPA3 uses Simultaneous Authentication of Equals (SAE), a password-authenticated, cipher-key-sharing
agreement. This prevents attackers from downloading data from wireless network connections to their
systems to attempt to decode it.
• WPA3 has increased encryption to make passwords more secure by using 128-bit encryption, with WPA3-
Enterprise mode offering optional 192-bit encryption.
Overview of subnetting
Subnetting is the subdivision of a network into logical groups called subnets. It works like a network inside a network.
Subnetting divides up a network address range into smaller subnets within the network. These smaller subnets form
based on the IP addresses and network mask of the devices on the network. Subnetting allows a group of devices to
function as their own small network. This makes the network more efficient and can also be used to create security
zones. When devices on the same subnet communicate with each other, the switch keeps the transmissions on that
subnet, improving the speed and efficiency of the communications.
Classless Inter-Domain Routing (CIDR) is a method of assigning subnet masks to IP addresses to create a subnet.
Classless addressing replaces classful addressing. Classful addressing was used in the 1980s as a system of
grouping IP addresses into classes (Class A to Class E). Each class included a limited number of IP addresses, which
were depleted as the number of devices connecting to the internet outgrew the classful range in the 1990s. Classless
CIDR addressing expanded the number of available IPv4 addresses.
CIDR allows cybersecurity professionals to segment classful networks into smaller chunks. CIDR IP addresses are
formatted like IPv4 addresses, but they include a slash ("/") followed by a number at the end of the address. This
extra number is called the IP network prefix. For example, a regular IPv4 address uses the 198.51.100.0 format,
whereas a CIDR IP address would include the IP network prefix at the end of the address: 198.51.100.0/24. This
CIDR address encompasses all IP addresses between 198.51.100.0 and 198.51.100.255. The system of CIDR
addressing reduces the number of entries in routing tables and provides more available IP addresses within networks.
You can try converting CIDR to IPv4 addresses and vice versa through an online conversion tool, like
IPAddressGuide, for practice and to better understand this concept.
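You can also explore CIDR ranges with Python's standard ipaddress module; a small sketch using the /24 example above:

import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)           # 255.255.255.0
print(net.num_addresses)     # 256 addresses in the block
print(net[0], "-", net[-1])  # 198.51.100.0 - 198.51.100.255
print(ipaddress.ip_address("198.51.100.77") in net)  # True: inside the range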
Subnetting allows network professionals and analysts to create a network within their own network without requesting
another network IP address from their internet service provider. This process uses network bandwidth more efficiently
and improves network performance. Subnetting is one component of creating isolated subnetworks through physical
isolation, routing configuration, and firewalls.
Common network protocols
Network protocols are used to direct traffic to the correct device and service depending on the kind of communication
being performed by the devices on the network. Protocols are the rules used by all network devices that provide a
mutually agreed upon foundation for how to transfer data across a network.
There are three main categories of network protocols: communication protocols, management protocols, and security
protocols.
1. Communication protocols are used to establish connections between servers. Examples include TCP, UDP,
and Simple Mail Transfer Protocol (SMTP), which provides a framework for email communication.
2. Management protocols are used to troubleshoot network issues. One example is the Internet Control
Message Protocol (ICMP).
3. Security protocols provide encryption for data in transit. Examples include IPSec and SSL/TLS.
• HyperText Transfer Protocol (HTTP). HTTP is an application layer communication protocol that allows the
browser and the web server to communicate with one another.
• Domain Name System (DNS). DNS is an application layer protocol that translates, or maps, host names to IP
addresses.
• Address Resolution Protocol (ARP). ARP is a network layer communication protocol that maps IP addresses
to physical machines or a MAC address recognized on the local area network.
Wi-Fi
This section of the course also introduced various wireless security protocols, including WEP, WPA, WPA2, and
WPA3. WPA3 encrypts traffic with the Advanced Encryption Standard (AES) cipher as it travels from your device to
the wireless access point. WPA2 and WPA3 offer two modes: personal and enterprise. Personal mode is best suited
for home networks while enterprise mode is generally utilized for business networks and applications.
Firewalls
Previously, you learned that firewalls are network virtual appliances (NVAs) or hardware devices that inspect and can
filter network traffic before it's permitted to enter the private network. Traditional firewalls are configured with rules that
tell them what types of data packets are allowed, based on the port number and IP address of the data packet.
• Stateless: A class of firewall that operates based on predefined rules and does not keep track of information
from data packets
• Stateful: A class of firewall that keeps track of information passing through it and proactively filters out
threats. Unlike stateless firewalls, which require rules to be configured in two directions, a stateful firewall only
requires a rule in one direction. This is because it uses a "state table" to track connections, so it can match
return traffic to an existing session (a toy sketch of this idea follows below).
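To illustrate the state-table idea, here is a toy Python sketch, not a real firewall implementation: outbound connections are recorded, and inbound packets are allowed only if they match a recorded session.

# Toy state table keyed by (remote_ip, remote_port, local_port).
state_table = set()

def outbound(remote_ip, remote_port, local_port):
    """Record an outgoing connection so its replies can be matched later."""
    state_table.add((remote_ip, remote_port, local_port))

def inbound_allowed(remote_ip, remote_port, local_port):
    """Permit inbound traffic only if it belongs to a known session."""
    return (remote_ip, remote_port, local_port) in state_table

outbound("203.0.113.5", 443, 51000)                # local client opens a connection
print(inbound_allowed("203.0.113.5", 443, 51000))  # True: matches the session
print(inbound_allowed("198.51.100.9", 443, 51000)) # False: unsolicited traffic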
Next generation firewalls (NGFWs) are the most technologically advanced firewall protection. They exceed the
security offered by stateful firewalls because they include deep packet inspection (a kind of packet sniffing that
examines data packets and takes actions if threats exist) and intrusion prevention features that detect security threats
and notify firewall administrators. NGFWs can inspect traffic at the application layer of the TCP/IP model and are
typically application aware. Unlike traditional firewalls that block traffic based on IP address and ports, NGFW rules
can be configured to block or allow traffic based on the application. Some NGFWs have additional features like
malware sandboxing, network anti-virus, and URL and DNS filtering.
Proxy servers
A proxy server is another way to add security to your private network. Proxy servers utilize network address
translation (NAT) to serve as a barrier between clients on the network and external threats. Forward proxies handle
queries from internal clients when they access resources external to the network. Reverse proxies function opposite of
forward proxies; they handle requests from external systems to services on the internal network. Some proxy servers
can also be configured with rules, like a firewall. For example, you can create filters to block websites identified as
containing malware.
A VPN is a service that encrypts data in transit and disguises your IP address. VPNs use a process called
encapsulation. Encapsulation wraps your encrypted data in an unencrypted data packet, which allows your data to be
sent across the public network while remaining anonymous. Enterprises and other organizations use VPNs to help
protect communications from users’ devices to corporate resources. Some of these resources include connecting to
servers or virtual machines that host business applications. Individuals can also use VPNs to increase personal
privacy: a VPN allows the user to access the internet without anyone being able to read their personal information or
access their private IP address. Organizations are increasingly using a combination of VPN and SD-
WAN capabilities to secure their networks. A software-defined wide area network (SD-WAN) is a virtual WAN service
that allows organizations to securely connect users to applications across multiple locations and over large
geographical distances.
A brute force attack is a trial-and-error process of discovering private information. There are different types of brute
force attacks that malicious actors use to guess passwords, including:
• Simple brute force attacks. When attackers try to guess a user's login credentials, it’s considered a simple
brute force attack. They might do this by entering any combination of usernames and passwords that they can
think of until they find the one that works.
• Dictionary attacks use a similar technique. In dictionary attacks, attackers use a list of commonly used
passwords and stolen credentials from previous breaches to access a system. These are called “dictionary”
attacks because attackers originally used a list of words from the dictionary to guess the passwords, before
complex password rules became a common security practice.
Using brute force to access a system can be a tedious and time-consuming process, especially when it's done
manually. There is a range of tools attackers use to conduct their attacks.
Assessing vulnerabilities
Before a brute force attack or other cybersecurity incident occurs, companies can run a series of tests on their network
or web applications to assess vulnerabilities. Analysts can use virtual machines and sandboxes to test suspicious
files, check for vulnerabilities before an event occurs, or simulate a cybersecurity incident.
Virtual machines (VMs) are software versions of physical computers. VMs provide an additional layer of security for
an organization because they can be used to run code in an isolated environment, preventing malicious code from
affecting the rest of the computer or system. VMs can also be deleted and replaced by a pristine image after testing
malware. VMs are useful when investigating potentially infected machines or running malware in a constrained
environment. Using a VM may prevent damage to your system in the event its tools are used improperly. VMs also
give you the ability to revert to a previous state. However, VMs are not risk-free: there is a small chance that a
malicious program can escape virtualization and access the host machine.
Sandbox environments
A sandbox is a type of testing environment that allows you to execute software or programs separate from your
network. They are commonly used for testing patches, identifying and addressing bugs, or detecting cybersecurity
vulnerabilities. Sandboxes can also be used to evaluate suspicious software, evaluate files containing malicious code,
and simulate attack scenarios.
Sandboxes can be stand-alone physical computers that are not connected to a network; however, it is often more
time- and cost-effective to use software or cloud-based virtual machines as sandbox environments. Note that some
malware authors know how to write code to detect if the malware is executed in a VM or sandbox environment.
Attackers can program their malware to behave as harmless software when run inside these types of testing
environments.
Prevention measures
Some common measures organizations use to prevent brute force attacks and similar attacks from occurring include:
• Salting and hashing: Hashing converts information into a unique value that can then be used to determine
its integrity. It is a one-way function, meaning it cannot be reversed to obtain the original text. Salting adds
random characters to passwords before they are hashed. This increases the length and complexity of hash
values, making them more secure. (A short sketch follows this list.)
• Multi-factor authentication (MFA) and two-factor authentication (2FA): MFA is a security measure which
requires a user to verify their identity in two or more ways to access a system or network. This verification
happens using a combination of authentication factors: a username and password, fingerprints, facial
recognition, or a one-time password (OTP) sent to a phone number or email. 2FA is similar to MFA, except it
uses only two forms of verification.
• CAPTCHA and reCAPTCHA: CAPTCHA stands for Completely Automated Public Turing test to tell
Computers and Humans Apart. It asks users to complete a simple test that proves they are human. This helps
prevent software from trying to brute force a password. reCAPTCHA is a free CAPTCHA service from Google
that helps protect websites from bots and malicious software.
• Password policies: Organizations use password policies to standardize good password practices throughout
the business. Policies can include guidelines on how complex a password should be, how often users need to
update passwords, and if there are limits to how many times a user can attempt to log in before their account
is suspended.
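As a minimal sketch of how salting and hashing work together, the following example uses Python's standard hashlib and os modules; real systems should prefer a dedicated password-hashing function (such as scrypt or bcrypt), so treat this purely as an illustration of the concept.

import hashlib
import os

password = b"correct horse battery staple"  # example password

salt = os.urandom(16)                        # 16 random bytes of salt
digest = hashlib.sha256(salt + password).hexdigest()

# Store the salt alongside the digest. The same password with a different salt
# produces a different hash, which defeats precomputed lookup tables.
print(salt.hex(), digest)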
Firewall
So far in this course, you learned about stateless firewalls, stateful firewalls, and next-generation firewalls (NGFWs),
and the security advantages of each of them.
Most firewalls are similar in their basic functions. Firewalls allow or block traffic based on a set of rules. As data
packets enter a network, the packet header is inspected and allowed or denied based on its port number. NGFWs are
also able to inspect packet payloads. Each system should have its own firewall, regardless of the network firewall.
An intrusion detection system (IDS) is an application that monitors system activity and alerts on possible intrusions.
An IDS alerts administrators based on the signature of malicious traffic.
The IDS is configured to detect known attacks. IDS systems often sniff data packets as they move across the network
and analyze them for the characteristics of known attacks. Some IDS systems review not only for signatures of known
attacks, but also for anomalies that could be the sign of malicious activity. When the IDS discovers an anomaly, it
sends an alert to the network administrator who can then investigate further.
The limitations to IDS systems are that they can only scan for known attacks or obvious anomalies. New and
sophisticated attacks might not be caught. The other limitation is that the IDS doesn’t actually stop the incoming traffic
if it detects something awry. It’s up to the network administrator to catch the malicious activity before it does anything
damaging to the network.
When combined with a firewall, an IDS adds another layer of defense. The IDS is placed behind the firewall, before
traffic enters the LAN, which allows the IDS to analyze data streams after network traffic that is disallowed by the
firewall has been filtered out. This is done to reduce noise in IDS alerts, also referred to as false positives.
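Conceptually, signature-based detection is pattern matching against traffic. The following toy Python sketch scans packet payloads for hypothetical known-bad patterns and raises an alert; like a real IDS, it only reports and does not block.

# Hypothetical signature list: byte patterns associated with known attacks.
SIGNATURES = {
    b"' OR '1'='1": "SQL injection attempt",
    b"/etc/passwd": "path traversal attempt",
}

def inspect(payload: bytes) -> None:
    """Alert on matching signatures; an IDS only reports, it does not block."""
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT: {name} detected")

inspect(b"GET /login?user=admin' OR '1'='1 HTTP/1.1")  # triggers an alert
inspect(b"GET /index.html HTTP/1.1")                   # no alert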
Intrusion Prevention System
An intrusion prevention system (IPS) is an application that monitors system activity for intrusive activity and takes
action to stop the activity. It offers even more protection than an IDS because it actively stops anomalies when they
are detected, unlike the IDS that simply reports the anomaly to a network administrator.
An IPS searches for signatures of known attacks and data anomalies. An IPS reports the anomaly to security analysts
and blocks a specific sender or drops network packets that seem suspect.
The IPS (like an IDS) sits behind the firewall in the network architecture. This offers a high level of security because
risky data streams are disrupted before they even reach sensitive parts of the network. However, one potential
limitation is that it is inline: If it breaks, the connection between the private network and the internet breaks. Another
limitation of IPS is the possibility of false positives, which can result in legitimate traffic getting dropped.
A security information and event management system (SIEM) is an application that collects and analyzes log data
to monitor critical activities in an organization. SIEM tools work in real time to report suspicious activity in a centralized
dashboard. SIEM tools additionally analyze network log data sourced from IDSs, IPSs, firewalls, VPNs, proxies, and
DNS logs. SIEM tools are a way to aggregate security event data so that it all appears in one place for security
analysts to analyze. This is referred to as a single pane of glass.
One example is Google Cloud's SIEM tool, Chronicle. Chronicle is a cloud-native tool designed to retain, analyze,
and search data.
Splunk is another common SIEM tool. Splunk offers different SIEM tool options: Splunk Enterprise and Splunk Cloud.
Both options include detailed dashboards which help security professionals to review and analyze an organization's
data. There are also other similar SIEM tools available, and it's important for security professionals to research the
different tools to determine which one is most beneficial to the organization.
A SIEM tool doesn’t replace the expertise of security analysts or the network- and system-hardening activities
covered in this course; rather, it's used in combination with other security methods. Security analysts often work in a
Security Operations Center (SOC) where they can monitor the activity across the network. They can then use their
expertise and experience to determine how to respond to the information on the dashboard and decide when the
events meet the criteria to be escalated to oversight.
Many organizations choose to use cloud services because of the ease of deployment, speed of deployment, cost
savings, and scalability of these options. Cloud computing presents unique security challenges that cybersecurity
analysts need to be aware of.
Identity access management (IAM) is a collection of processes and technologies that helps organizations manage
digital identities in their environment. This service also authorizes how users can use different cloud resources. A
common problem that organizations face when using the cloud is the loose configuration of cloud user roles. An
improperly configured user role increases risk by allowing unauthorized users to have access to critical cloud
operations.
Configuration
The number of available cloud services adds complexity to the network. Each service must be carefully configured to
meet security and compliance requirements. This presents a particular challenge when organizations perform an initial
migration into the cloud. When this change occurs on their network, they must ensure that every process moved into
the cloud has been configured correctly. If network administrators and architects are not meticulous in correctly
configuring the organization’s cloud services, they could leave the network open to compromise. Misconfigured cloud
services are a common source of cloud security issues.
Attack surface
Cloud service providers (CSPs) offer numerous applications and services for organizations at a low cost. Every
service or application on a network carries its own set of risks and vulnerabilities and increases an organization’s
overall attack surface. An increased attack surface must be compensated for with increased security measures. Cloud
networks that utilize many services introduce lots of entry points into an organization’s network; however, if the
network is designed correctly, utilizing several services does not have to introduce more entry points into an
organization’s network design. Unsecured entry points can be used to introduce malware onto the network and pose
other security vulnerabilities. It is important to note that CSPs often defer to more secure options, and have undergone
more scrutiny than a traditional on-premises network.
Zero-day attacks
Zero-day attacks are an important security consideration for organizations using cloud or traditional on-premises
network solutions. A zero-day attack is an exploit that was previously unknown. CSPs are more likely to know about a
zero-day attack occurring before a traditional IT organization does. CSPs have ways of patching hypervisors and
migrating workloads to other virtual machines. These methods ensure the customers are not impacted by the attack.
There are also several tools available for patching at the operating system level that organizations can use.
Network administrators have access to every data packet crossing the network with both on-premise and cloud
networks. They can sniff and inspect data packets to learn about network performance or to check for possible threats
and attacks. This kind of visibility is also offered in the cloud through flow logs and tools, such as packet mirroring.
CSPs take responsibility for security in the cloud, but they do not allow the organizations that use their infrastructure to
monitor traffic on the CSP’s servers. Many CSPs offer strong security measures to protect their infrastructure. Still,
this situation might be a concern for organizations that are accustomed to having full access to their network and
operations. CSPs pay for third-party audits to verify how secure a cloud network is and identify potential
vulnerabilities. The audits can help organizations identify whether any vulnerabilities originate from on-premise
infrastructure and if there are any compliance lapses from their CSP.
A commonly accepted cloud security principle is the shared responsibility model. The shared responsibility model
states that the CSP must take responsibility for security involving the cloud infrastructure, including physical data
centers, hypervisors, and host operating systems. The company using the cloud service is responsible for the assets
and processes that they store or operate in the cloud.
The shared responsibility model ensures that both the CSP and the users agree about where their responsibility for
security begins and ends. A problem occurs when organizations assume that the CSP is taking care of security that
they have not taken responsibility for. One example of this is cloud applications and configurations. The CSP takes
responsibility for securing the cloud, but it is the organization’s responsibility to ensure that services are configured
properly according to the security requirements of their organization.
Understand risks, threats, and vulnerabilities
When security events occur, you’ll need to work in close coordination with others to address the problem. Doing so
quickly requires clear communication between you and your team to get the job done.
• Risk: Anything that can impact the confidentiality, integrity, or availability of an asset
• Threat: Any circumstance or event that can negatively impact assets
• Vulnerability: A weakness that can be exploited by a threat
These words tend to be used interchangeably in everyday life. But in security, they are used to describe very specific
concepts when responding to and planning for security events. In this reading, you’ll identify what each term
represents and how they are related.
Security risk
Security plans are all about how an organization defines risk. However, this definition can vary widely by organization.
As you may recall, a risk is anything that can impact the confidentiality, integrity, or availability of an asset. Since
organizations have particular assets that they value, they tend to differ in how they interpret and approach risk.
One way to interpret risk is to consider the potential effects that negative events can have on a business. Another way
to present this idea is with this calculation:
Likelihood x Impact = Risk
For example, you risk being late when you drive a car to work. This negative event is more likely to happen if you get
a flat tire along the way. And the impact could be serious, like losing your job. All these factors influence how you
approach commuting to work every day. The same is true for how businesses handle security risks.
The business impact of a negative event will always depend on the asset and the situation. As a security professional,
your primary focus will be on the likelihood side of the equation, dealing with the factors that increase the odds of a
problem.
Risk factors
As you’ll discover throughout this course, there are two broad risk factors that you’ll be concerned with in the field:
• Threats
• Vulnerabilities
The risk of an asset being harmed or damaged depends greatly on whether a threat takes advantage of
vulnerabilities.
Categories of threat
Threats are circumstances or events that can negatively impact assets. There are many different types of threats.
However, they are commonly categorized as two types: intentional and unintentional.
For example, an intentional threat might be a malicious hacker who gains access to sensitive information by targeting
a misconfigured application. An unintentional threat might be an employee who holds the door open for an unknown
person and grants them access to a restricted area. Either one can cause an event that must be responded to.
Categories of vulnerability
Vulnerabilities are weaknesses that can be exploited by threats. There’s a wide range of vulnerabilities, but they can
be grouped into two categories: technical and human.
For example, a technical vulnerability can be misconfigured software that might give an unauthorized person access
to important data. A human vulnerability can be a forgetful employee who loses their access card in a parking lot.
Either one can lead to risk.
Organizations often face an overwhelming amount of risk. Developing a security plan from the beginning that
addresses all risk can be challenging. This makes security frameworks a useful option.
Previously, you learned about the NIST Cybersecurity Framework (CSF). A major benefit of the CSF is that it's flexible
and can be applied to any industry. In this reading, you’ll explore how the NIST CSF can be implemented.
As you might recall, the framework consists of three main components: the core, tiers, and profiles. In the following
sections, you'll learn more about each of these CSF components.
Core
The CSF core is a set of desired cybersecurity outcomes that help organizations customize their security plan. It
consists of five functions, or parts: Identify, Protect, Detect, Respond, and Recover. These functions are commonly
used as an informative reference to help organizations identify their most important assets and protect those assets
with appropriate safeguards. The CSF core is also used to understand ways to detect attacks and develop response
and recovery plans should an attack happen.
Tiers
The CSF tiers are a way of measuring the sophistication of an organization's cybersecurity program. CSF tiers are
measured on a scale of 1 to 4. Tier 1 is the lowest score, indicating that a limited set of security controls have been
implemented. Overall, CSF tiers are used to assess an organization's security posture and identify areas for
improvement.
Profiles
The CSF profiles are pre-made templates of the NIST CSF that are developed by a team of industry experts. CSF
profiles are tailored to address the specific risks of an organization or industry. They are used to help organizations
develop a baseline for their cybersecurity plans, or as a way of comparing their current cybersecurity posture to a
specific industry standard.
Principle of least privilege
Security controls are essential to keeping sensitive data private and safe. One of the most common controls is the
principle of least privilege, also referred to as PoLP or least privilege. The principle of least privilege is a security
concept in which a user is only granted the minimum level of access and authorization required to complete a task or
function.
Least privilege is a fundamental security control that supports the confidentiality, integrity, and availability (CIA) triad of
information. In this reading, you'll learn how the principle of least privilege reduces risk, how it's commonly
implemented, and why it should be routinely audited.
Every business needs to plan for the risk of data theft, misuse, or abuse. Implementing the principle of least privilege
can greatly reduce the risk of costly incidents like data breaches.
Least privilege greatly reduces the likelihood of a successful attack by connecting specific resources to specific users
and placing limits on what they can do. It's an important security control that should be applied to any asset. Clearly
defining who or what your users are is usually the first step of implementing least privilege effectively.
To implement least privilege, access and authorization must be determined first. There are two questions to ask to do
so:
• Who is the user?
• How much access do they need to a specific resource?
Identifying the user is usually straightforward. A user can refer to a person, like a customer, an employee, or a
vendor. It can also refer to a device or software that's connected to your business network. In general, every user
should have their own account. Accounts are typically stored and managed within an organization's directory service.
• Guest accounts are provided to external users who need to access an internal network, like customers,
clients, contractors, or business partners.
• User accounts are assigned to staff based on their job duties.
• Service accounts are granted to applications or software that needs to interact with other software on the
network.
• Privileged accounts have elevated permissions or administrative access.
It's best practice to determine a baseline access level for each account type before implementing least privilege.
However, the appropriate access level can change from one moment to the next. For example, a customer support
representative should only have access to your information while they are helping you. Your data should then become
inaccessible when the support agent starts working with another customer and they are no longer actively assisting
you. Least privilege can only reduce risk if user accounts are routinely and consistently monitored.
Setting up the right user accounts and assigning them the appropriate privileges is a helpful first step. Periodically
auditing those accounts is a key part of keeping your company’s systems secure.
• Usage audits
• Privilege audits
• Account change audits
As a security professional, you might be involved with any of these processes.
Usage audits
When conducting a usage audit, the security team will review which resources each account is accessing and what
the user is doing with the resource. Usage audits can help determine whether users are acting in accordance with an
organization’s security policies. They can also help identify whether a user has permissions that can be revoked
because they are no longer being used.
Privilege audits
Users tend to accumulate more access privileges than they need over time, an issue known as privilege creep. This
might occur if an employee receives a promotion or switches teams and their job duties change. Privilege audits
assess whether a user's role is in alignment with the resources they have access to.
Account change audits
Account directory services keep records and logs associated with each user. Changes to an account are usually
saved and can be used to audit the directory for suspicious activity, like multiple attempts to change an account
password. Performing account change audits helps to ensure that all account changes are made by authorized users.
The data lifecycle is an important model that security teams consider when protecting information. It influences how
they set policies that align with business objectives. It also plays an important role in the technologies security teams
use to make information accessible.
In general, the data lifecycle has five stages. Each stage describes how data flows through an organization from the
moment it is created until it is no longer useful:
• Collect
• Store
• Use
• Archive
• Destroy
Protecting information at each stage of this process means keeping it accessible and recoverable should something
go wrong.
Data governance
Businesses handle massive amounts of data every day. New information is constantly being collected from internal
and external sources. A structured approach to managing all of this data is the best way to keep it private and secure.
Data governance is a set of processes that define how an organization manages information. Governance often
includes policies that specify how to keep data private, accurate, available, and secure throughout its lifecycle.
Effective data governance is a collaborative activity that relies on people. Data governance policies commonly
categorize individuals into a specific role:
• Data owner: the person that decides who can access, edit, use, or destroy their information.
• Data custodian: anyone or anything that's responsible for the safe handling, transport, and storage of
information.
• Data steward: the person or group that maintains and implements data governance policies set by an
organization.
Most security plans include a specific policy that outlines how information will be managed across an organization.
This is known as a data governance policy. These documents clearly define procedures that should be followed to
participate in keeping data safe. They place limits on who or what can access data. Security professionals are
important participants in data governance. As a data custodian, you will be responsible for ensuring that data isn’t
damaged, stolen, or misused.
Data is more than just a bunch of 1s and 0s being processed by a computer. Data can represent someone's personal
thoughts, actions, and choices. It can represent a purchase, a sensitive medical decision, and everything in between.
For this reason, data owners should be the ones deciding whether or not to share their data. As a security
professional, you must always respect a person's data privacy decisions.
Securing data can be challenging. In large part, that's because data owners generate more data than they can
manage. As a result, data custodians and stewards sometimes lack direct, explicit instructions on how they should
handle specific types of data. Governments and other regulatory agencies have bridged this gap by creating rules that
specify the types of information that organizations must protect by default:
• Personally identifiable information (PII) is any information used to infer an individual's identity, or that can be
used to contact or locate someone.
• PHI stands for protected health information. In the U.S., it is regulated by the Health Insurance Portability and
Accountability Act (HIPAA), which defines PHI as “information that relates to the past, present, or future
physical or mental health or condition of an individual.” In the EU, PHI has a similar definition but it is
regulated by the General Data Protection Regulation (GDPR).
• SPII is a specific type of PII that falls under stricter handling guidelines. The S stands for sensitive, meaning
this is a type of personally identifiable information that should only be accessed on a need-to-know basis,
such as a bank account number or login credentials.
Security and privacy are two terms that often get used interchangeably outside of this field. Although the two concepts
are connected, they represent specific functions:
• Information privacy refers to the protection of data from unauthorized access and distribution.
• Information security (InfoSec) refers to the practice of keeping data in all states away from unauthorized
users.
The key difference: Privacy is about providing people with control over their personal information and how it's shared.
Security is about protecting people’s choices and keeping their information safe from potential threats. For example, a
retail company might want to collect specific kinds of personal information about its customers for marketing purposes,
like their age, gender, and location. How this private information will be used should be disclosed to customers before
it's collected. In addition, customers should be given an option to opt-out if they decide not to share their data. Once
the company obtains consent to collect personal information, it might put specific security controls in place to protect
that private data from unauthorized access, use, or disclosure. The company should also have security controls in
place to respect the privacy of all stakeholders and anyone who chose to opt out.
Why privacy matters in security
Data privacy and protection are topics that started gaining a lot of attention in the late 1990s. At that time, tech
companies suddenly went from processing people’s data to storing and using it for business purposes. For example, if
a user searched for a product online, companies began storing and sharing access to information about that user’s
search history with other companies. Businesses were then able to deliver personalized shopping experiences to the
user for free.
Eventually this practice led to a global conversation about whether these organizations had the right to collect and
share someone’s private data. Additionally, the issue of data security became a greater concern; the more
organizations collected data, the more vulnerable it was to being abused, misused, or stolen.
Many organizations became more concerned about the issues of data privacy. Businesses became more transparent
about how they were collecting, storing, and using information. They also began implementing more security
measures to protect people's data privacy. However, without clear rules in place, protections were inconsistently
applied.
Note: The more data is collected, stored, and used, the more vulnerable it is to breaches and threats.
Businesses are required to abide by certain laws to operate. As you might recall, regulations are rules set by a
government or another authority to control the way something is done. Privacy regulations in particular exist to protect
a user from having their information collected, used, or shared without their consent. Regulations may also describe
the security measures that need to be in place to keep private information away from threats.
Some of the most influential industry regulations that every security professional should know about are:
GDPR
GDPR is a set of rules and regulations developed by the European Union (EU) that puts data owners in total control of
their personal information. Under GDPR, types of personal information include a person's name, address, phone
number, financial information, and medical information.
The GDPR applies to any business that handles the data of EU citizens or residents, regardless of where that
business operates. For example, a U.S.-based company that handles the data of EU visitors to its website is subject
to the GDPR's provisions.
PCI DSS
PCI DSS is a set of security standards formed by major organizations in the financial industry. This regulation aims to
secure credit and debit card transactions against data theft and fraud.
HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) is a U.S. federal law established in 1996 to protect patients' health information. It prohibits patients' information from being shared without their consent.
Businesses should comply with important regulations in their industry. Doing so validates that they have met a minimum level of security while also demonstrating their dedication to maintaining data privacy.
Meeting compliance standards is usually a continual, two-part process of security audits and assessments:
• A security audit is a review of an organization's security controls, policies, and procedures against a set of
expectations.
• A security assessment is a check to determine how resilient current security implementations are against
threats.
Types of encryption
• Symmetric encryption is the use of a single secret key to exchange information. Because it uses one key for
encryption and decryption, the sender and receiver must know the secret key to lock or unlock the cipher.
• Asymmetric encryption is the use of a public and private key pair for encryption and decryption of data. It
uses two separate keys: a public key and a private key. The public key is used to encrypt data, and the private
key decrypts it. The private key is only given to users with authorized access.
Ciphers are vulnerable to brute force attacks, which use a trial-and-error process to discover private information. This tactic is the digital equivalent of trying every number in a combination lock until the right one is found. In modern encryption, longer key lengths are considered more secure: longer keys mean more possibilities that an attacker needs to try to unlock a cipher.
One drawback to having long encryption keys is slower processing times. Although short key lengths are generally
less secure, they’re much faster to compute. Providing fast data communication online while keeping information safe
is a delicate balancing act.
Approved algorithms
Many web applications use a combination of symmetric and asymmetric encryption. This is how they balance user
experience with safeguarding information. As an analyst, you should be aware of the most widely-used algorithms.
Symmetric algorithms
• Triple DES (3DES) is known as a block cipher because of the way it converts plaintext into ciphertext in
“blocks.” Its origins trace back to the Data Encryption Standard (DES), which was developed in the early
1970s. DES was one of the earliest symmetric encryption algorithms that generated 64-bit keys. A bit is the
smallest unit of data measurement on a computer. As you might imagine, Triple DES generates keys that are
192 bits, or three times as long. Despite the longer keys, many organizations are moving away from using
Triple DES due to limitations on the amount of data that can be encrypted. However, Triple DES is likely to
remain in use for backwards compatibility purposes.
• Advanced Encryption Standard (AES) is one of the most secure symmetric algorithms today. AES generates
keys that are 128, 192, or 256 bits. Cryptographic keys of this size are considered to be safe from brute force
attacks. It’s estimated that brute forcing an AES 128-bit key could take a modern computer billions of years!
Asymmetric algorithms
• Rivest Shamir Adleman (RSA) is named after its three creators who developed it while at the Massachusetts
Institute of Technology (MIT). RSA is one of the first asymmetric encryption algorithms that produces a public
and private key pair. Asymmetric algorithms like RSA produce even longer key lengths. In part, this is due to
the fact that these functions are creating two keys. RSA key sizes are 1,024, 2,048, or 4,096 bits. RSA is
mainly used to protect highly sensitive data.
• Digital Signature Algorithm (DSA) is a standard asymmetric algorithm that was introduced by NIST in the early
1990s. DSA also generates key lengths of 2,048 bits. This algorithm is widely used today as a complement to
RSA in public key infrastructure.
Generating keys
These algorithms must be implemented when an organization chooses one to protect its data. One way this is done is with OpenSSL, an open-source command line tool that can be used to generate public and private keys. OpenSSL is commonly used by computers to verify digital certificates that are exchanged as part of public key infrastructure.
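To make these ideas concrete, here is a minimal sketch in Python, assuming the third-party cryptography package; the message contents and variable names are illustrative, not a definitive implementation:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: one secret key both encrypts and decrypts.
secret_key = Fernet.generate_key()  # Fernet is an AES-based cipher
cipher = Fernet(secret_key)
token = cipher.encrypt(b"account number: 1234")
assert cipher.decrypt(token) == b"account number: 1234"

# Asymmetric: the public key encrypts; only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"account number: 1234", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"account number: 1234"

Note that generating and using the RSA key pair is noticeably slower than the symmetric operations, which is one reason web applications combine the two approaches.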
Origins of hashing
Hash functions have been around since the early days of computing. They were originally created as a way to quickly
search for data. Since the beginning, these algorithms have been designed to represent data of any size as small,
fixed-size values, or digests. Using a hash table, which is a data structure that's used to store and reference hash
values, these small values became a more secure and efficient way for computers to reference data.
One of the earliest hash functions is Message Digest 5, more commonly known as MD5. Professor Ronald Rivest of
the Massachusetts Institute of Technology (MIT) developed MD5 in the early 1990s as a way to verify that a file sent
over a network matched its source file.
Whether it's used to convert a single email or the source code of an application, MD5 works by converting data into a 128-bit value. You might recall that a bit is the smallest unit of data measurement on a computer; each bit is either a 0 or a 1. A 128-bit MD5 digest is normally displayed as a string of 32 hexadecimal characters. Altering anything in the source file generates an entirely new hash value.
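A quick sketch using Python's built-in hashlib module shows this behavior; the input strings are arbitrary:

import hashlib

# Two inputs that differ by a single character produce unrelated digests.
print(hashlib.md5(b"hello world").hexdigest())   # 32 hexadecimal characters
print(hashlib.md5(b"hello world!").hexdigest())  # a completely different value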
Generally, the longer the hash value, the more secure it is. It wasn’t long after MD5's creation that security
practitioners discovered 128-bit digests resulted in a major vulnerability.
Hash collisions
One of the flaws in MD5 happens to be a characteristic of all hash functions. Hash algorithms map any input, regardless of its length, into a fixed-size value of letters and numbers. What's the problem with that? Although there is an infinite number of possible inputs, there's only a finite set of available outputs!
MD5 values are limited to 32 characters in length. Due to the limited output size, the algorithm is considered to be
vulnerable to hash collision, an instance when different inputs produce the same hash value. Because hashes are
used for authentication, a hash collision is similar to copying someone’s identity. Attackers can carry out collision
attacks to fraudulently impersonate authentic data.
Next-generation hashing
To avoid the risk of hash collisions, functions that generated longer values were needed. MD5's shortcomings gave
way to a new group of functions known as the Secure Hashing Algorithms, or SHAs. The National Institute of
Standards and Technology (NIST) approves each of these algorithms. The number beside each SHA function indicates the size of its hash value in bits. Except for SHA-1, which produces a 160-bit digest, these algorithms are considered to be collision-resistant. However, that doesn't make them invulnerable to other exploits.
• SHA-1
• SHA-224
• SHA-256
• SHA-384
• SHA-512
Hashing is widely used to store passwords: instead of saving a password in plaintext, a system saves its hash value. This is a safe system unless an attacker gains access to the user database. If passwords are stored in plaintext, an attacker can steal that information and use it to access company resources. Hashing adds an additional layer of security: because hash values can't be reversed, an attacker who managed to gain access to the database would not be able to steal login credentials directly.
Rainbow tables
A rainbow table is a file of pre-generated hash values and their associated plaintext. They're like dictionaries of weak passwords. Attackers capable of obtaining an organization's password database can compare its hashed entries against the pre-generated values in a rainbow table, recovering the original passwords for any matches.
Functions with larger digests are less vulnerable to collision and rainbow table attacks. But as you’re learning, no
security control is perfect. Salting is an additional safeguard that's used to strengthen hash functions. A salt is a
random string of characters that's added to data before it's hashed. The additional characters produce a more unique
hash value, making salted data resilient to rainbow table attacks. For example, a database containing passwords
might have several hashed entries for the password "password." If those passwords were all salted, each entry would
be completely different. That means an attacker using a rainbow table would be unable to find matching values for
"password" in the database.
For this reason, salting has become increasingly common when storing passwords and other types of sensitive data.
The length and uniqueness of a salt is important. Similar to hash values, the longer and more complex a salt is, the
harder it is to crack.
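Here is a minimal sketch of salting with Python's built-in hashlib and os modules; the password value is illustrative:

import hashlib
import os

password = b"password"

# Without a salt, identical passwords always produce identical digests.
print(hashlib.sha256(password).hexdigest())

# With a random salt, every stored entry becomes unique. The salt is kept
# alongside the hash so the value can be recomputed when the user logs in.
salt = os.urandom(16)
print(hashlib.sha256(salt + password).hexdigest())

In practice, dedicated password-hashing functions are preferred over a single round of SHA-256; the sketch only shows why a random salt defeats pre-generated tables.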
Security is more than simply combining processes and technologies to protect assets. Instead, security is about
ensuring that these processes and technologies are creating a secure environment that supports a defense strategy.
A key to doing this is implementing two fundamental security principles that limit access to organizational resources:
• The principle of least privilege in which a user is only granted the minimum level of access and
authorization required to complete a task or function.
• Separation of duties, which is the principle that users should not be given levels of authorization that would
allow them to misuse a system.
Both principles typically support each other. For example, according to least privilege, a person who needs permission
to approve purchases from the IT department shouldn't have the permission to approve purchases from every
department. Likewise, according to separation of duties, the person who can approve purchases from the IT
department should be different from the person who can input new purchases. In other words, least privilege limits the
access that an individual receives, while separation of duties divides responsibilities among multiple people to prevent
any one person from having too much control. Previously, you learned about the authentication, authorization, and
accounting (AAA) framework. Many businesses used this model to implement these two security principles and
manage user access. In this reading, you’ll learn about the other major framework for managing user access, identity
and access management (IAM). You will learn about the similarities between AAA and IAM and how they're commonly
implemented.
Identity and access management (IAM)
As organizations become more reliant on technology, regulatory agencies have put more pressure on them to
demonstrate that they’re doing everything they can to prevent threats. Identity and access management (IAM) is a
collection of processes and technologies that helps organizations manage digital identities in their environment. Both
AAA and IAM systems are designed to authenticate users, determine their access privileges, and track their activities
within a system.
Whichever model your organization uses, it is more than a single, clearly defined system. Each consists of a collection of security controls that ensure the right user is granted access to the right resources at the right time and for the right reasons. Each of those four factors is determined by your organization's policies and processes.
Authenticating users
Authentication is mainly verified with login credentials. Single sign-on (SSO), a technology that combines several
different logins into one, and multi-factor authentication (MFA), a security measure that requires a user to verify
their identity in two or more ways to access a system or network, are other tools that organizations use to authenticate
individuals and systems.
User provisioning
Back-end systems need to be able to verify whether the information provided by a user is accurate. To accomplish
this, users must be properly provisioned. User provisioning is the process of creating and maintaining a user's digital
identity. For example, a college might create a new user account when a new instructor is hired. The new account will
be configured to provide access to instructor-only resources while they are teaching. Security analysts are routinely
involved with provisioning users and their access privileges.
Granting authorization
If the right user has been authenticated, the network should ensure the right resources are made available. There are
three common frameworks that organizations use to handle this step of IAM:
Mandatory access control (MAC) is the strictest of the three frameworks. Authorization in this model is based on a strict need-to-know basis. Access to information must be granted manually by a central authority or system administrator. For example, MAC is commonly applied in law enforcement, military, and other government agencies where users must request access through a chain of command. MAC is also known as non-discretionary control because access isn't given at the discretion of the data owner.
Discretionary access control (DAC) is typically applied when a data owner decides appropriate levels of access. One example of DAC is when the owner of a Google Drive folder shares editor, viewer, or commenter access with someone else.
Role-based access control (RBAC) is used when authorization is determined by a user's role within an organization. For example, a user in the marketing department may have access to user analytics but not network administration.
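As a minimal illustration of the RBAC idea in Python (the role and permission names below are hypothetical):

# Hypothetical mapping of organizational roles to permissions.
ROLE_PERMISSIONS = {
    "marketing": {"view_user_analytics"},
    "network_admin": {"view_user_analytics", "modify_network_config"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("marketing", "modify_network_config"))      # False
print(is_authorized("network_admin", "modify_network_config"))  # True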
Users often experience authentication and authorization as a single, seamless experience. In large part, that’s due to
access control technologies that are configured to work together. These tools offer the speed and automation needed
by administrators to monitor and modify access rights. They also decrease errors and potential risks.
An organization's IT department sometimes develops and maintains customized access control technologies on their
own. A typical IAM or AAA system consists of a user directory, a set of tools for managing data in that directory, an
authorization system, and an auditing system. Some organizations create custom systems to tailor them to their
security needs. However, building an in-house solution comes at a steep cost of time and other resources.
Instead, many organizations opt to license third-party solutions that offer a suite of tools that enable them to quickly
secure their information systems. Keep in mind, security is about more than combining a bunch of tools. It’s always
important to configure these technologies so they can help to provide a secure environment.
What is OWASP?
OWASP is a nonprofit foundation that works to improve the security of software. OWASP is an open platform that
security professionals from around the world use to share information, tools, and events that are focused on securing
the web.
Common vulnerabilities
Businesses often make critical security decisions based on the vulnerabilities listed in the OWASP Top 10. This
resource influences how businesses design new software that will be on their network, unlike the CVE® list, which
helps them identify improvements to existing programs. The following are the vulnerabilities that most regularly appear in those rankings and that you should know about:
Broken access control
Access controls limit what users can do in a web application. For example, a blog might allow visitors to post
comments on a recent article but restricts them from deleting the article entirely. Failures in these mechanisms can
lead to unauthorized information disclosure, modification, or destruction. They can also give someone unauthorized
access to other business applications.
Cryptographic failures
Information is one of the most important assets businesses need to protect. Privacy laws such as General Data
Protection Regulation (GDPR) require sensitive data to be protected by effective encryption methods. Vulnerabilities
can occur when businesses fail to encrypt things like personally identifiable information (PII). For example, if a web
application uses a weak hashing algorithm, like MD5, it’s more at risk of suffering a data breach.
Injection
Injection occurs when malicious code is inserted into a vulnerable application. Although the app appears to work
normally, it does things that it wasn’t intended to do. Injection attacks can give threat actors a backdoor into an
organization’s information system. A common target is a website’s login form. When these forms are vulnerable to
injection, attackers can insert malicious code that gives them access to modify or steal user credentials.
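The sketch below uses Python's built-in sqlite3 module to show a classic SQL injection against a login-style query, along with the parameterized alternative that defends against it; the table and values are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # attacker-supplied form value

# Vulnerable: untrusted input is concatenated into the query string.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns every row in the table

# Safer: a parameterized query treats the input as data, not code.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # returns no rows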
Insecure design
Applications should be designed in such a way that makes them resilient to attack. When they aren’t, they’re much
more vulnerable to threats like injection attacks or malware infections. Insecure design refers to a wide range of
missing or poorly implemented security controls that should have been programmed into an application when it was
being developed.
Security misconfiguration
Misconfigurations occur when security settings aren’t properly set or maintained. Companies use a variety of different
interconnected systems. Mistakes often happen when those systems aren’t properly set up or audited. A common
example is when businesses deploy equipment, like a network server, using default settings. This can lead
businesses to use settings that fail to address the organization's security objectives.
Vulnerable and outdated components
Vulnerable and outdated components is a category that mainly relates to application development. Instead of coding everything from scratch, most developers use open-source libraries to complete their projects faster and more easily. This
publicly available software is maintained by communities of programmers on a volunteer basis. Applications that use
vulnerable components that have not been maintained are at greater risk of being exploited by threat actors.
Identification and authentication failures
Identification is the keyword in this vulnerability category. When applications fail to recognize who should have access
and what they’re authorized to do, it can lead to serious problems. For example, a home Wi-Fi router normally uses a
simple login form to keep unwanted guests off the network. If this defense fails, an attacker can invade the
homeowner’s privacy.
Software and data integrity failures
Software and data integrity failures are instances when updates or patches are inadequately reviewed before
implementation. Attackers might exploit these weaknesses to deliver malicious software. When that occurs, there can
be serious downstream effects. Third parties are likely to become infected if a single system is compromised, an event
known as a supply chain attack.
Server-side request forgery
Companies have public and private information stored on web servers. When you use a hyperlink or click a button on a website, a request is sent to a server that should validate who you are, fetch the appropriate data, and then return it to you. A server-side request forgery (SSRF) happens when attackers manipulate the normal operations of a server to read or update other resources on that server.
At some point in time, you may have wondered, “Why do my devices constantly need updating?” For consumers,
updates provide improvements to performance, stability, and even new features! But from a security standpoint, they
serve a specific purpose. Updates allow organizations to address security vulnerabilities that can place their users,
devices, and networks at risk.
An outdated computer is a lot like a house with unlocked doors. Malicious actors use these gaps in security the same
way, to gain unauthorized access. Software updates are similar to locking the doors to keep them out. A patch
update is a software and operating system update that addresses security vulnerabilities within a program or product.
Patches usually contain bug fixes that address common security vulnerabilities and exposures.
When software updates become available, clients and users have two installation options:
• Manual updates
• Automatic updates
Manual updates
A manual deployment strategy relies on IT departments or users obtaining updates from the developers. Home office
or small business environments might require you to find, download, and install updates yourself. In enterprise
settings, the process is usually handled with a configuration management tool. These tools offer a range of options to
deploy updates, like to all clients on your network or a select group of users.
Advantage: An advantage of manual update deployment strategies is control. That can be useful when software updates have not been thoroughly tested by developers, since untested patches can lead to instability issues.
Disadvantage: A drawback to manual update deployments is that critical updates can be forgotten or disregarded
entirely.
Automatic updates
An automatic deployment strategy takes the opposite approach. With this option, finding, downloading, and installing
updates can be done by the system or application. Certain permissions need to be enabled by users or IT groups
before updates can be installed, or pushed, when they're available. It is up to the developers to adequately test their
patches before release.
Advantage: An advantage to automatic updates is that the deployment process is simplified. It also keeps systems
and software current with the latest, critical patches.
Disadvantage: A drawback to automatic updates is that instability issues can occur if the patches were not thoroughly
tested by the vendor. This can result in performance problems and a poor user experience.
End-of-life software
Sometimes updates are not available for a certain type of software known as end-of-life (EOL) software. All software
has a lifecycle. It begins when it’s produced and ends when a newer version is released. At that point, developers
must allocate resources to the newer versions, which leads to EOL software. While the older software is still useful,
the manufacturer no longer supports it.
Penetration testing
An effective security plan relies on regular testing to find an organization's weaknesses. Previously, you learned that
vulnerability assessments, the internal review process of an organization's security systems, are used to design
defense strategies based on system weaknesses. In this reading, you'll learn how security teams evaluate the
effectiveness of their defenses using penetration testing.
Penetration testing
A penetration test, or pen test, is a simulated attack that helps identify vulnerabilities in systems, networks, websites,
applications, and processes. The simulated attack in a pen test involves using the same tools and techniques as
malicious actors in order to mimic a real life attack. Since a pen test is an authorized attack, it is considered to be a
form of ethical hacking. Unlike a vulnerability assessment that finds weaknesses in a system's security, a pen test
exploits those weaknesses to determine the potential consequences if the system breaks or gets broken into by a
threat actor. For example, the cybersecurity team at a financial company might simulate an attack on their banking
app to determine if there are weaknesses that would allow an attacker to steal customer information or illegally
transfer funds. If the pen test uncovers misconfigurations, the team can address them and improve the overall security
of the app.
Note: Organizations that are regulated by PCI DSS, HIPAA, or GDPR must routinely perform penetration testing to
maintain compliance standards.
These authorized attacks are performed by pen testers who are skilled in programming and network architecture.
Depending on their objectives, organizations might use a few different approaches to penetration testing:
• Red team tests simulate attacks to identify vulnerabilities in systems, networks, or applications.
• Blue team tests focus on defense and incident response to validate an organization's existing security
systems.
• Purple team tests are collaborative, focusing on improving the security posture of the organization by
combining elements of red and blue team exercises.
Red team tests are commonly performed by independent pen testers who are hired to evaluate internal systems. However, cybersecurity teams may also have their own pen testing experts. Regardless of the approach, penetration
testers must make an important decision before simulating an attack: How much access and information do I need?
• Open-box testing is when the tester has the same privileged access that an internal developer would have—
information like system architecture, data flow, and network diagrams. This strategy goes by several different
names, including internal, full knowledge, white-box, and clear-box penetration testing.
• Closed-box testing is when the tester has little to no access to internal systems—similar to a malicious
hacker. This strategy is sometimes referred to as external, black-box, or zero knowledge penetration testing.
• Partial knowledge testing is when the tester has limited access and knowledge of an internal system—for
example, a customer service representative. This strategy is also known as gray-box testing.
Closed-box testers tend to produce the most accurate simulations of a real-world attack. Nevertheless, each strategy
produces valuable results by demonstrating how an attacker might infiltrate a system and what information they could
access.
Penetration testers are in demand in the fast-growing field of cybersecurity. All of the skills you’re learning in this
program can help you advance towards a career in pen testing:
• Network and application security
• Experience with operating systems, like Linux
• Vulnerability analysis and threat modeling
• Detection and response tools
• Programming languages, like Python and BASH
• Communication skills
Programming skills are very helpful in penetration testing because it's often performed on software and IT systems.
With enough practice and dedication, cybersecurity professionals at any level can develop the skills needed to be a
pen tester.
Threat actors
A threat actor is any person or group who presents a security risk. This broad definition refers to people inside and
outside an organization. It also includes individuals who intentionally pose a threat, and those that accidentally put
assets at risk. That’s a wide range of people!
Threat actors are normally divided into five categories based on their motivations:
• Competitors refers to rival companies who pose a threat because they might benefit from leaked information.
• State actors are government intelligence agencies.
• Criminal syndicates refer to organized groups of people who make money from criminal activity.
• Insider threats can be any individual who has or had authorized access to an organization’s resources. This
includes employees who accidentally compromise assets or individuals who purposefully put them at risk for
their own benefit.
• Shadow IT refers to individuals who use technologies that lack IT governance. A common example is when
an employee uses their personal email to send work-related communications.
In the digital attack surface, these threat actors often gain unauthorized access by hacking into systems. By definition,
a hacker is any person who uses computers to gain access to computer systems, networks, or data. Similar to the
term threat actor, hacker is also an umbrella term. When used alone, the term fails to capture a threat actor’s
intentions.
Types of hackers
Because the formal definition of a hacker is broad, the term can be a bit ambiguous. In security, it applies to three
types of individuals based on their intent:
1. Unauthorized hackers
2. Authorized, or ethical, hackers
3. Semi-authorized hackers
An unauthorized hacker, or unethical hacker, is an individual who uses their programming skills to commit crimes.
Unauthorized hackers are also known as malicious hackers. Skill level ranges widely among this category of hacker.
For example, there are hackers with limited skills who can’t write their own malicious software, sometimes called script
kiddies. Unauthorized hackers like this carry out attacks using pre-written code that they obtain from other, more
skilled hackers. Authorized, or ethical, hackers refer to individuals who use their programming skills to improve an
organization's overall security. These include internal members of a security team who are concerned with testing and
evaluating systems to secure the attack surface. They also include external security vendors and freelance hackers
that some companies incentivize to find and report vulnerabilities, a practice called bug bounty programs. Semi-
authorized hackers typically refer to individuals who might violate ethical standards, but are not considered malicious.
For example, a hacktivist is a person who might use their skills to achieve a political goal. A hacktivist might exploit security vulnerabilities of a public utility company to spread awareness that those vulnerabilities exist. The intentions of these types of threat actors are often to expose security risks that should be addressed before a malicious hacker finds them.
An advanced persistent threat (APT) refers to instances when a threat actor maintains unauthorized access to a
system for an extended period of time. The term is mostly associated with nation states and state-sponsored actors.
Typically, an APT is concerned with surveilling a target to gather information. They then use the intel to manipulate
government, defense, financial, and telecom services.
Access points
Each threat actor has a unique motivation for targeting an organization's assets. Keeping them out takes more than
knowing their intentions and capabilities. It’s also important to recognize the types of attack vectors they’ll use.
For the most part, threat actors gain access through one of these attack vector categories:
• Direct access, referring to instances when they have physical access to a system
• Removable media, which includes portable hardware, like USB flash drives
• Social media platforms that are used for communication and content sharing
• Email, including both personal and business accounts
• Wireless networks on premises
• Cloud services usually provided by third-party organizations
• Supply chains like third-party vendors that can present a backdoor into systems
Any of these attack vectors can provide access to a system. Recognizing a threat actor’s intentions can help you
determine which access points they might target and what ultimate goals they could have. For example, remote
workers are more likely to present a threat via email than a direct access threat.
Usernames and passwords are one of the most common and important security controls in use today. They’re like the
door lock that organizations use to restrict access to their networks, services, and data. But a major issue with relying
on login credentials as a critical line of defense is that they’re vulnerable to being stolen and guessed by attackers.
One way of opening a closed lock is trying as many combinations as possible. Threat actors sometimes use similar
tactics to gain access to an application or a network.
• Simple brute force attacks are an approach in which attackers guess a user's login credentials. They might do
this by entering any combination of username and password that they can think of until they find the one that
works.
• Dictionary attacks are a similar technique except in these instances attackers use a list of commonly used
credentials to access a system. This list is similar to matching a definition to a word in a dictionary.
• Reverse brute force attacks are similar to dictionary attacks, except they start with a single credential and try it
in various systems until a match is found.
• Credential stuffing is a tactic in which attackers use stolen login credentials from previous data breaches to
access user accounts at another organization. A specialized type of credential stuffing is called pass the hash.
These attacks reuse stolen, unsalted hashed credentials to trick an authentication system into creating a new
authenticated user session on the network.
Note: Besides access credentials, encrypted information can sometimes be brute forced using a technique known as
exhaustive key search.
A single set of login credentials can be formed from an enormous number of combinations of letters, numbers, and symbols. When done manually, it could take someone years to try every possible combination. Instead of dedicating the time to do this, attackers often use software to do the guesswork for them. These are some common brute forcing tools:
• Aircrack-ng
• Hashcat
• John the Ripper
• Ophcrack
• THC Hydra
Sometimes, security professionals use these tools to test and analyze their own systems. They each serve different
purposes. For example, you might use Aircrack-ng to test a Wi-Fi network for vulnerabilities to brute force attack.
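To see why such tools succeed against weak, unsalted hashes, here is a minimal dictionary-attack sketch in Python; the stolen digest and wordlist are hypothetical stand-ins for the much larger inputs real tools use:

import hashlib

# Hypothetical stolen, unsalted MD5 digest of a user's password.
stolen_hash = hashlib.md5(b"letmein").hexdigest()

# A tiny stand-in for a real wordlist of commonly used passwords.
wordlist = ["123456", "password", "qwerty", "letmein"]

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == stolen_hash:
        print("Match found:", candidate)
        break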
Prevention measures
Organizations defend against brute force attacks with a combination of technical and managerial controls. Each makes a successful brute force attack less likely:
Technologies, like multi-factor authentication (MFA), reinforce each login attempt by requiring a second or third form of
identification. Other important tools are CAPTCHA and effective password policies.
Hashing converts information into a unique value that can then be used to determine its integrity. Salting is an
additional safeguard that’s used to strengthen hash functions. It works by adding random characters to data, like
passwords. This increases the length and complexity of hash values, making them harder to brute force and less
susceptible to dictionary attacks.
Multi-factor authentication (MFA) is a security measure that requires a user to verify their identity in two or more
ways to access a system or network. MFA is a layered approach to protecting information. MFA limits the chances of
brute force attacks because unauthorized users are unlikely to meet each authentication requirement even if one
credential becomes compromised.
CAPTCHA
CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is known as a
challenge-response authentication system. CAPTCHA asks users to complete a simple test that proves they are
human and not software that’s trying to brute force a password. There are two types of CAPTCHA tests. One
scrambles and distorts a randomly generated sequence of letters and/or numbers and asks users to enter them into a
text box. The other test asks users to match images to a randomly generated word. You’ve likely had to pass a
CAPTCHA test when accessing a web service that contains sensitive information, like an online bank account.
Password policy
Organizations use these managerial controls to standardize good password practices across their business. For
example, one of these policies might require users to create passwords that are at least 8 characters long and feature
a letter, number, and symbol. Other common requirements include password lockout policies, which limit the number of failed login attempts allowed before access to an account is suspended, and rules that require users to create new, unique passwords after a certain amount of time.
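A minimal sketch of how such a policy might be checked in Python; the rules mirror the example above, and the function name is hypothetical:

import re

def meets_policy(password: str) -> bool:
    """Require at least 8 characters with a letter, a number, and a symbol."""
    return (len(password) >= 8
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("password"))     # False: no number or symbol
print(meets_policy("Tr0ub4dor&3"))  # True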
Signs of an attack
• Baiting is a social engineering tactic that tempts people into compromising their security. A common example
is USB baiting that relies on someone finding an infected USB drive and plugging it into their device.
• Phishing is the use of digital communications to trick people into revealing sensitive data or deploying
malicious software. It is one of the most common forms of social engineering, typically performed via email.
• Quid pro quo is a type of baiting used to trick someone into believing that they’ll be rewarded in return for
sharing access, information, or money. For example, an attacker might impersonate a loan officer at a bank
and call customers offering them a lower interest rate on their credit card. They'll tell the customers that they
simply need to provide their account details to claim the deal.
• Tailgating is a social engineering tactic in which unauthorized people follow an authorized person into a
restricted area. This technique is also sometimes referred to as piggybacking.
• Watering hole is a type of attack when a threat actor compromises a website frequently visited by a specific
group of users. Oftentimes, these watering hole sites are infected with malicious software. An example is the
Holy Water attack of 2020 that infected various religious, charity, and volunteer websites.
Encouraging caution
Spreading awareness usually starts with comprehensive security training. When it comes to social engineering, there
are three main areas to focus on when teaching others:
• Stay alert of suspicious communications and unknown people, especially when it comes to email. For
example, look out for spelling errors and double-check the sender's name and email address.
• Be cautious about sharing information, especially over social media. Threat actors often search these
platforms for any information they can use to their advantage.
• Control curiosity when something seems too good to be true. This can include wanting to click on
attachments or links in emails and advertisements.
Phishing has been around since the early days of the internet. It can be traced back to the 1990s. At the time, people
across the world were coming online for the first time. As the internet became more accessible it began to attract the
attention of malicious actors. These malicious actors realized that the internet gave them a level of anonymity to
commit their crimes.
One of the earliest instances of phishing was aimed at a popular chat service called AOL Instant Messenger (AIM).
Users of the service began receiving emails asking them to verify their accounts or provide personal billing
information. The users were unaware that these messages were sent by malicious actors pretending to be service
providers. This was one of the first examples of mass phishing, which describes attacks that send malicious emails
out to a large number of people, increasing the likelihood of baiting someone into the trap. During the AIM attacks,
malicious actors carefully crafted emails that appeared to come directly from AOL. The messages used official logos,
colors, and fonts to trick unsuspecting users into sharing their information and account details. Attackers used the
stolen information to create fraudulent AOL accounts they could use to carry out other crimes anonymously. AOL was
forced to adapt their security policies to address these threats. The chat service began including messages on their
platforms to warn users about phishing attacks.
Phishing continued evolving at the turn of the century as businesses and newer technologies began entering the
digital landscape. In the early 2000s, e-commerce and online payment systems started to become popular alternatives
to traditional marketplaces. The introduction of online transactions presented new opportunities for attackers to
commit crimes. A number of techniques began to appear around this time period, many of which are still used today.
There are five common types of phishing that every security analyst should know:
• Email phishing is a type of attack sent via email in which threat actors send messages pretending to be a
trusted person or entity.
• Smishing is a type of phishing that uses Short Message Service (SMS), a technology that powers text
messaging. Smishing covers all forms of text messaging services, including Apple’s iMessages, WhatsApp,
and other chat mediums on phones.
• Vishing refers to the use of voice calls or voice messages to trick targets into providing personal information
over the phone.
• Spear phishing is a subset of email phishing in which specific people are purposefully targeted, such as the
accountants of a small business.
• Whaling refers to a category of spear phishing attempts that are aimed at high-ranking executives in an
organization.
Since the early days of phishing, email attacks have remained the most common type in use. While they were originally used to trick people into sharing access credentials and credit card information, email phishing also became a popular method to infect computer systems and networks with malicious software.
Virus
A virus is malicious code written to interfere with computer operations and cause damage to data and software. This type of malware must be installed by the target user before it can spread itself and cause damage. One of the many ways that viruses are spread is through phishing campaigns, where malicious code is hidden within links or attachments.
Worm
A worm is malware that can duplicate and spread itself across systems on its own. Similar to a virus, a worm must be
installed by the target user and can also be spread with tactics like malicious email. Given a worm's ability to spread
on its own, attackers sometimes target devices, drives, or files that have shared access over a network. A well known
example is the Blaster worm, also known as Lovesan, Lovsan, or MSBlast. In the early 2000s, this worm spread itself
on computers running Windows XP and Windows 2000 operating systems. It would force devices into a continuous
loop of shutting down and restarting. Although it did not damage the infected devices, it was able to spread itself to
hundreds of thousands of users around the world. Many variants of the Blaster worm have been deployed since the
original and can infect modern computers.
Note: Worms were very popular attacks in the mid 2000s but are less frequently used in recent years.
Trojan
A trojan, also called a Trojan horse, is malware that looks like a legitimate file or program. This characteristic relates
to how trojans are spread. Similar to viruses, attackers deliver this type of malware hidden in file and application
downloads. Attackers rely on tricking unsuspecting users into believing they’re downloading a harmless file, when
they’re actually infecting their own device with malware that can be used to spy on them, grant access to other
devices, and more.
Adware
Advertising-supported software, or adware, is a type of legitimate software that is sometimes used to display digital
advertisements in applications. Software developers often use adware as a way to lower their production costs or to
make their products free to the public—also known as freeware or shareware. In these instances, developers
monetize their product through ad revenue rather than at the expense of their users.
Malicious adware falls into a sub-category of malware known as a potentially unwanted application (PUA). A PUA
is a type of unwanted software that is bundled in with legitimate programs which might display ads, cause device
slowdown, or install other software. Attackers sometimes hide this type of malware in freeware with insecure design to
monetize ads for themselves instead of the developer. This works even when the user has declined to receive ads.
Spyware
Spyware is malware that's used to gather and sell information without consent. It's also considered a PUA. Spyware
is commonly hidden in bundleware, additional software that is sometimes packaged with other applications. PUAs like
spyware have become a serious challenge in the open-source software development ecosystem. That’s because
developers tend to overlook how their software could be misused or abused by others.
Scareware
Another type of PUA is scareware. This type of malware employs tactics to frighten users into infecting their own
device. Scareware tricks users by displaying fake warnings that appear to come from legitimate companies. Email and
pop-ups are just a couple of ways scareware is spread. Both can be used to deliver phony warnings with false claims
about the user's files or data being at risk.
Fileless malware
Fileless malware does not need to be installed by the user because it uses legitimate programs that are already
installed to infect a computer. This type of infection resides in memory where the malware never touches the hard
drive. This is unlike the other types of malware, which are stored within a file on disk. Instead, these stealthy infections
get into the operating system or hide within trusted applications.
Pro tip: Fileless malware is detected by performing memory analysis, which requires experience with operating
systems.
Rootkits
A rootkit is malware that provides remote, administrative access to a computer. Most attackers use rootkits to open a
backdoor to systems, allowing them to install other forms of malware or to conduct network security attacks.
This kind of malware is often spread by a combination of two components: a dropper and a loader. A dropper is a
type of malware that comes packed with malicious code which is delivered and installed onto a target system. For
example, a dropper is often disguised as a legitimate file, such as a document, an image, or an executable to deceive
its target into opening, or dropping it, onto their device. If the user opens the dropper program, its malicious code is
executed and it hides itself on the target system.
Multi-staged malware attacks, where multiple packets of malicious code are deployed, commonly use a variation called a loader. A loader is a type of malware that downloads strains of malicious code from an external source and installs them onto a target system. Attackers might use loaders for different purposes, such as to set up another type of malware: a botnet.
Botnet
A botnet, short for “robot network,” is a collection of computers infected by malware that are under the control of a
single threat actor, known as the “bot-herder.” Viruses, worms, and trojans are often used to spread the initial infection
and turn the devices into a bot for the bot-herder. The attacker then uses file sharing, email, or social media
application protocols to create new bots and grow the botnet. When a target unknowingly opens the malicious file, the
computer, or bot, reports the information back to the bot-herder, who can execute commands on the infected
computer.
Ransomware
Ransomware describes a malicious attack where threat actors encrypt an organization's data and demand payment to
restore access. According to the Cybersecurity and Infrastructure Security Agency (CISA), ransomware crimes are on
the rise and becoming increasingly sophisticated. Ransomware infections can cause significant damage to an
organization and its customers. An example is the WannaCry attack, which encrypts a victim’s computer until a ransom is paid in cryptocurrency.
Defending the application layer requires proper testing to uncover weaknesses that can lead to risk. Threat modeling
is one of the primary ways to ensure that an application meets security requirements. A DevSecOps team, which
stands for development, security, and operations, usually performs these analyses.
Common frameworks
When performing threat modeling, there are multiple methods that can be used, such as:
• STRIDE
• PASTA
• Trike
• VAST
Organizations might use any one of these to gather intelligence and make decisions to improve their security posture.
Ultimately, the “right” model depends on the situation and the types of risks an application might face.
STRIDE
STRIDE is a threat-modeling framework developed by Microsoft. It’s commonly used to identify vulnerabilities in six
specific attack vectors. The acronym represents each of these vectors: spoofing, tampering, repudiation, information
disclosure, denial of service, and elevation of privilege.
PASTA
The Process of Attack Simulation and Threat Analysis (PASTA) is a risk-centric threat modeling process
developed by two OWASP leaders and supported by a cybersecurity firm called VerSprite. Its main focus is to
discover evidence of viable threats and represent this information as a model. PASTA's evidence-based design can
be applied when threat modeling an application or the environment that supports that application. Its seven-stage process consists of various activities that incorporate relevant security artifacts of the environment, like vulnerability assessment reports.
Trike
Trike is an open source methodology and tool that takes a security-centric approach to threat modeling. It's commonly
used to focus on security permissions, application use cases, privilege models, and other elements that support a
secure environment.
VAST
The Visual, Agile, and Simple Threat (VAST) Modeling framework is part of an automated threat-modeling platform
called ThreatModeler®. Many security teams opt to use VAST as a way of automating and streamlining their threat
modeling assessments.
Threat modeling is often performed by experienced security professionals, but it’s almost never done alone. This is
especially true when it comes to securing applications. Programs are complex systems responsible for handling a lot
of data and processing a variety of commands from users and other systems.
One of the keys to threat modeling is asking the right questions, such as:
• What are we working on?
• What kinds of things can go wrong?
• What are we doing about it?
• Did we do a good enough job?
A computer security incident response team (CSIRT) is a specialized group of security professionals that are
trained in incident management and response. During incident response, teams can encounter a variety of different
challenges. For incident response to be effective and efficient, there must be clear command, control, and
communication of the situation to achieve the desired goal.
• Command refers to having the appropriate leadership and direction to oversee the response.
• Control refers to the ability to manage technical aspects during incident response, like coordinating resources
and assigning tasks.
• Communication refers to the ability to keep stakeholders informed.
Establishing a CSIRT organizational structure with clear and distinctive roles aids in achieving an effective and
efficient response.
Roles in CSIRTs
CSIRTs are organization dependent, so they can vary in their structure and operation. Structurally, they can exist as a
separate, dedicated team or as a task force that meets when necessary. CSIRTs involve both nonsecurity and
security professionals. Nonsecurity professionals are often consulted to offer their expertise on the incident. These
professionals can be from external departments, such as human resources, public relations, management, IT, legal,
and others. Security professionals involved in a CSIRT typically include three key security related roles:
1. Security analyst
2. Technical lead
3. Incident coordinator
Security analyst
The job of the security analyst is to continuously monitor an environment for any security threats. This includes analyzing and triaging alerts and investigating their root causes.
If a critical threat is identified, then analysts escalate it to the appropriate team lead, such as the technical lead.
Technical lead
The job of the technical lead is to manage all of the technical aspects of the incident response process, such as
applying software patches or updates. They do this by first determining the root cause of the incident. Then, they
create and implement the strategies for containing, eradicating, and recovering from the incident. Technical leads
often collaborate with other teams to ensure their incident response priorities align with business priorities, such as
reducing disruptions for customers or returning to normal operations.
Incident coordinator
Responding to an incident also requires cross-collaboration with nonsecurity professionals. CSIRTs will often consult
with and leverage the expertise of members from external departments. The job of the incident coordinator is to
coordinate with the relevant departments during a security incident. By doing so, the lines of communication are open
and clear, and all personnel are made aware of the incident status. Incident coordinators can also be found in other
teams, like the SOC.
Other roles
Depending on the organization, many other roles can be found in a CSIRT, including a dedicated communications
lead, a legal lead, a planning lead, and more.
Note: Teams, roles, responsibilities, and organizational structures can differ for each company. For example, some
different job titles for incident coordinator include incident commander and incident manager.
A security operations center (SOC) is an organizational unit dedicated to monitoring networks, systems, and
devices for security threats or attacks. Structurally, a SOC (usually pronounced "sock") often exists as its own
separate unit or within a CSIRT. You may be familiar with the term blue team, which refers to the security
professionals who are responsible for defending against all security threats and attacks at an organization. A SOC is
involved in various types of blue team activities, such as network monitoring, analysis, and response to incidents.
SOC organization
A SOC is composed of SOC analysts, SOC leads, and SOC managers. Each role has its own respective
responsibilities. SOC analysts are grouped into three different tiers.
The first tier is composed of the least experienced SOC analysts, known as level 1s (L1s). They are responsible for monitoring and triaging incoming alerts and escalating or resolving them depending on their severity.
The second tier comprises the more experienced SOC analysts, or level 2s (L2s). They are responsible for investigating alerts escalated by L1s and for configuring and refining the security tools the SOC relies on.
The third tier of a SOC is composed of the SOC leads, or level 3s (L3s). These highly experienced professionals are responsible for managing the operations of their team and exploring advanced methods of detection, such as malware and forensic analysis.
The SOC manager is at the top of the pyramid and is responsible for hiring, training, and evaluating the SOC team, creating performance metrics, and communicating findings to stakeholders.
Other roles
• Forensic investigators: Forensic investigators are commonly L2s and L3s who collect, preserve, and
analyze digital evidence related to security incidents to determine what happened.
• Threat hunters: Threat hunters are typically L3s who work to detect, analyze, and defend against new and
advanced cybersecurity threats using threat intelligence.
An intrusion detection system (IDS) is an application that monitors system activity and alerts on possible intrusions.
An IDS provides continuous monitoring of network events to help protect against security threats or attacks. The goal
of an IDS is to detect potential malicious activity and generate an alert once such activity is detected. An IDS does not
stop or prevent the activity. Instead, security professionals will investigate the alert and act to stop it, if necessary.
For example, an IDS can send out an alert when it identifies a suspicious user login, such as an unknown IP address
logging into an application or a device at an unusual time. But, an IDS will not stop or prevent any further actions, like
blocking the suspicious user login. Examples of IDS tools include Zeek, Suricata, Snort®, and Sagan.
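As a toy illustration of that kind of rule, consider the Python sketch below. Real IDS tools such as Snort or Suricata use dedicated rule languages; the addresses and hours here are hypothetical:

# Hypothetical allowlist of expected source addresses and business hours.
KNOWN_IPS = {"203.0.113.10", "203.0.113.11"}
BUSINESS_HOURS = range(9, 17)  # 9 a.m. up to 5 p.m.

def check_login(source_ip: str, hour: int) -> None:
    """Alert on logins from unknown addresses or at unusual times."""
    if source_ip not in KNOWN_IPS or hour not in BUSINESS_HOURS:
        print(f"ALERT: suspicious login from {source_ip} at hour {hour}")

check_login("198.51.100.7", 3)  # unknown IP at 3 a.m. triggers an alert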
Detection categories
As a security analyst, you will investigate alerts that an IDS generates. There are four types of detection categories you should be familiar with:
• A true positive is an alert that correctly detects the presence of an attack.
• A true negative is a state where there is no detection of malicious activity and no malicious activity has occurred.
• A false positive is an alert that incorrectly detects the presence of a threat.
• A false negative is a state where the presence of a threat is not detected, even though malicious activity has occurred.
An intrusion prevention system (IPS) is an application that monitors system activity for intrusive activity and takes action to stop the activity. An IPS works similarly to an IDS: both monitor system activity to detect and alert on intrusions, but an IPS also takes action to prevent the activity and minimize its effects. For example, an IPS can send an alert and modify an access control list on a router to block specific traffic on a server.
Endpoint detection and response (EDR) is an application that monitors an endpoint for malicious activity. EDR tools
are installed on endpoints. Remember that an endpoint is any device connected on a network. Examples include
end-user devices, like computers, phones, tablets, and more.
EDR tools monitor, record, and analyze endpoint system activity to identify, alert, and respond to suspicious activity.
Unlike IDS or IPS tools, EDRs collect endpoint activity data and perform behavioral analysis to identify threat patterns
happening on an endpoint. Behavioral analysis uses the power of machine learning and artificial intelligence to
analyze system behavior to identify malicious or unusual activity. EDR tools also use automation to stop attacks without the manual intervention of security professionals. For example, if an EDR detects an unusual process starting up on a user’s workstation, one that is not normally run, it can automatically block that process from running.
Monitor your network
Once you’ve determined a baseline, you can monitor a network to identify any deviations from that baseline.
Monitoring involves examining network components to detect unusual activities, such as large and unusual data
transfers. Here are examples of network components that can be monitored to detect malicious activity:
Flow analysis
Flow refers to the movement of network communications and includes information related to packets, protocols, and
ports. Packets can travel to ports, which receive and transmit communications. Ports are often, but not always,
associated with network protocols. For example, port 443 is commonly used by HTTPS which is a protocol that
provides website traffic encryption.
However, malicious actors can use protocols and ports that are not commonly associated with each other to maintain communications between the compromised system and their own machine. These communications are known as command and control (C2): the techniques used by malicious actors to maintain communications with compromised systems.
Packet payload information
Network packets contain components related to the transmission of the packet, including details like the source and destination IP addresses and the packet payload information, which is the actual data that’s transmitted. Often, this data is encrypted and requires decryption to be readable. Organizations can monitor the payload information of packets to uncover unusual activity, such as sensitive data being transmitted outside of the network, which could indicate a possible data exfiltration attack. A small sketch of these flow- and payload-level checks follows.
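The sketch below illustrates these checks under simplified assumptions: flows are represented as plain records, and the common-port set and exfiltration threshold are hypothetical values an analyst would derive from their own baseline.

    COMMON_PORTS = {22, 53, 80, 443}        # hypothetical expected destination ports
    EXFIL_THRESHOLD = 500_000_000           # bytes out; an assumed baseline limit

    flows = [                               # simplified flow records
        {"dst_ip": "198.51.100.7", "dst_port": 8081, "bytes_out": 7_000_000_000},
        {"dst_ip": "203.0.113.10", "dst_port": 443, "bytes_out": 120_000},
    ]

    for flow in flows:
        if flow["dst_port"] not in COMMON_PORTS:
            print(f"Uncommon port {flow['dst_port']} to {flow['dst_ip']}: possible C2")
        if flow["bytes_out"] > EXFIL_THRESHOLD:
            print(f"Large outbound transfer to {flow['dst_ip']}: possible exfiltration")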
Temporal patterns
Network packets contain information relating to time. This information is useful for understanding time patterns. For example, a company operating in North America experiences bulk traffic flows between 9 a.m. and 5 p.m., which is the baseline of normal network activity. If large volumes of traffic suddenly occur outside of those normal hours, the activity is considered off baseline and should be investigated. Through network monitoring, organizations can promptly detect network intrusions and work to prevent them by securing network components.
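A minimal sketch of that off-baseline check, assuming the 9 a.m. to 5 p.m. baseline from the example above:

    from datetime import datetime

    BASELINE_HOURS = range(9, 17)   # 9 a.m. to 5 p.m., the baseline from the example

    def is_off_baseline(event_time: datetime) -> bool:
        """Flag network activity that occurs outside normal business hours."""
        return event_time.hour not in BASELINE_HOURS

    print(is_off_baseline(datetime(2024, 1, 15, 2, 45)))   # True: 2:45 a.m. traffic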
In this program, you’ve learned about security operations centers (SOC) and their role in monitoring systems
against security threats and attacks. Organizations may deploy a network operations center (NOC), which is an
organizational unit that monitors the performance of a network and responds to any network disruption, such as a
network outage. While a SOC is focused on maintaining the security of an organization through detection and
response, a NOC is responsible for maintaining network performance, availability, and uptime. Security analysts monitor networks to identify any signs of potential security incidents, known as indicators of compromise (IoCs), and
protect networks from threats or attacks. To do this, they must understand the environment that network
communications travel through so that they can identify deviations in network traffic.
Network monitoring can be automated or performed manually. Common network monitoring tools include:
• Intrusion detection systems (IDS) monitor system activity and alert on possible intrusions. An IDS will detect and alert on the deviations you’ve configured it to detect. Most commonly, IDS tools monitor the content of packet payloads to detect patterns associated with threats such as malware or phishing attempts.
• Network protocol analyzers, also known as packet sniffers, are tools designed to capture and analyze data traffic within a network. They can be used to analyze network communications manually and in detail. Examples include tools such as tcpdump and Wireshark, which security professionals can use to record network communications through packet captures. Packet captures can then be investigated to identify potentially malicious activity, as in the sketch after this list.
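As a rough example of inspecting a packet capture programmatically, the sketch below summarizes the top source IP addresses in a capture file. It assumes the third-party scapy package is installed and that a capture file (here, the hypothetical capture.pcap) was recorded with a tool like tcpdump or Wireshark.

    # Requires the third-party scapy package (pip install scapy).
    from collections import Counter
    from scapy.all import rdpcap, IP

    packets = rdpcap("capture.pcap")            # hypothetical capture file
    talkers = Counter(
        pkt[IP].src for pkt in packets if pkt.haslayer(IP)
    )
    for src, count in talkers.most_common(5):   # top talkers by packet count
        print(src, count)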
Indicators of compromise
Indicators of compromise (IoCs) are observable evidence that suggests signs of a potential security incident. IoCs
chart specific pieces of evidence that are associated with an attack, like a file name associated with a type of malware.
You can think of an IoC as evidence that points to something that’s already happened, like noticing that a valuable item has been stolen from inside a car.
Indicators of attack (IoA) are the series of observed events that indicate a real-time incident. IoAs focus on
identifying the behavioral evidence of an attacker, including their methods and intentions.
Pyramid of Pain
Not all indicators of compromise are equal in the value they provide to security teams. It’s important for security
professionals to understand the different types of indicators of compromise so that they can quickly and effectively
detect and respond to them. This is why security researcher David J. Bianco created the concept of the Pyramid of
Pain, with the goal of improving how indicators of compromise are used in incident detection.
The Pyramid of Pain captures the relationship between indicators of compromise and the level of difficulty that malicious actors experience when those indicators are blocked by security teams. It lists the different types of indicators of compromise that security professionals use to identify malicious activity and assigns each type a level of difficulty. These levels represent the “pain” an attacker faces when security teams block the activity associated with that indicator of compromise. For example, blocking an IP address associated with a malicious actor is labeled as easy because malicious actors can simply switch to different IP addresses and continue their efforts. The higher up the pyramid a blocked indicator sits, the more difficult it becomes for attackers to continue their attacks. Here’s a breakdown of the different types of indicators of compromise found in the Pyramid of Pain, followed by a small sketch that prioritizes indicators by pain level.
1. Hash values: Hashes that correspond to known malicious files. These are often used to provide unique
references to specific samples of malware or to files involved in an intrusion.
2. IP addresses: An internet protocol address like 192.168.1.1
3. Domain names: A web address such as www.google.com
4. Network artifacts: Observable evidence created by malicious actors on a network. For example, information
found in network protocols such as User-Agent strings.
5. Host artifacts: Observable evidence created by malicious actors on a host. A host is any device that’s connected to a network. For example, the name of a file created by malware.
6. Tools: Software that’s used by a malicious actor to achieve their goal. For example, attackers can use password cracking tools like John the Ripper to perform password attacks and gain access to an account.
7. Tactics, techniques, and procedures (TTPs): This is the behavior of a malicious actor. Tactics refer to the
high-level overview of the behavior. Techniques provide detailed descriptions of the behavior relating to the
tactic. Procedures are highly detailed descriptions of the technique. TTPs are the hardest to detect.
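One way to apply the pyramid in practice is to review indicators in order of the pain their blocking causes attackers. The sketch below uses an illustrative numeric ranking of the seven types; the numbers are a convenient ordering for this example, not part of Bianco’s model.

    # Illustrative pain levels, 1 (easiest for attackers to change) to 7 (hardest).
    PAIN_LEVEL = {
        "hash value": 1,
        "IP address": 2,
        "domain name": 3,
        "network artifact": 4,
        "host artifact": 5,
        "tool": 6,
        "TTP": 7,
    }

    indicators = [("IP address", "198.51.100.7"), ("TTP", "credential dumping")]
    # Review and block the highest-pain indicators first.
    for kind, value in sorted(indicators, key=lambda i: PAIN_LEVEL[i[0]], reverse=True):
        print(f"{kind}: {value} (pain level {PAIN_LEVEL[kind]})")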
Triage process
Incidents have the potential to cause significant damage to an organization. Security teams must respond quickly and efficiently to prevent or limit the impact of an incident before it’s too late. Triage is the prioritizing of
incidents according to their level of importance or urgency. The triage process helps security teams evaluate and
prioritize security alerts and allocate resources effectively so that the most critical issues are addressed first.
The triage process consists of three steps: receive and assess, assign priority, and collect and analyze.
Receive and assess
During this first step of the triage process, a security analyst receives an alert from an alerting system like an
intrusion detection system (IDS). You might recall that an IDS is an application that monitors system activity and
alerts on possible intrusions. The analyst then reviews the alert to verify its validity and ensure they have a complete
understanding of the alert.
This involves gathering as much information as possible about the alert, including details about the activity that triggered it, the systems and assets involved, and more. Here are some questions to consider when verifying the validity of an alert (a small assessment sketch follows these questions):
• Is the alert a false positive? Security analysts must determine whether the alert is a genuine security concern or a false positive, which is an alert that incorrectly detects the presence of a threat.
• Was this alert triggered in the past? If so, how was it resolved? The history of an alert can help determine
whether the alert is a new or recurring issue.
• Is the alert triggered by a known vulnerability? If an alert is triggered by a known vulnerability, security
analysts can leverage existing knowledge to determine an appropriate response and minimize the impact of
the vulnerability.
• What is the severity of the alert? The severity of an alert can help determine the priority of the response so
that critical issues are quickly escalated.
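As a simple illustration, the sketch below walks an incoming alert through several of these questions. The history and vulnerability records are hypothetical stand-ins for what a real team would query from a ticketing system or a threat intelligence source.

    # Hypothetical records standing in for a ticketing system and a vuln database.
    alert_history = {"ALERT-1001": "recurring; previously resolved as a false positive"}
    known_vulns = {"CVE-2021-44228": "apply the vendor patch and isolate the host"}

    def assess_alert(alert_id: str, cve: str | None, severity: str) -> None:
        """Run an incoming alert through the validation questions above."""
        if alert_id in alert_history:
            print(f"Seen before: {alert_history[alert_id]}")
        if cve and cve in known_vulns:
            print(f"Known vulnerability; suggested response: {known_vulns[cve]}")
        if severity == "critical":
            print("High severity: escalate for immediate response")

    assess_alert("ALERT-1001", "CVE-2021-44228", "critical")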
Assign priority
Once the alert has been properly assessed and verified as a genuine security issue, it needs to be prioritized
accordingly. Incidents differ in their impact, size, and scope, which affects the response efforts. To manage time and
resources, security teams must prioritize how they respond to various incidents because not all incidents are equal.
Here are some factors to consider when determining the priority of an incident (a simple scoring sketch follows the list):
• Functional impact: Security incidents that target information technology systems impact the service that these systems provide to their users. For example, a ransomware incident can severely impact the
confidentiality, availability, and integrity of systems. Data can be encrypted or deleted, making it completely
inaccessible to users. Consider how an incident impacts the existing business functionality of the affected
system.
• Information impact: Incidents can affect the confidentiality, integrity, and availability of an organization’s data
and information. In a data exfiltration attack, malicious actors can steal sensitive data. This data can belong to
third party users or organizations. Consider the effects that information compromise can have beyond the
organization.
• Recoverability: How an organization recovers from an incident depends on the size and scope of the incident
and the amount of resources available. In some cases, recovery might not be possible, like when a malicious
actor successfully steals proprietary data and shares it publicly. Spending time, effort, and resources on an
incident with no recoverability can be wasteful. It’s important to consider whether recovery is possible and
consider whether it’s worth the time and cost.
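Here is a minimal scoring sketch based on the three factors above. The 0-3 scores and thresholds are illustrative assumptions, not a standard; real severity schemes vary by organization.

    def incident_priority(functional: int, information: int, recoverability: int) -> str:
        """Combine 0-3 factor scores into a priority label (illustrative weights)."""
        score = functional + information + recoverability
        if score >= 7:
            return "critical"
        if score >= 4:
            return "high"
        return "medium"

    # A ransomware incident: heavy functional impact, sensitive data hit, hard to recover.
    print(incident_priority(functional=3, information=2, recoverability=3))  # critical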
Note: Security alerts often come with an assigned priority or severity level that indicates how urgently the alert needs to be addressed.
Collect and analyze
The final step of the triage process involves the security analyst performing a comprehensive analysis of the incident. Analysis involves gathering evidence from different sources, conducting external research, and documenting the investigative process. The goal of this step is to gather enough information to make an informed decision about how to address the incident. Depending on the severity of the incident, escalation to a level two analyst or a manager might be required. Level two analysts and managers might have more experience using advanced techniques to address the incident.
Benefits of triage
By prioritizing incidents based on their potential impact, you can reduce the scope of impact to the organization by
ensuring a timely response. Here are some benefits that triage has for security teams:
• Resource management: Triaging alerts allows security teams to focus their resources on threats that require
urgent attention. This helps team members avoid dedicating time and resources to lower priority tasks and
might also reduce response time.
• Standardized approach: Triage provides a standardized approach to incident handling. Process documentation, like playbooks, helps move alerts through an iterative process to ensure that alerts are properly assessed and validated. This ensures that only valid alerts move forward for investigation.
A host-based intrusion detection system (HIDS) is an application that monitors the activity of the host on which it's installed. A HIDS is installed as an agent on a host. A host is also known as an endpoint, which is any device connected to a network, like a computer or a server. Typically, HIDS agents are installed on all endpoints and used to monitor and detect security threats. A HIDS monitors internal activity happening on the host to identify any unauthorized or abnormal behavior. If anything unusual is detected, such as the installation of an unauthorized application, the HIDS logs it and sends out an alert. In addition to monitoring inbound and outbound traffic flows, a HIDS can have additional capabilities, such as monitoring file systems, system resource usage, user activity, and more. A minimal file-monitoring sketch follows.
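The file-monitoring capability can be illustrated with a small integrity check: record hashes of watched files, then alert when a hash changes. The watched paths are hypothetical examples, and a real HIDS agent would monitor continuously rather than on demand.

    import hashlib
    from pathlib import Path

    def file_hash(path: Path) -> str:
        """SHA-256 of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    # Phase 1: record a baseline for files the host cares about (hypothetical paths).
    watched = [Path("/etc/passwd"), Path("/etc/hosts")]
    baseline = {p: file_hash(p) for p in watched if p.exists()}

    # Phase 2 (run later): alert when a watched file's hash changes.
    for path, old_hash in baseline.items():
        if path.exists() and file_hash(path) != old_hash:
            print(f"File modified on host: {path}")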
A network-based intrusion detection system (NIDS) is an application that collects and monitors network traffic and
network data. NIDS software is installed on devices located at specific parts of the network that you want to monitor.
The NIDS application inspects network traffic from different devices on the network. If any malicious network traffic is
detected, the NIDS logs it and generates an alert. For example, NIDS software installed on a server can monitor the activity of the computers on that part of the network. Using a combination of HIDS and NIDS to monitor an environment can provide a multi-
layered approach to intrusion detection and response. HIDS and NIDS tools provide a different perspective on the
activity occurring on a network and the individual hosts that are connected to it. This helps provide a comprehensive
view of the activity happening in an environment.
Detection techniques
Detection systems can use different techniques to detect threats and attacks. The two types of detection techniques
that are commonly used by IDS technologies are signature-based analysis and anomaly-based analysis.
Signature-based analysis
Signature analysis, or signature-based analysis, is a detection method that is used to find events of interest. A
signature is a pattern that is associated with malicious activity. Signatures can contain specific patterns like a
sequence of binary numbers, bytes, or even specific data like an IP address. Previously, you explored the Pyramid of
Pain, which is a concept that prioritizes the different types of indicators of compromise (IoCs) associated with an
attack or threat, such as IP addresses, tools, tactics, techniques, and more. IoCs and other indicators of attack can be
useful for creating targeted signatures to detect and block attacks. Different types of signatures can be used
depending on which type of threat or attack you want to detect. For example, an anti-malware signature contains
patterns associated with malware. This can include malicious scripts that are used by the malware. IDS tools will
monitor an environment for events that match the patterns defined in this malware signature. If an event matches the
signature, the event gets logged and an alert is generated.
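A minimal sketch of signature matching, using file hashes as the signature type: the "database" below contains the widely published SHA-256 of the harmless EICAR test file, standing in for a real signature feed.

    import hashlib

    # Hypothetical signature database: SHA-256 hashes of known malicious files.
    # This entry is the widely published hash of the harmless EICAR test file.
    KNOWN_BAD_HASHES = {
        "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
    }

    def matches_signature(file_bytes: bytes) -> bool:
        """Signature-based check: exact match against known-bad hashes."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

    print(matches_signature(b"some file contents"))   # False: no signature matched

Note that changing even a single byte of a file produces a completely different hash, which is exactly the evasion weakness described below.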
Advantages
• Low rate of false positives: Signature-based analysis is very efficient at detecting known threats because it
is simply comparing activity to signatures. This leads to fewer false positives. Remember that a false positive
is an alert that incorrectly detects the presence of a threat.
Disadvantages
• Signatures can be evaded: Signatures are unique, and attackers can modify their attack behaviors to
bypass the signatures. For example, attackers can make slight modifications to malware code to alter its
signature and avoid detection.
• Signatures require updates: Signature-based analysis relies on a database of signatures to detect threats.
Each time a new exploit or attack is discovered, new signatures must be created and added to the signature
database.
• Inability to detect unknown threats: Signature-based analysis relies on detecting known threats through
signatures. Unknown threats can't be detected, such as new malware families or zero-day attacks, which are
exploits that were previously unknown.
Anomaly-based analysis
Anomaly-based analysis is a detection method that identifies abnormal behavior. There are two phases to anomaly-
based analysis: a training phase and a detection phase. In the training phase, a baseline of normal or expected
behavior must be established. Baselines are developed by collecting data that corresponds to normal system
behavior. In the detection phase, the current system activity is compared against this baseline. Activity that happens
outside of the baseline gets logged, and an alert is generated.
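The two phases can be illustrated with a toy statistical baseline. Here the metric (logins per hour) and the three-standard-deviation threshold are illustrative assumptions; real anomaly engines model many features at once.

    from statistics import mean, stdev

    # Training phase: a metric collected under normal conditions (logins per hour).
    training = [12, 15, 11, 14, 13, 12, 16, 14]
    baseline_mean, baseline_std = mean(training), stdev(training)

    def is_anomalous(value: float, k: float = 3.0) -> bool:
        """Detection phase: flag values more than k standard deviations off baseline."""
        return abs(value - baseline_mean) > k * baseline_std

    print(is_anomalous(45))   # True: far above normal login volume
    print(is_anomalous(14))   # False: within the baseline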
Advantages
• Ability to detect new and evolving threats: Unlike signature-based analysis, which uses known patterns to
detect threats, anomaly-based analysis can detect unknown threats.
Disadvantages
• High rate of false positives: Any behavior that deviates from the baseline can be flagged as abnormal,
including non-malicious behaviors. This leads to a high rate of false positives.
• Pre-existing compromise: If an attacker is already present during the training phase, their malicious behavior becomes part of the baseline, which can allow a pre-existing attacker to go undetected.