UNIT 1 TO 5 ICS NOTES
Course Code/ Title : CS3404 / INTRODUCTION TO CYBER SECURITY
Cyber security is a major concern as cyber threats and attacks are growing rapidly. Attackers are now using more sophisticated techniques to target systems. Individuals, small-scale businesses and large organizations are all being impacted. So all these firms, whether IT or non-IT, have understood the importance of cyber security and are focusing on adopting all possible measures to deal with cyber threats.
"Cyber security is primarily about people, processes, and technologies working together to
encompass the full range of threat reduction, vulnerability reduction, deterrence, international
engagement, incident response, resiliency, and recovery policies and activities, including
computer network operations, information assurance, law enforcement, etc."
OR
Cyber security is the body of technologies, processes, and practices designed to protect
networks, computers, programs and data from attack, damage or unauthorized access.
The term cyber security refers to techniques and practices designed to protect
digital data.
Confidentiality
Confidentiality means that only authorized parties can access information. It also means trying to keep the identity of authorized parties involved in sharing and holding data private and anonymous. Common mechanisms used to ensure confidentiality include:
Data encryption
Two-factor authentication
Biometric verification
Security tokens
Integrity
Integrity means maintaining the accuracy and consistency of data and protecting it from unauthorized modification. Common mechanisms used to ensure integrity include (see the checksum sketch after this list):
Cryptographic checksums
Using file permissions
Uninterrupted power supplies
Data backups
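A minimal sketch of how a cryptographic checksum supports integrity, using Python's standard hashlib module (the data values are invented for illustration):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly sales figures"
stored_digest = checksum(original)          # digest saved alongside the data

# Later: recompute and compare; any modification changes the digest.
tampered = b"quarterly sales figures (edited)"
print(checksum(original) == stored_digest)  # True  -> integrity intact
print(checksum(tampered) == stored_digest)  # False -> data was altered
```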
Availability
Availability is making sure that authorized parties are able to access the information when
needed.
Types of Cyber Attacks
Cyber attacks can be classified into two categories:
1) Web-based attacks
2) System-based attacks
Web-based attacks
These are the attacks which occur on a website or web applications. Some of the important
web-based attacks are as follows-
1. Injection attacks
It is an attack in which malicious data is injected into a web application to manipulate the application and fetch the required information.
Example- SQL Injection, code Injection, log Injection, XML Injection etc. (A minimal sketch of SQL injection follows.)
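The sketch below contrasts a vulnerable, string-concatenated query with a parameterised one, using Python's built-in sqlite3 module (the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # attacker-supplied value

# Vulnerable: the input is concatenated into the SQL text, so the injected
# OR clause makes the WHERE condition always true and every row is returned.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # leaks all rows

# Safer: a parameterised query treats the input purely as data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```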
2. DNS Spoofing
DNS spoofing is a type of computer security attack in which corrupt data is introduced into a DNS resolver's cache, causing the name server to return an incorrect IP address and diverting traffic to the attacker's computer or any other computer. DNS spoofing attacks can go on for a long period of time without being detected and can cause serious security issues.
3. Session Hijacking
It is a security attack on a user session over a protected network. Web applications create cookies to store the state and user sessions. By stealing the cookies, an attacker can gain access to all of the user's data.
4. Phishing
Phishing is a type of attack which attempts to steal sensitive information like user login credentials and credit card numbers. It occurs when an attacker masquerades as a trustworthy entity in an electronic communication.
5. Brute force
It is a type of attack which uses a trial-and-error method. This attack generates a large number of guesses and validates them to obtain actual data like user passwords and personal identification numbers. This attack may be used by criminals to crack encrypted data, or by security analysts to test an organization's network security.
6. Denial of Service
It is an attack meant to make a server or network resource unavailable to users. It accomplishes this by flooding the target with traffic or sending it information that triggers a crash. It uses a single system and a single internet connection to attack a server. It can be classified into the following-
Volume-based attacks- Its goal is to saturate the bandwidth of the attacked site, and it is measured in bits per second.
Application layer attacks- Its goal is to crash the web server, and it is measured in requests per second.
7. Dictionary attacks
This type of attack uses a stored list of commonly used passwords and validates them against a target to recover the original password. (A minimal sketch follows.)
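A toy sketch of a dictionary attack against a hashed password, assuming the attacker holds a leaked SHA-256 hash and a small word list (both invented for illustration):

```python
import hashlib

# Hypothetical leaked hash (SHA-256 of the victim's real password).
leaked_hash = hashlib.sha256(b"sunshine").hexdigest()

# A tiny illustrative word list; real attacks use millions of entries.
wordlist = ["123456", "password", "qwerty", "sunshine", "letmein"]

for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        print("Password recovered:", guess)
        break
else:
    print("No match in the word list")
```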
8. URL Interpretation
It is a type of attack in which an attacker changes certain parts of a URL to make a web server deliver web pages that the attacker is not authorized to browse.
9. Man-in-the-middle attack
It is a type of attack that allows an attacker to intercept the connection between client and server and act as a bridge between them. Due to this, an attacker will be able to read, insert and modify the data in the intercepted connection.
System-based attacks
These are the attacks which are intended to compromise a computer or a computer network.
Some of the important system-based attacks are as follows-
1. Virus
It is a type of malicious software program that spreads throughout the computer files without the knowledge of the user. It is a self-replicating malicious computer program that replicates by inserting copies of itself into other computer programs when executed. It can also execute instructions that cause harm to the system.
2. Worm
It is a type of malware whose primary function is to replicate itself to spread to uninfected computers.
It works in the same way as a computer virus. Worms often originate from email attachments that appear to be from trusted senders.
3. Trojan horse
It is a malicious program that causes unexpected changes to computer settings and unusual activity, even when the computer should be idle. It misleads the user as to its true intent. It appears to be a normal application, but when opened/executed some malicious code runs in the background.
4. Backdoors
It is a method that bypasses the normal authentication process. A developer may create a
backdoor so that an application or operating system can be accessed for troubleshooting or other
purposes.
5. Bots
A bot (short for "robot") is an automated process that interacts with other network services. Some bot programs run automatically, while others only execute commands when they receive specific input. Common examples of bot programs are crawlers, chatroom bots, and malicious bots.
The 7 layers of cyber security should centre on the mission critical assets you are seeking
to protect.
Cyber threats are security incidents or circumstances with the potential to have a negative
outcome for your network or other data management systems.
Examples of common types of security threats include phishing attacks that result in the installation of malware that infects your data, failure of a staff member to follow data protection protocols that causes a data breach, or even a tornado that takes down your company’s data headquarters, disrupting access.
Vulnerabilities are the gaps or weaknesses in a system that make threats possible and tempt
threat actors to exploit them.
Types of vulnerabilities in network security include but are not limited to SQL injections,
server misconfigurations, cross-site scripting, and transmitting sensitive data in a non-
encrypted plain text format.
When threat probability is multiplied by the potential loss that may result, cyber security experts refer to this as risk.
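A small illustration of that multiplication, with figures assumed purely for the example:

```python
# Illustrative only: risk = threat probability x potential loss.
threat_probability = 0.05      # assumed 5% chance of the incident per year
potential_loss = 200_000       # assumed loss (in dollars) if it occurs

annual_risk_exposure = threat_probability * potential_loss
print(f"Estimated annual risk exposure: ${annual_risk_exposure:,.0f}")  # $10,000
```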
Computer criminals
Computer criminals have access to enormous amounts of hardware, software, and data; they
have the potential to cripple much of effective business and government throughout the world.
In a sense, the purpose of computer security is to prevent these criminals from doing damage.
We say computer crime is any crime involving a computer or aided by the use of one. Although
this definition is admittedly broad, it allows us to consider ways to protect ourselves, our
businesses, and our communities against those who use computers maliciously.
One approach to prevention or moderation is to understand who commits these crimes and why.
Many studies have attempted to determine the characteristics of computer criminals. By
studying those who have already used computers to commit crimes, we may be able in the future
to spot likely criminals and prevent the crimes from occurring.
CIA Triad
The CIA Triad is actually a security model that has been developed to help people think about
various parts of IT security.
CIA triad broken down:
Confidentiality
It's crucial in today's world for people to protect their sensitive, private information from
unauthorized access. Protecting confidentiality is dependent on being able to define and enforce
certain access levels for information.
In some cases, doing this involves separating information into various collections that are
organized by who needs access to the information and how sensitive that information actually
is - i.e. the amount of damage suffered if the confidentiality was breached.
Some of the most common means used to manage confidentiality include access control lists,
volume and file encryption, and Unix file permissions.
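A brief sketch of enforcing Unix file permissions from Python so that only the file's owner can read or write it (the file name is hypothetical; this applies on Unix-like systems):

```python
import os
import stat

path = "secrets.txt"                 # hypothetical file holding sensitive data
with open(path, "w") as f:
    f.write("confidential data")

# Equivalent to `chmod 600 secrets.txt`: owner read/write, no access for others.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
print(oct(os.stat(path).st_mode & 0o777))   # expected: 0o600
```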
Integrity
This is an essential component of the CIA Triad and designed to protect data from deletion or
modification from any unauthorized party, and it ensures that when an authorized person makes
a change that should not have been made the damage can be reversed.
Availability
This is the final component of the CIA Triad and refers to the actual availability of your data.
Authentication mechanisms, access channels and systems all have to work properly to protect the information and ensure it is available when it is needed.
The CIA Triad is all about information. While this is considered the core factor of the majority of IT security, it promotes a limited view of security that ignores other important factors.
For example, even though availability may serve to make sure you don't lose access to resources
needed to provide information when it is needed, thinking about information security in itself
doesn't guarantee that someone else hasn't used your hardware resources without authorization.
It's important to understand what the CIA Triad is, how it is used to plan and also to implement
a quality security policy while understanding the various principles behind it. It's also important
to understand the limitations it presents. When you are informed, you can utilize the CIA Triad
for what it has to offer and avoid the consequences that may come along by not understanding
it.
What is an Asset: An asset is any data, device or other component of an organization’s systems
that is valuable – often because it contains sensitive data or can be used to access such
information.
For example: An employee’s desktop computer, laptop or company phone would be considered
an asset, as would applications on those devices. Likewise, critical infrastructure, such as
servers and support systems, are assets. An organization’s most common assets are information assets. These are things such as databases and physical files – i.e. the sensitive data that you store.
Types of cyber-attacker actions and their motivations when deliberate
Active attacks: An active attack is a network exploit in which a hacker attempts to make
changes to data on the target or data en route to the target.
Masquerade: In this attack, the intruder pretends to be a particular user of a system to gain
access or to gain greater privileges than they are authorized for. A masquerade may be
attempted through the use of stolen login IDs and passwords, through finding security gaps in
programs or through bypassing the authentication mechanism.
Session replay: In this type of attack, a hacker steals an authorized user’s log in information
by stealing the session ID. The intruder gains access and the ability to do anything the authorized
user can do on the website.
Message modification: In this attack, an intruder alters packet header addresses to direct a message to a different destination or modifies the data on a target machine.
Denial of service (DoS): In a DoS attack, users are deprived of access to a network or web resource. This is generally accomplished by overwhelming the target with more traffic than it can handle.
Passive Attacks: Passive attacks are relatively scarce from a classification perspective, but can
be carried out with relative ease, particularly if the traffic is not encrypted.
Eavesdropping (tapping): the attacker simply listens to messages exchanged by two entities.
For the attack to be useful, the traffic must not be encrypted. Any unencrypted information,
such as a password sent in response to an HTTP request, may be retrieved by the attacker.
Traffic analysis: the attacker looks at the metadata transmitted in traffic in order to deduce information relating to the exchange and the participating entities, e.g. the form of the exchanged traffic (rate, duration, etc.). In cases where the data are encrypted, traffic analysis can also lead to attacks by cryptanalysis, whereby the attacker may obtain information or succeed in decrypting the traffic.
Attack Characteristics
Virus: A virus is a program that attempts to damage a computer system and replicate itself to other computer systems.
Logic Bomb: A logic bomb is malware that lies dormant until triggered. A logic bomb is a specific example of an asynchronous attack.
Hardware Attacks:
Common hardware attacks include:
Inducing faults, causing the interruption of normal behaviour
Cyber Crime:
Cybercrime is criminal activity that either targets or uses a computer, a computer network
or a networked device. Cyber crime is committed by cybercriminals or hackers who want to make
money. Cybercrime is carried out by individuals or organizations. Some cybercriminals are
organized, use advanced techniques and are highly technically skilled. Others are novice hackers.
Cyber Terrorism:
Cyber terrorism is the convergence of cyberspace and terrorism. It refers to unlawful attacks and threats
of attacks against computers, networks and the information stored therein when done to intimidate or
coerce a government or its people in furtherance of political or social objectives.
Examples are hacking into computer systems, introducing viruses to vulnerable networks, website defacing, denial-of-service attacks, or terroristic threats made via electronic communication.
Cyber Espionage:
Cyber spying, or cyber espionage, is the act or practice of obtaining secrets and information without the
permission and knowledge of the holder of the information.
II. Cyber Threat Landscape
The cyber threat landscape refers to the evolving environment in which cyber threats and vulnerabilities
exist. It encompasses the full spectrum of cyber risks, including both external and internal threats, as well
as the changing tactics, techniques, and procedures (TTPs) used by cyber adversaries. Here's a breakdown
of key aspects of the cyber threat landscape:
1. Types of Cyber Threats:
Malware: Software designed to damage or disrupt systems, such as viruses, worms, ransomware, and trojans.
Phishing and Social Engineering: Techniques used to manipulate individuals into revealing sensitive information or performing actions that compromise security.
Denial of Service (DoS) / Distributed Denial of Service (DDoS): Attacks that overload systems or networks, rendering them unavailable to legitimate users.
Data Breaches: Unauthorized access to or disclosure of sensitive information, often resulting in financial or reputational damage.
Insider Threats: Employees, contractors, or other trusted individuals who intentionally or unintentionally compromise security.
Advanced Persistent Threats (APT): Long-term, targeted attacks by well-funded, skilled adversaries, often involving espionage or sabotage.
2. Emerging Threats:
Ransomware: Increasingly sophisticated attacks where attackers encrypt data and demand payment for its release.
IoT Vulnerabilities: As more devices become connected, vulnerabilities in Internet of Things (IoT) devices increase, making them targets for cybercriminals.
Cloud Security Issues: Misconfigured cloud environments or vulnerabilities in cloud service providers lead to data breaches or service outages.
Supply Chain Attacks: Attacks targeting the supply chain, including software updates and third-party vendors, to gain access to an organization's network.
3. Adversary Techniques:
Exploitation of Vulnerabilities: Attackers exploit unpatched or zero-day vulnerabilities in software or hardware to gain access.
Credential Stuffing: Using stolen or leaked credentials to gain unauthorized access to accounts across multiple platforms.
Man-in-the-Middle Attacks: Intercepting and potentially altering communications between two parties.
Lateral Movement: Once inside a network, attackers move through systems to gain access to more critical assets.
4. Threat Actors:
Cybercriminals: Motivated by financial gain, cybercriminals often target individuals, companies, and even governments.
Nation-State Actors: Governments or state-sponsored groups that engage in cyber warfare, espionage, or sabotage.
Hacktivists: Groups or individuals that conduct cyberattacks to promote a political, social, or environmental cause.
Insiders: Employees or contractors who exploit their access to systems or data for malicious purposes.
5. Impact of Cyber Threats:
Financial Loss: Costs associated with data breaches, system downtime, or recovery efforts.
Reputation Damage: Loss of customer trust and public confidence after an attack.
Legal and Regulatory Consequences: Fines, lawsuits, and penalties resulting from data breaches or non-compliance with cybersecurity regulations (e.g., GDPR, HIPAA).
Operational Disruption: Interruptions to business operations, supply chains, or critical services.
6. Mitigation Strategies:
Regular Software Updates: Applying patches and updates to fix vulnerabilities and reduce the attack surface.
Network Segmentation: Isolating critical systems and data to limit lateral movement by attackers.
Employee Training: Educating staff on security best practices, such as recognizing phishing attempts and using strong passwords.
Incident Response Plans: Having well-defined plans in place for detecting, responding to, and recovering from cyberattacks.
Threat Intelligence: Continuously monitoring and analyzing emerging threats and trends to proactively defend against attacks.
III. Cyber Security Frameworks and Standards
The digital threat landscape is always changing, with cybercriminals developing more advanced
attacks every day. To stay ahead in this ever-shifting environment, organizations must adopt the latest
cybersecurity frameworks.
These frameworks offer a structured approach to managing cybersecurity risks, addressing potential
vulnerabilities, and strengthening overall digital defenses. As companies increasingly rely on digital
technologies, keeping up with the most current cybersecurity frameworks has become crucial.
From the National Institute of Standards and Technology (NIST) to the Health Insurance Portability and
Accountability Act (HIPAA), these frameworks are vital for any IT operation.
1. NIST Cybersecurity Framework
This framework is used to strengthen the infrastructure and to bridge the gap between the CEOs and the technical team.
It is a widely accepted way to protect any business from ever-changing cyber threats.
This framework also helps in integrating industry standards and best practices.
2. ISO 27001 and ISO 27002
These international standards specify information security controls, providing a robust guide for protecting sensitive information.
Advantages
These frameworks help organizations protect sensitive information against a broad range of
cybersecurity threats, ensuring the confidentiality, integrity, and availability of data.
They align closely with other standard management systems, such as those for quality assurance
and environmental management, making them easier to integrate into existing operations.
Adopting ISO 27001 and ISO 27002 can significantly boost an organization's credibility and
resilience against cyber threats, enhancing trust with stakeholders and customers.
3. HIPAA
HIPAA is abbreviated as the Health Insurance Portability and Accountability Act, which was introduced by the United States government to protect the availability and integrity of protected health information (PHI) in the healthcare industry. The main objective of HIPAA is to make sure that an individual's medical information is secure and that they have full control over how the information is used and disclosed. The HIPAA framework applies to entities that handle PHI, such as healthcare providers and health plans. Therefore this framework mainly consists of privacy rules and security rules.
Advantages
HIPAA helps enhance patient privacy and data security.
It helps reduce healthcare abuse and fraud by implementing industry standards.
Non-compliance with HIPAA results in significant fines and reputational damage for organizations.
4. PCI-DSS
PCI-DSS (Payment Card Industry Data Security Standard) is a globally recognized cybersecurity
framework designed specifically to protect payment card information. Developed by the Payment Card
Industry Security Standards Council (PCI SSC), PCI-DSS provides a comprehensive set of requirements
aimed at securing credit card transactions and ensuring the safe handling of cardholder data by merchants
and service providers.
This framework encompasses various security measures, including data encryption, access control,
network security, and regular monitoring. Organizations that process, store, or transmit credit card
information must comply with PCI-DSS to protect against data breaches and fraud, ensuring the security
of their customers' financial information.
Advantages:
PCI-DSS is essential for businesses in the payment card industry, helping them safeguard
cardholder data against a wide range of cyber threats, thereby reducing the risk of fraud and data
breaches.
Compliance with PCI-DSS enhances an organization's credibility and trustworthiness,
demonstrating a commitment to securing customer financial data.
By adhering to PCI-DSS, organizations can avoid hefty fines and penalties associated with non-
compliance, while also minimizing the potential financial and reputational damage from security
breaches.
5. SOC2
SOC2 is another popular cybersecurity framework and auditing standard that can be mainly used to verify
vendors and partners. It is a type of detailed framework with over 60 compliance requirements and
extensive auditing processes for third-party controls and systems. It is known to be one of the
toughest cybersecurity frameworks to implement especially for organizations in the banking or in
the financial sector which face a higher standard for compliance
Advantages
SOC2 is mainly used to improve services, and it also shows how organizations can streamline their controls and processes.
This framework also allows businesses to make security improvements, which can increase the efficiency of the organization.
SOC2 makes sure that the third-party service provider stores and processes the customer data in
an effective and secure manner.
6. FISMA
FISMA is abbreviated as the Federal Information Security Management Act, a detailed cybersecurity framework designed to protect federal government information and systems, as well as the third parties and vendors working on behalf of federal agencies, against cyber security threats. Under this framework, agencies and third parties are required to maintain an inventory of their digital assets and identify any integration between systems and networks.
Advantages
FISMA mainly offers multiple benefits; for example, it helps in enhancing the security posture.
This framework is used for the implementation of robust security which helps organizations to
strengthen their overall security postures.
This framework also helps in reducing the risk of cyber-attacks and data breaches.
7. COBIT
COBIT is a popular cybersecurity framework that was developed by the Information Systems Audit and Control Association (ISACA). Control Objectives for Information and Related Technologies (COBIT) is a comprehensive framework designed to help organizations manage their IT resources more effectively. This framework mainly offers best practices for risk management, security, and governance. The framework is divided into domains covering planning and organization, acquisition and implementation, delivery and support, and monitoring and evaluation. These domains group particular processes and activities to help organizations effectively manage IT resources.
Advantages
This framework mainly includes comprehensive data security and protection guidelines.
It is mainly used to protect organizations and their systems from cybersecurity threats.
This framework is mainly used to improve and maintain high-quality information to
support business decisions.
IV. Security and Architecture Model of Cyber Security
A Security Architecture is critical to reducing risk, ensuring compliance, and effectively addressing
security issues in Software Development. Whether in the cloud or on-premises, it provides a basis for
identifying and managing potential threats, thereby increasing the safety and security of the organization
in the face of change in the digital environment. In this section, we study security architecture, its types, examples and benefits, and why security architecture is needed in software development.
Security architecture is a strategy for designing and building a company's security infrastructure. It addresses data protection issues by analyzing processes, controls and systems. This multifaceted strategy has many elements, such as security policy, risk management, and the determination of controls and procedures. It can be specialized for particular domains such as network security, application security or enterprise information security.
The purpose of network security architecture is to protect the organization's network infrastructure using
tools such as firewalls and intrusion detection systems. Application security architecture focuses on
software security with an emphasis on secure coding methods and strong authentication systems. At the
same time, the company's information security architecture takes an approach to combine security measures
with business objectives across people, processes and technology.
Types of Security Architecture
1. Architecture of Network Security:
The systematic design and implementation of security measures to safeguard an organization's
computer networks against unwanted access, cyberattacks, and data breaches is referred to as
network security architecture. It entails the installation of firewalls, intrusion detection/prevention
systems, and other network security controls in order to protect the integrity and confidentiality of
data transmitted across the network.
Example: To defend its internal network from illegal access and cyber threats, a corporation installs a
network security architecture that comprises firewalls, intrusion detection/prevention systems, and secure
Wi-Fi protocols.
2. Architecture of Application Security:
Application Security Architecture entails the systematic design and integration of security
measures into software applications in order to prevent vulnerabilities and illegal access. Secure
coding practices, authentication systems, and encryption are all used to ensure the confidentiality
and integrity of sensitive data processed by apps.
Example: To prevent vulnerabilities and preserve user data, a software development business adds secure
coding methods, encryption, and rigorous authentication mechanisms into its application development
process.
3. Architecture of Cloud Security:
Cloud Security Architecture is the design and implementation of security rules and practices
adapted specifically for cloud computing systems. To safeguard data, apps, and infrastructure
housed in the cloud, it includes methods such as encryption, identity and access management
(IAM), and frequent security audits.
Example: To secure data and applications hosted on cloud platforms such as Amazon Web Services
(AWS) or Microsoft Azure, a business deploys resources in a cloud environment using encryption, identity
and access management (IAM) restrictions, and frequent security audits.
4. Architecture of Enterprise Information Security:
Enterprise Information Security Architecture (EISA) is a comprehensive method to protecting an
organization's information assets spanning people, processes, and technology. It entails the creation
and implementation of comprehensive security policies, as well as identity management and risk
assessment, in order to connect security efforts with business objectives and provide a unified
security posture.
Example: To protect sensitive client information and ensure regulatory compliance, a large financial
institution builds an enterprise-wide security architecture that comprises extensive security policies,
identity management systems, and regular risk assessments.
5. Architecture for Wireless Security:
Wireless Security Architecture is concerned with the design and implementation of security
mechanisms for wireless networks. It includes mechanisms such as WPA3 encryption, MAC
address filtering, and access control to prevent unauthorized access and protect data transfer in Wi-
Fi networks.
Example: A school uses a wireless security architecture that includes WPA3 encryption, MAC address filtering, and access control to protect its Wi-Fi network and prevent unauthorized access.
6. Endpoint Security Architecture:
Endpoint Security Architecture involves designing and implementing security mechanisms to
protect specific devices (endpoints) such as computers, mobile phones and tablets. It includes anti-
virus software, endpoint detection and response (EDR) technology, and mobile device management
(MDM) solutions to prevent malware and unauthorized access.
Example: A company uses endpoint security measures, including antivirus software, endpoint detection and response (EDR) tools, and mobile device management (MDM) solutions, to protect endpoint devices (computers, smartphones, etc.) from malware and unauthorized access.
Elements of Security Architecture
Security architecture includes many components and activities designed to provide effective security in the organization. These elements work together to protect data assets and reduce risk. The following are the main components of security architecture:
1. Security Framework:
Policies and procedures that establish security standards, procedures, and policies in an
organization.
Responsibilities: Building a security system, communicating expectations, and providing a
framework for compliance is part of the job.
2. Security Management:
Security measures taken to detect, prevent or reduce the impact of security threats and
vulnerabilities.
Responsibilities: Prevent unauthorized access, data deletion, and other security issues by using
security policies.
3. Risk Management:
The process of identifying, analyzing and monitoring risks to the institution's information assets.
Responsibilities: Participate in decision making, resource allocation and implementation of controls
to reduce or control identified risks.
4. IAM (Identity and Access Management):
Management of user identities and their access to systems, applications and information.
Responsibilities: Ensuring that only authorized personnel can access sensitive information,
preventing unauthorized access or information leakage.
5. Encryption:
The process of encoding data so that it cannot be understood without the decryption key.
Responsibilities: Protect sensitive data from unauthorized access while maintaining confidentiality,
especially during data transfer and storage.
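A minimal sketch of the encryption element, assuming the third-party cryptography package is installed (pip install cryptography); the plaintext is invented for illustration:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key; must itself be stored securely
cipher = Fernet(key)

token = cipher.encrypt(b"customer record #1042")   # ciphertext, safe to store or transmit
print(token)

plaintext = cipher.decrypt(token)  # only holders of the key can recover the data
print(plaintext)
```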
6. Incident Response:
A structured approach for handling a security incident and limiting its impact.
Responsibilities: Minimize downtime, recover quickly, and analyze and learn from security incidents.
7. Security Architecture Framework:
A model or framework that provides best practices and guidelines for designing and implementing
security solutions.
Responsibilities: As a plan to create an integrated and effective security system that suits business
needs.
8. Security Education and Training:
Programs and events designed to educate employees and users about security risks, policies, and
best practices.
Responsibilities: To improve the human base of security by promoting knowledge, behavior and
compliance with security laws.
Together, these elements help create a robust security system that protects an organization's information assets and maintains an effective defense against evolving threats.
Examples of Security Architecture Framework
Several security architecture frameworks provide design guidelines to help organizations design and implement effective security solutions. Some widely used security architecture frameworks are:
1. Open Group Architecture Framework (TOGAF):
Overview: A popular enterprise architecture framework that incorporates security concerns into its structure. TOGAF provides a comprehensive approach to the design, planning, implementation and governance of enterprise information architecture.
Role: TOGAF incorporates security concerns into its architecture process, making security an important element of all business development processes.
2. Sherwood Applied Business Security Architecture (SABSA):
Overview: A business-focused security framework focused on integrating security architecture
with business objectives. SABSA focuses on risk management and security integration across all
business sectors.
Role: SABSA's role in security is to provide businesses with the tools to create a secure, risk-based
security architecture that closely meets business needs.
3. Zachman Framework:
Overview: The Zachman Framework is not a security framework as such but an enterprise architecture framework used to organize and describe the various perspectives involved in enterprise architecture. It provides a way to view and create complex systems.
Role: The Zachman Framework can serve as a reference to ensure that every aspect of an organization's security decisions is addressed, resulting in better security.
4. NIST Cybersecurity Framework:
Overview: Developed by the National Institute of Standards and Technology (NIST), this framework provides guidelines, standards, and best practices for managing cybersecurity risks.
Role: NIST Cybersecurity Framework provides a framework for reviewing and updating
cybersecurity measures, aligning them with business objectives, and facilitating cybersecurity
communications.
5. ISO/IEC 27001:
Overview: ISO/IEC 27001, part of the ISO/IEC 27000 series, is a widely recognized information security management system (ISMS) standard. It provides an effective, risk-based approach to managing information security.
Role: Although ISO/IEC 27001 is not an architectural framework itself, it helps organizations create a unified and complete information security system that supports enterprise-wide security.
6. MITER ATT&CK Framework:
Overview: ATT&CK (Countermeasures, Countermeasures, Techniques, and Common Sense) is a
cybersecurity threat intelligence matrix that provides examples of strategies and tactics used by
known adversaries in cyber attacks.
Role: Although not an architecture firm, ATT&CK provides security professionals with threat
intelligence and strategies to help organizations develop security policies that protect against
threats around the world.
These frameworks provide guidance to help organizations develop and improve security based on their unique needs, risks, and business objectives.
Incident Response and Cybersecurity Incident Handling
Incident response is the structured process organizations use to identify, manage, and mitigate security
incidents effectively. It involves preparing for, detecting, analyzing, containing, eradicating, and recovering
from cybersecurity threats. Key steps include:
1. Preparation: Creating incident response plans (IRPs), setting up tools, and training employees.
2. Detection and Analysis: Identifying potential threats and understanding their scope and impact.
3. Containment: Isolating affected systems to prevent further damage.
4. Eradication: Removing malicious components and vulnerabilities.
5. Recovery: Restoring normal operations and verifying the systems' integrity.
6. Post-Incident Review: Documenting lessons learned to improve future response.
An effective incident response plan minimizes downtime, reduces financial losses, and ensures regulatory
compliance.
Security Awareness and Training
Security awareness and training programs aim to educate employees about potential cyber threats and best
practices for maintaining organizational security. Core components include:
1. Understanding Threats: Awareness of phishing, malware, ransomware, and social engineering
attacks.
2. Safe Practices: Guidance on password hygiene, secure browsing, and handling sensitive data.
3. Policy Familiarization: Training on organizational policies, such as acceptable use policies
(AUPs) and data protection guidelines.
4. Regular Updates: Continuous education on emerging threats and updated protocols.
Engaging methods like interactive sessions, real-world scenarios, and phishing simulations help improve
retention and promote a security-conscious culture.
UNIT II FUNDAMENTALS OF NETWORKING
Introduction to Networking Concepts - OSI Model Overview - TCP/IP Protocol Suite - Network
Devices and Components - IP Addressing and Subnetting - Routing and Switching Basics -
Wireless Networking Fundamentals - Network Security Principles.
A computer network is a group of devices connected with each other through a transmission
medium such as wires, cables etc. These devices can be computers, printers, scanners, Fax
machines etc. The purpose of having a computer network is to send and receive data stored in
other devices over the network. These devices are often referred to as nodes.
There are five basic components of a computer network:
Message: It is the data or information which needs to be transferred from one device to another
device over a computer network.
Sender: Sender is the device that has the data and needs to send the data to another device
connected to the network.
Receiver: A receiver is the device which is expecting the data from other devices on the
network.
Transmission media: In order to transfer data from one device to another device we need a
transmission media such as wires, cables, radio waves etc.
Protocol: A protocol is a set of rules that are agreed by both sender and receiver, without a
protocol two devices can be connected to each other but they cannot communicate.
In order to establish reliable communication or data sharing between two different devices we need a set of rules, called a protocol. For example, HTTP and HTTPS are two protocols used by web browsers to get and post data to the internet; similarly, the SMTP protocol is used by email services connected to the internet.
How Does a Computer Network Work?
The basic building blocks of a computer network are nodes and links. A network node can be data communication equipment such as a modem or router, or data terminal equipment such as a computer connecting to one or more other computers. Links in computer networks can be wires, cables, or the free space of wireless networks.
An example of a network is LAPLINK, which allows you to copy files from one device to another device over a specific parallel port; this qualifies as a computer network. Another example, which we all use in our daily lives, is the internet.
Some various types of networks are LAN, MAN, WAN etc.
There are various types of networks that can be used for different functions:
LAN: Local area networks are mainly used to connect personal devices within a few
kilometers of a limited area. These networks are used in offices, companies and factories to
exchange data and Information.
MAN: Metropolitan area networks are used to connect the devices over an entire city under
the range of up to 50 km. These networks are used in the telephone company network and
cable TV network.
WAN: Wide Area Networks are used in the wide geological range over a country and
continent. These networks are used in military services, mobile operators, railways and
airline reservations.
PAN: Personal area networks are appropriate for a personal or individual workspace, typically within a range of about 10 meters. These networks are mostly used to connect tablets, smartphones and laptops.
CAN: Campus area networks are used to connect a limited geographic area. CANs interconnect multiple local area networks (LANs) within colleges, universities, corporate buildings, etc.
Application of computer networks:
1. Resource Sharing:
Resource sharing is an application of a computer network. Resource sharing means hardware and software can be shared among multiple users. Hardware resources include printers, disks, fax machines and other computing devices, while software resources include applications such as Atom, Oracle VM VirtualBox, Postman, Android Studio, etc.
2. Information Sharing:
Using a computer network, we can share information over the network, and it provides search capabilities such as the WWW. A single piece of information can be shared among many users over the internet.
3. Communication:
Communication includes email, calls, message broadcast, electronic funds transfer system etc.
4. Entertainment Industry:
The Entertainment industry also uses computer networks widely. Some of the
Entertainment industries are Video on demand, Multiperson real-time simulation games,
movie/TV programs, etc.
5. Access to Remote Databases:
Computer networks allow end users to access the remote databases of various applications. Some applications are hotel reservations, airplane booking, home banking, automated newspapers, automated libraries, etc.
6. Home applications:
There are many common uses of the computer network as home applications. For example,
you can consider user-to-user communication, access to remote instruction, electronic
commerce and entertainment. Another way is managing bank accounts, transferring money
to some other banks, and paying bills electronically. A computer network arranges a robust
connection mechanism between users.
7. Business applications:
The result of the business application here is resource sharing. And the purpose of resource
sharing is that without moving to the physical location of the resource, all the data, plans
and tools can be shared to any network user. Most of the companies are doing business
electronically with other companies and with other clients worldwide with the help of a
computer network.
8. Mobile users:
The rapidly growing sectors in computer applications are mobile devices like notebook
computers and PDAs (personal digital assistants). Here mobile users/device means portable
device. The computer network is widely used in new-age technology like smartwatches,
wearable devices, tablets, online transactions, purchasing or selling products online, etc.
9. Social media:
Social media is also a great example of a computer network application. It helps people to
share and receive any information related to political, ethical and social issues.
● You can share expensive software and databases among network users.
● It facilitates communications from one computer to another computer.
● It allows the exchange of data and information among users through a network.
Topology defines the structure of the network of how all the components are interconnected to
each other. There are two types of topology: physical and logical topology.
Physical topology is the geometric representation of all the nodes in a network. There are
six types of network topology which are Bus Topology, Ring Topology, Tree Topology, Star
Topology, Mesh Topology and Hybrid Topology.
1) Bus Topology:
● The bus topology is designed in such a way that all the stations are connected through a
single cable known as a backbone cable.
● Each node is either connected to the backbone cable by drop cable or directly
connected to the backbone cable.
● When a node wants to send a message over the network, it puts a message over the
network.
● All the stations available in the network will receive the message whether it has been
addressed or not.
● The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
● The configuration of a bus topology is quite simple compared to other topologies.
● The backbone cable is considered as a "single lane" through which the message is
broadcast to all the stations.
● The most common access method of the bus topologies is CSMA (Carrier Sense Multiple
Access).
CSMA: It is a media access control used to control the data flow so that data integrity is
maintained, i.e., the packets do not get lost. There are two alternative ways of handling the
problems that occur when two nodes send the messages simultaneously.
● CSMA CD: CSMA CD (Collision detection) is an access method used to detect the
collision. Once the collision is detected, the sender will stop transmitting the data.
Therefore, it works on "recovery after the collision".
● CSMA CA: CSMA CA (Collision Avoidance) is an access method used to avoid
the collision by checking whether the transmission media is busy or not. If busy,
then the sender waits until the media becomes idle. This technique effectively
reduces the possibility of collision. It does not work on "recovery after the
collision".
Advantages of Bus topology:
● Low-cost cable: In bus topology, nodes are directly connected to the cable without
passing through a hub. Therefore, the initial cost of installation is low.
● Moderate data speeds: Coaxial or twisted pair cables are mainly used in bus-based
networks that support upto 10 Mbps.
● Familiar technology: Bus topology is a familiar technology as the installation and
troubleshooting techniques are well known and hardware components are easily
available.
● Limited failure: A failure in one node will not have any effect on other nodes.
Disadvantages of Bus topology:
● Extensive cabling: A bus topology is quite simple, but still it requires a lot of cabling.
● Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
● Signal interference: If two nodes send the messages simultaneously, then the
signals of both the nodes collide with each other.
● Reconfiguration is difficult: Adding new devices to the network would slow down
the network.
● Attenuation: Attenuation is a loss of signal that leads to communication issues.
Repeaters are used to regenerate the signal.
2) Ring Topology:
● Ring topology is like a bus topology, but with connected ends.
● The node that receives the message from the previous computer will retransmit to the
next node. The data flows in one direction, i.e., it is unidirectional.
● The data flows in a single loop continuously known as an endless loop.It has no
terminated ends, i.e., each node is connected to another node and has no termination
point.
● The data in a ring topology flow in a clockwise direction.
● The most common access method of the ring topology is token passing.
Token passing: It is a network access method in which a token is passed from one node to
another node.
● A token moves around the network and it is passed from computer to computer until
it reaches the destination.
● The sender modifies the token by putting the address along with the data.
● The data is passed from one device to another device until the destination address
matches. Once the token is received by the destination device, then it sends the
acknowledgement to the sender.
● In a ring topology, a token is used as a carrier.
Advantages of Ring topology:
● Network Management: Faulty devices can be removed from the network without bringing the network down.
● Product availability: Many hardware and software tools for network operation and monitoring are available.
● Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the
installation cost is very low.
● Reliable: It is a more reliable network because the communication system is not
dependent on the single host computer.
Disadvantages of Ring topology:
3) Star Topology:
● Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
● The central computer is known as a server and the peripheral devices attached to it are known as clients.
Disadvantages of Star topology:
● A central point of failure: If the central hub or switch goes down, then all the connected nodes will not be able to communicate with each other.
● Cable: Sometimes cable routing becomes difficult when a significant amount of routing is required.
4) Tree topology:
● Tree topology combines the characteristics of bus topology and star topology.
● A tree topology is a type of structure in which all the computers are connected with each
other in hierarchical fashion.
● The top-most node in tree topology is known as a root node, and all other nodes are the
descendants of the root node.
● There is only one path between two nodes for the data transmission. Thus, it forms a
parent-child hierarchy.
5) Mesh topology:
● In a mesh topology there are multiple paths from one computer to another. It does not contain a switch, hub or any central computer which acts as a central point of communication. The Internet is an example of a mesh topology.
● Mesh topology is mainly used for WAN implementations where communication failures are a critical concern.
● Mesh topology is mainly used for wireless networks.
● A mesh topology can be formed by using the formula: Number of cables = n(n-1)/2, where n is the number of nodes in the network.
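A small calculation of that formula, just to make the growth in cabling concrete:

```python
def mesh_links(n: int) -> int:
    """Number of point-to-point cables in a full mesh of n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(f"{n} nodes -> {mesh_links(n)} cables")
# 3 nodes -> 3, 5 nodes -> 10, 10 nodes -> 45
```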
Mesh topology is divided into two categories:
Full Mesh Topology: In a full mesh topology, each computer is connected to all the computers
available in the network.
Partial Mesh Topology: In a partial mesh topology, not all but certain computers are
connected to those computers with which they communicate frequently.
Advantages of Mesh topology:
● Reliable: Mesh topology networks are very reliable, as the breakdown of any one link will not affect communication between the connected computers.
● Fast Communication: Communication is very fast between the nodes.
● Easier Reconfiguration: Adding new devices would not disrupt the
communication between other devices.
Disadvantages of Mesh topology:
6) Hybrid Topology:
● The combination of various different topologies is known as Hybrid topology.
● A Hybrid topology is a connection between different links and nodes to transfer the data.
● When two or more different topologies are combined together, it is termed a Hybrid topology; connecting similar topologies with each other does not result in a Hybrid topology.
For example, if there exists a ring topology in one branch of ICICI bank and bus topology
in another branch of ICICI bank, connecting these two topologies will result in Hybrid
topology.
Advantages of Hybrid Topology:
● Reliable: If a fault occurs in any part of the network, it will not affect the functioning of the rest of the network.
● Scalable: Size of the network can be easily expanded by adding new devices without
affecting the functionality of the existing network.
● Flexible: This topology is very flexible as it can be designed according to the
requirements of the organization.
● Effective: Hybrid topology is very effective as it can be designed in such a way that
the strength of the network is maximized and weakness of the network is
minimized.
Disadvantages of Hybrid topology:
● Complex design: The major drawback of the Hybrid topology is the design of the
Hybrid network. It is very difficult to design the architecture of the Hybrid network.
● Costly Hub: The Hubs used in the Hybrid topology are very expensive as these
hubs are different from usual Hubs used in other topologies.
● Costly infrastructure: The infrastructure cost is very high as a hybrid network
requires a lot of cabling, network devices, etc.
Computer Network Architecture is defined as the physical and logical design of the
software, hardware, protocols, and media of the transmission of data. Simply we can say
how computers are organized and how tasks are allocated to the computer.
● Peer-To-Peer network
● Client/Server network
Peer-To-Peer network:
Peer-To-Peer network is a network in which all the computers are linked together with
equal privilege and responsibilities for processing the data.
● Peer-To-Peer network is useful for small environments, usually up to 10 computers.
● Peer-To-Peer network has no dedicated server.
● Special permissions are assigned to each computer for sharing the resources, but this can
lead to a problem if the computer with the resource is down.
Disadvantages of Peer-To-Peer Network:
● A Peer-To-Peer network does not contain a centralized system; therefore, it cannot back up the data, as the data is stored in different locations.
● It has a security issue, as each device manages itself.
Client/Server Network:
● Client/Server network is a network model designed for the end users called clients,
to access the resources such as songs, video, etc. from a central computer known as
Server.
● The central controller is known as a server while all other computers in the network
are called clients.
● A server performs all the major operations such as security and network management.
● A server is responsible for managing all the resources such as files, directories, printer,
etc.
● All the clients communicate with each other through the server. For example, if client 1 wants to send some data to client 2, it first sends a request to the server for permission. The server then sends a response to client 1 to initiate its communication with client 2.
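A minimal socket-based sketch of this client/server request–response pattern, run in one process for convenience (host, port and messages are placeholders):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050      # assumed free local port

def server() -> None:
    """Accept one client connection and echo its request back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"server received: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                     # give the server a moment to start listening

# The client connects to the central server, sends a request and reads the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from client 1")
    print(cli.recv(1024).decode())
```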
OSI Model
● OSI stands for Open System Interconnection is a reference model that describes how
information from a software application in one computer moves through a physical
medium to the software application in another computer.
● OSI consists of seven layers, and each layer performs a particular network function.
● The OSI model was developed by the International Organization for Standardization
(ISO) in 1984, and it is now considered as an architectural model for inter-computer
communication.
● The OSI model divides the whole task into seven smaller and manageable tasks. Each
layer is assigned a particular task.
● Each layer is self-contained, so that tasks assigned to each layer can be performed
independently.
Characteristics of OSI Model:
The layers of the OSI model are divided into two groups: upper layers and lower layers.
● The upper layers of the OSI model mainly deal with application-related issues, and they are implemented only in software. The application layer is closest to the end user. Both the end user and the application layer interact with the software applications. An upper layer refers to the layer just above another layer.
● The lower layer of the OSI model deals with the data transport issues. The data link layer
and the physical layer are implemented in hardware and software. The physical layer is
the lowest layer of the OSI model and is closest to the physical medium. The physical
layer is mainly responsible for placing the information on the physical medium.
1) Physical layer:
● The main functionality of the physical layer is to transmit the individual bits from one
node to another node.
● It is the lowest layer of the OSI model.
● It establishes, maintains and deactivates the physical connection.
● It specifies the mechanical, electrical and procedural network interface specifications.
● Line Configuration: It defines the way two or more devices can be connected physically.
● Data Transmission: It defines the transmission mode, whether it is simplex, half-duplex or full-duplex, between the two devices on the network.
● Topology: It defines the way network devices are arranged.
● Signals: It determines the type of signal used for transmitting the information.
2) Data-Link Layer:
● This layer is responsible for the error-free transfer of data frames.
● It defines the format of the data on the network.
● It provides reliable and efficient communication between two or more devices.
● It is mainly responsible for the unique identification of each device that resides on a
local network.
● It is responsible for transferring the packets to the network layer of the receiver.
● It identifies the address of the network layer protocol from the header.
● It also provides flow control.
● The Media Access Control layer is a link between the Logical Link Control layer and the
network's physical layer.
● It is used for transferring the packets over the network.
● Framing: The data link layer translates the physical layer's raw bit stream into packets
known as Frames. The Data link layer adds the header and trailer to the frame. The
header which is added to the frame contains the hardware destination and source
address.
● Physical Addressing: The Data link layer adds a header to the frame that contains a
destination address. The header frame is transmitted to the destination address
mentioned in the header.
● Flow Control: Flow control is the main functionality of the Data-link layer. It is the
technique through which the constant data rate is maintained on both sides so that
no data gets corrupted. It ensures that a transmitting station, such as a server with
higher processing speed, does not overwhelm a receiving station with lower
processing speed.
● Error Control: Error control is achieved by adding a calculated value, the CRC (Cyclic
Redundancy Check), which is placed in the Data link layer's trailer added to the
message frame before it is sent to the physical layer. If an error occurs, the receiver
sends a request for the retransmission of the corrupted frames (a simple checksum
sketch is shown after this list).
● Access Control: When two or more devices are connected to the same communication
channel, then the data link layer protocols are used to determine which device has control
over the link at a given time.
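As a hedged illustration of the Error Control idea above, the sketch below appends a CRC-32 value to a frame and verifies it on receipt; it uses Python's built-in zlib.crc32 rather than the exact polynomial any particular data-link protocol uses.

```python
# Error-detection sketch: append a CRC-32 to a frame and verify it on receipt.
# zlib.crc32 is used for illustration; real link layers use their own CRC polynomials.
import zlib

def make_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")      # trailer carries the checksum

def check_frame(frame: bytes) -> bool:
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = make_frame(b"data link layer frame")
print(check_frame(frame))                         # True: frame arrived intact

corrupted = bytearray(frame)
corrupted[0] ^= 0xFF                              # flip bits to simulate a transmission error
print(check_frame(bytes(corrupted)))              # False: receiver asks for retransmission
```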
3) Network Layer:
● It is Layer 3; it manages device addressing and tracks the location of devices on the
network.
● It determines the best path to move data from source to the destination based on the
network conditions, the priority of service and other factors.
● The network layer is responsible for routing and forwarding the packets.
● Routers are Layer 3 devices; they are specified in this layer and used to provide
routing services within an internetwork. The protocols used to route the network traffic
are known as network layer protocols.
Examples of such protocols are IPv4 and IPv6.
4) Transport Layer:
● The Transport layer is Layer 4; it ensures that messages are transmitted in the order in
which they are sent and that there is no duplication of data.
● The main responsibility of the transport layer is to transfer the data completely.
● It receives the data from the upper layer and converts them into smaller units known as
segments.
● This layer can be termed as an end-to-end layer as it provides a point-to-point
connection between source and destination to deliver the data reliably.
The two protocols used in this layer are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).
Transmission Control Protocol (TCP):
● It is a standard protocol that allows the systems to communicate over the internet.
● It establishes and maintains a connection between hosts.
● When data is sent over the TCP connection, then the TCP protocol divides the data
into smaller units known as segments. Each segment travels over the internet using
multiple routes and they arrive in different orders at the destination. The
transmission control protocol reorders the packets in the correct order at the
receiving end.
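A minimal sketch of the reordering behaviour just described: segments tagged with sequence numbers arrive out of order and are put back in order at the receiver. The segment contents and numbers are invented for illustration.

```python
# Reordering sketch: segments carry sequence numbers; the receiver sorts them
# before delivering the byte stream, as TCP does conceptually.
segments = [
    (2, b"over the "),   # (sequence number, data) -- arrived out of order
    (1, b"sent "),
    (3, b"internet"),
    (0, b"data "),
]
stream = b"".join(data for _, data in sorted(segments))
print(stream.decode())   # data sent over the internet
```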
User Datagram Protocol (UDP):
● It is a connectionless, unreliable transport protocol that delivers datagrams without
guaranteeing ordering or delivery; it is used when speed matters more than reliability.
5) Session Layer:
● Dialog control: Session layer acts as a dialog controller that creates dialog between two
processes or we can say that it allows the communication between two processes which
can be either half-duplex or full-duplex.
● Synchronization: Session layer adds some checkpoints when transiting the data in a
sequence. If some error occurs in the middle of the transmission of data, then the
transmission will take place again from the checkpoint. This process is known as
Synchronization and recovery.
6) Presentation Layer:
● A Presentation layer is mainly concerned with the syntax and semantics of the
information exchanged between the two systems.
● It acts as a data translator for a network.
● This layer is a part of the operating system that converts the data from one format to
another format. The Presentation layer is also known as the syntax layer.
● Translation: The processes in two systems exchange the information in the form of
character strings, numbers and so on. Different computers use different encoding
methods, the presentation layer handles the interoperability between the different
encoding methods. It converts the data from sender-dependent format into a common
format and changes the common format into receiver-dependent format at the receiving
end.
● Encryption: Encryption is needed to maintain privacy. Encryption is a process of
converting the sender-transmitted information into another form and sends the resulting
message over the network.
● Compression: Data compression is a process of compressing the data, i.e., it reduces the
number of bits to be transmitted. Data compression is very important in multimedia such
as text, audio, video.
7) Application Layer:
● An application layer serves as a window for users and application processes to access
network service.
● It handles issues such as network transparency, resource allocation, etc.
● An application layer is not an application, but it performs the application layer functions.
● This layer provides the network services to the end-users.
● File transfer, access and management (FTAM): An application layer allows a user to
access the files in a remote computer, to retrieve the files from a computer and to manage
the files in a remote computer.
● Mail services: An application layer provides the facility for email forwarding and
storage.
● Directory services: An application layer provides the distributed database sources and is
used to provide global information about various objects.
TCP/IP Protocol Suite
TCP/IP Model:
Internet Layer:
IP Protocol: The IP protocol is used in this layer, and it is the most significant part of the entire
TCP/IP suite. Its main responsibilities include IP addressing, host-to-host communication,
encapsulating data into packets, fragmentation and reassembly, and routing packets from source
to destination.
ARP Protocol: ARP (Address Resolution Protocol) is used to find the physical (MAC) address of
a host from its known IP address.
ICMP Protocol: ICMP (Internet Control Message Protocol) is used by hosts and routers to send
notifications about datagram problems (such as unreachable destinations) back to the sender.
Transport Layer:
● The transport layer is responsible for the reliability, flow control and correction of
data which is being sent over the network.
● The two protocols used in the transport layer are User Datagram protocol and
Transmission control protocol.
Source port address: The source port address is the address of the application program
that has created the message.
Destination port address: The destination port address is the address of the application
program that receives the message.
Total length: It defines the total length of the user datagram in bytes.
● UDP does not specify which packet is lost. UDP contains only checksum; it does
not contain any ID of a data segment.
Application Layer:
● HTTP: HTTP stands for Hypertext transfer protocol. This protocol allows us to
access the data over the World Wide Web. It transfers the data in the form of plain
text, audio, video. It is known as a Hypertext transfer protocol as it has the
efficiency to be used in a hypertext environment where there are rapid jumps from
one document to another.
● SNMP: SNMP stands for Simple Network Management Protocol. It is a framework
used for managing the devices on the internet by using the TCP/IP protocol suite.
● SMTP: SMTP stands for Simple mail transfer protocol. The TCP/IP protocol that
supports the email is known as a Simple mail transfer protocol. This protocol is
used to send the data to another email address.
● DNS: DNS stands for Domain Name System. An IP address is used to identify the
connection of a host to the internet uniquely. But, people prefer to use the names
instead of addresses. Therefore, the system that maps the name to the address is
known as Domain Name System.
● TELNET: It is an abbreviation for Terminal Network. It establishes the connection
between the local computer and remote computer in such a way that the local
terminal appears to be a terminal at the remote system.
● FTP: FTP stands for File Transfer Protocol. FTP is a standard internet protocol
used for transmitting the files from one computer to another computer.
Computer network components are the major parts that are needed to set up a computer network.
Some important network components are the NIC, switch, cable, hub, router, and modem.
Depending on the type of network that we need to install, some network components can
also be omitted. For example, a wireless network does not require a cable. Following are
the major components required to install a network:
NIC: A NIC (Network Interface Card) is a hardware component used to connect a computer to the network. There are two types of NIC:
Wired NIC: The Wired NIC is present inside the motherboard. Cables and connectors are used
with wired NIC to transfer data.
Wireless NIC: The wireless NIC contains the antenna to obtain the connection over the wireless
network. For example, laptop computers contain the wireless NIC.
Hub: A Hub is a hardware device that divides the network connection among multiple devices.
When a computer requests for some information from a network, it first sends the request to the
hub through cable. Hub will broadcast this request to the entire network. All the devices will
check whether the request belongs to them or not. If not, the request will be dropped. The process
used by the hub consumes more bandwidth and limits the amount of communication. Nowadays,
the use of hubs is obsolete, and they have been replaced by more advanced computer network
components such as switches and routers.
Switch:
A switch is a hardware device that connects multiple devices on a computer network. A switch
contains more advanced features than a hub. The switch maintains an updated table (the MAC
address table) that decides where the data is to be transmitted, and it delivers the message to the
correct destination based on the physical address present in the incoming message. A switch does
not broadcast the message to the entire network like a hub does; it determines the device to which
the message is to be transmitted. Therefore, we can say that the switch provides a direct
connection between the source and destination. It increases the speed of the network.
Router: A router is a hardware device used to receive, analyze and forward incoming packets toward their destination network. It works at the network layer (Layer 3) and chooses the best available path for a packet, so it is used to connect different networks together.
Advantages of Router:
Security: The information transmitted on the network traverses the entire cable, but only
the specified device that has been addressed can read the data.
Reliability: If one network served by the router stops functioning, the other networks
served by the router are not affected.
Performance: Router enhances the overall performance of the network. Suppose there are
24 workstations in a network that generate the same amount of traffic. This increases the
traffic load on the network. Router splits the single network into two networks of 12
workstations each, reducing the traffic load by half.
Modem:
● A modem is a hardware device that allows the computer to connect to the internet over
the existing telephone line.
● A modem is not integrated with the motherboard rather it is installed on the PCI slot
found on the motherboard.
● It stands for Modulator/Demodulator. It converts the digital data into an analog
signal over the telephone lines.
Based on the differences in speed and transmission rate, a modem can be classified in
the following categories:
● Standard PC modem or Dial-up modem
● Cellular Modem
● Cable modem
IPV4 Address
● The identifier used in the IP layer of the TCP/IP protocol suite to identify the connection
of each device to the Internet is called the Internet address or IP address.
● Internet Protocol version 4 (IPv4) is the fourth version in the development of the Internet
Protocol (IP) and the first version of the protocol to be widely deployed.
● The IP address is the address of the connection, not the host or the router. An IPv4
address is a 32-bit address that uniquely and universally defines the connection.
● If the device is moved to another network, the IP address may be changed.
● IPv4 addresses are unique in the sense that each address defines one, and only one,
connection to the Internet. If a device has two connections to the Internet, via two
networks, it has two IPv4 addresses.
● IPv4 addresses are universal in the sense that the addressing system must be accepted by
any host that wants to be connected to the Internet.
IPV4 ADDRESS SPACE
There are three common notations to show an IPv4 address: (i) binary notation (base 2), (ii)
dotted-decimal notation (base 256), and (iii) hexadecimal notation (base 16).
● In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between bytes (8 bits).
● In dotted-decimal notation, IPv4 addresses are usually written in decimal form with a
decimal point (dot) separating the bytes. Each number in the dotted-decimal notation is
between 0 and 255.
● In hexadecimal notation, each hexadecimal digit is equivalent to four bits. This means
that a 32-bit address has 8 hexadecimal digits. This notation is often used in network
programming.
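The three notations can be compared for a single address with a short sketch; the address 192.168.1.10 is only an example, not a value from these notes.

```python
# Show one IPv4 address in dotted-decimal, binary, and hexadecimal notation.
addr = "192.168.1.10"                      # example address, not from the notes
octets = [int(o) for o in addr.split(".")]

binary = " ".join(f"{o:08b}" for o in octets)   # 32 bits, spaced per byte
hexa   = "".join(f"{o:02X}" for o in octets)    # 8 hexadecimal digits

print("Dotted-decimal:", addr)
print("Binary        :", binary)    # 11000000 10101000 00000001 00001010
print("Hexadecimal   :", hexa)      # C0A8010A
```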
Internet Protocol, being a layer-3 (network layer) protocol in the OSI model, takes data segments
from layer-4 (Transport) and divides them into packets. An IP packet encapsulates the data unit
received from the above layer and adds its own header information to it.
The encapsulated data is referred to as IP Payload. The IP header contains all the necessary
information to deliver the packet at the other end.
The IP header includes much relevant information, including the Version Number, which, in this
context, is 4. Other details are as follows −
ECN − Explicit Congestion Notification; It carries information about the congestion seen in the
route.
Identification − If the IP packet is fragmented during the transmission, all the fragments contain
the same identification number. to identify the original IP packet they belong to.
Flags − As required by the network resources, if an IP Packet is too large to handle, these ‘flags’
tell if they can be fragmented or not. In this 3- bit flag, the MSB is always set to ‘0’.
Fragment Offset − This offset tells the exact position of the fragment in the original IP Packet.
Time to Live − To avoid looping in the network, every packet is sent with some TTL value set,
which tells the network how many routers (hops) this packet can cross. At each hop, its value is
decremented by one and when the value reaches zero, the packet is discarded.
Protocol − Tells the network layer at the destination host to which protocol this packet belongs,
i.e., the next-level protocol. For example, the protocol number of ICMP is 1, TCP is 6 and UDP
is 17.
Header Checksum − This field is used to keep the checksum value of the entire header which is
then used to check if the packet is received error-free.
Source Address − 32-bit address of the Sender (or source) of the packet.
Destination Address − 32-bit address of the Receiver (or destination) of the packet.
Options − This is an optional field, which is used if the value of IHL is greater than 5. These
options may contain values for options such as Security, Record Route, Time Stamp.
CLASSFUL ADDRESSING
Class B
● In Class B, an IP address is assigned to those networks that range from small- sized to
large-sized networks.
● The Network ID is 16 bits long and Host ID is 16 bits long.
● In Class B, the higher order bits of the first octet are always set to 10, and the
remaining14 bits determine the network ID.
● The other 16 bits determine the Host ID.
● The total number of networks in Class B = 2^14 = 16384 network addresses
● The total number of hosts in Class B = 2^16 − 2 = 65534 host addresses
Class C
● In Class C, an IP address is assigned to only small-sized networks.
● The Network ID is 24 bits long and host ID is 8 bits long.
● In Class C, the higher order bits of the first octet are always set to 110, and the remaining
21 bits determine the network ID.
● The 8 bits of the host ID determine the host in a network.
● The total number of networks = 2^21 = 2097152 network addresses
● The total number of hosts = 2^8 − 2 = 254 host addresses
Class D
● In Class D, the higher order bits of the first octet are always set to 1110, and the
remaining bits are used for multicast addresses.
● Class D has no network/host division and does not possess any subnetting.
Class E
● In Class E, an IP address is used for future use or for the research and development
purposes.
● It does not possess any subnetting.
● The higher order bits of the first octet are always set to 1111, and the remaining bits
determine the host ID in any network.
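As a quick illustration of the classful ranges above, the sketch below classifies an address by the value of its first octet; the sample addresses are assumptions chosen for illustration.

```python
# Classify an IPv4 address into class A-E from the leading bits of the first octet.
def ip_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first < 128:   return "A"      # leading bit 0
    if first < 192:   return "B"      # leading bits 10
    if first < 224:   return "C"      # leading bits 110
    if first < 240:   return "D"      # leading bits 1110 (multicast)
    return "E"                        # leading bits 1111 (reserved)

for sample in ["10.1.1.1", "172.16.5.4", "192.168.1.10", "224.0.0.5", "250.1.2.3"]:
    print(sample, "-> Class", ip_class(sample))
```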
Subnetting
In subnetting, a class A or class B block is divided into several subnets. Each subnet has a larger
prefix length than the original network. For example, if a network in class A (prefix length 8) is
divided into four subnets, each subnet has a prefix length of n_sub = 8 + 2 = 10. At the same time, if all of the addresses in a
network are not used, subnetting allows the addresses to be divided among several organizations.
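The class A example above (four subnets, prefix length 10) can be checked with Python's ipaddress module; 10.0.0.0/8 is just a sample class A block.

```python
# Divide a class A block into four subnets; each subnet prefix grows from /8 to /10.
import ipaddress

block = ipaddress.ip_network("10.0.0.0/8")        # sample class A block
for subnet in block.subnets(prefixlen_diff=2):    # 2 extra prefix bits -> 4 subnets
    print(subnet, "-", subnet.num_addresses, "addresses")
# 10.0.0.0/10, 10.64.0.0/10, 10.128.0.0/10, 10.192.0.0/10
```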
An IPv6 address is made of 128 bits divided into eight 16-bit blocks. Each block is then
written as four hexadecimal digits, and the blocks are separated by colon symbols.
IPv6 fixed header is 40 bytes long and contains the following information.
Traffic Class (8-bits): These 8 bits are divided into two parts. The most significant 6 bits are
used for Type of Service to let the Router Know what services should be provided to this packet.
The least significant 2 bits are used for Explicit Congestion Notification (ECN).
Flow Label (20-bits): This label is used to maintain the sequential flow of the packets belonging
to a communication. The source labels the sequence to help the router identify that a particular
packet belongs to a specific flow of information. This field helps avoid re-ordering of data
packets. It is designed for streaming/real-time media.
Payload Length (16-bits): This field is used to tell the routers how much information a
particular packet contains in its payload. Payload is composed of Extension Headers and Upper
Layer data. With 16 bits, up to 65535 bytes can be indicated; but if the Extension Headers
contain Hop-by-Hop Extension Header, then the payload may exceed 65535 bytes and this field
is set to 0.
Next Header (8-bits): This field is used to indicate either the type of Extension Header, or if the
Extension Header is not present then it indicates the Upper Layer PDU. The values for the type
of Upper Layer PDU are the same as in IPv4.
Hop Limit (8-bits): This field is used to stop packets from looping in the network infinitely. It is
the same as TTL in IPv4. The value of the Hop Limit field is decremented by 1 as the packet
passes a link (router/hop); when the field reaches 0, the packet is discarded.
Source Address (128-bits): This field indicates the address of the originator of the packet.
Destination Address (128-bits): This field provides the address of the intended recipient of the
packet.
1. Consider sending a 2400-byte datagram into a link that has an MTU of 700 bytes.
Suppose the original datagram is stamped with the identification number 422. How many
fragments are generated? What are the values in the various fields in the IP datagram(s)
generated related to fragmentation.
The Maximum Transmission Unit (MTU) of the link is 700 bytes.The original datagram size is
2400 bytes. An IP header is 20 bytes (assuming no options). The maximum payload per fragment
is:
700 − 20 = 680 bytes
The fragmentation process requires that the 2380 bytes of data (2400 − 20) be divided into
680-byte chunks:
2380 / 680 ≈ 3.5, so 4 fragments are generated.
Since the fragment offset is measured in 8-byte units, the payload of each fragment (except the
last) must be a multiple of 8 bytes; 680 already satisfies this, so no adjustment is needed.
Each fragment contains an IP header with the following key fields:
First Fragment: total length = 700 bytes (680 data + 20 header), fragment offset = 0, MF flag = 1.
Second Fragment: total length = 700 bytes, fragment offset = 85 (680 / 8), MF flag = 1.
Third Fragment: total length = 700 bytes, fragment offset = 170 (1360 / 8), MF flag = 1.
Fourth Fragment: total length = 360 bytes (340 data + 20 header), fragment offset = 255 (2040 / 8), MF flag = 0.
Final Answer:
● Number of fragments: 4
● Fragment fields:
○ Identification: 422 (same for all)
○ Total Lengths: 700, 700, 700, 360
○ Fragment Offsets: 0, 85, 170, 255
○ MF Flag: 1, 1, 1, 0
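The fragment sizes, offsets, and flags above can be reproduced with a short sketch. The function below is a simplified model that assumes a 20-byte header with no options and ignores the DF flag.

```python
# Simplified IPv4 fragmentation model: 20-byte header, payload per fragment
# rounded down to a multiple of 8 bytes (offsets are in 8-byte units).
def fragment(total_len, mtu, ident, header=20):
    max_payload = ((mtu - header) // 8) * 8
    data = total_len - header
    frags, offset = [], 0
    while data > 0:
        payload = min(max_payload, data)
        data -= payload
        frags.append({
            "id": ident,
            "total_length": payload + header,
            "offset": offset // 8,          # offset field counts 8-byte units
            "mf": 1 if data > 0 else 0,     # More Fragments flag
        })
        offset += payload
    return frags

for f in fragment(2400, 700, 422):
    print(f)
# id 422; total lengths 700, 700, 700, 360; offsets 0, 85, 170, 255; MF 1, 1, 1, 0
# The same idea applies to the second problem below, where the fragments produced for
# the 1000-byte MTU link are fragmented again for the 600-byte MTU link.
```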
2.Consider sending a 3500 - byte datagram that has arrived at a router R1 that needs to be
sent over a link that has an MTU size of 1000 bytes to R2. Then it has to traverse a link
with an MTU of 600 bytes. Let the identification number of the original datagram be 465.
The fragmentation process as a 3500-byte datagram passes through two different links with
different MTU sizes:
Over the first link (MTU 1000), the datagram carries 3500 − 20 = 3480 bytes of data, and each fragment can carry at most 980 bytes of data, so 3480 / 980 ≈ 3.55, which means 4 fragments are needed.
Now, the four fragments must be sent over a 600-byte MTU link, which means they need to be
re-fragmented.
Final Answer:
This is the final fragmentation process ensuring the datagram can be transmitted across both
links efficiently.
ROUTING
● Routing is the process of determining paths through a network for sending data packets.
● It ensures that data moves effectively from source to destination, making the best use of
network resources and ensuring consistent communication.
● Routing performed by layer 3 (or network layer) devices to deliver the packet by
choosing an optimal path from one network to another.
● It is an autonomous process handled by the network devices to direct a data packet to its
intended destination. The node here refers to a network device called Router.
Types of Routing: Routing is typically of 3 types, each serving its purpose and offering different
functionalities.
1. Static Routing
Static routing is also called “non-adaptive routing”. In this, routing configuration is done
manually by the network administrator. Let’s say for example, we have 5 different routes to
transmit data from one node to another, so the network administrator will have to manually enter
the routing information by assessing all the routes.
Advantages of Static Routing
● No routing overhead for the router CPU which means a cheaper router can be used to do
routing.
● It adds security because only an administrator can allow routing to particular networks
only.
● No bandwidth usage between routers.
Disadvantage of Static Routing
● For a large network, it is a hectic task for administrators to manually add each route for
the network in the routing table on each router.
● The administrator should have good knowledge of the topology. If a new administrator
comes, then he has to manually add each route so he should have very good knowledge
of the routes of the topology.
2. Default Routing
This is the method where the router is configured to send all packets toward a single router (next
hop). It doesn’t matter to which network the packet belongs, it is forwarded out to the router
which is configured for default routing. It is generally used with stub routers. A stub router is a
router that has only one route to reach all other networks.
Advantages of Default Routing
● Default routing provides a “last resort” route for packets that don’t match any specific
route in the routing table. It ensures that packets are not dropped and can reach their
intended destination.
● It simplifies network configuration by reducing the need for complex routing tables.
● Default routing improves network reliability and reduces packet loss.
Disadvantages of Default Routing
● Relying solely on default routes can lead to inefficient routing, as it doesn’t consider
specific paths.
● Using default routes may introduce additional network latency.
3. Dynamic Routing
Dynamic routing makes automatic adjustments of the routes according to the current state of the
route in the routing table. Dynamic routing uses protocols to discover network destinations and
the routes to reach them. RIP and OSPF are the best examples of dynamic routing protocols.
Automatic adjustments will be made to reach the network destination if one route goes down. A
dynamic protocol has the following features:
● The routers should have the same dynamic protocol running in order to exchange routes.
● When a router finds a change in the topology then the router advertises it to all other
routers.
Advantages of Dynamic Routing
● Easy to configure.
● More effective at selecting the best route to a destination remote network and also for
discovering remote networks.
Disadvantage of Dynamic Routing
● Consumes more bandwidth for communicating with other neighbors.
● Less secure than static routing.
ROUTING PROTOCOL:
There are three main classes of routing protocols: distance vector routing, link state routing, and path vector routing.
Distance Vector Routing:
● Distance vector routing is distributed, i.e., the algorithm is run on all nodes.
● Each node knows the distance (cost) to each of its directly connected neighbors.
● Nodes construct a vector (Destination, Cost, NextHop) and distribute to its neighbors.
● Nodes compute a routing table of minimum distance to every other node via NextHop
using information obtained from its neighbors.
Initial State
● Each node sends its initial table (distance vector) to neighbors and receives their
estimate.
● Node A sends its table to nodes B, C, E & F and receives tables from nodes B, C, E & F.
● Each node updates its routing table by comparing with each of its neighbor's table
● For each destination, Total Cost is computed as: Total Cost = Cost (Node to Neighbor) +
Cost (Neighbor to Destination)
● If Total Cost < Cost then Cost = Total Cost and NextHop = Neighbor
● Node A learns from C's table to reach node D and from F's table to reach node G.
● Total Cost to reach node D via C = Cost(A to C) + Cost(C to D) = 1 + 1 = 2.
● Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
● Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2
● Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
● Each node builds a complete routing table after a few exchanges amongst its neighbors.
● System stabilizes when all nodes have complete routing information, i.e., convergence.
● Routing tables are exchanged periodically or in case of a triggered update until each node
stores the final distance to every destination.
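The update rule Total Cost = Cost(node to neighbour) + Cost(neighbour to destination) can be sketched as one update step at node A; the link costs and received vectors below are assumptions that mirror the A, C, D, F, G example.

```python
# One distance-vector update step at node A, following the rule:
# if cost(A, neighbour) + neighbour's cost to a destination is cheaper, adopt it.
INF = float("inf")

# A's directly connected neighbours and link costs (illustrative topology)
link_cost = {"C": 1, "F": 1}

# A's current table: destination -> (cost, next hop)
table = {"C": (1, "C"), "F": (1, "F"), "D": (INF, None), "G": (INF, None)}

# Distance vectors received from the neighbours
received = {
    "C": {"D": 1},   # C can reach D at cost 1
    "F": {"G": 1},   # F can reach G at cost 1
}

for neighbour, vector in received.items():
    for dest, cost in vector.items():
        total = link_cost[neighbour] + cost
        if total < table.get(dest, (INF, None))[0]:
            table[dest] = (total, neighbour)   # e.g. D becomes (2, C), G becomes (2, F)

print(table)
```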
There are two different circumstances under which a given node decides to send a routing
update to its neighbors.
Periodic Update
In this case, each node automatically sends an update message every so often, even if
nothing has changed. The frequency of these periodic updates varies from protocol to
protocol, but it is typically on the order of several seconds to several minutes.
Triggered Update
In this case, whenever a node notices a link failure or receives an update from one of its
neighbors that causes it to change one of the routes in its routing table, it immediately sends
an update to its neighbors.
Whenever a node’s routing table changes, it sends an update to its neighbors, which may
lead to a change in their tables, causing them to send an update to their neighbors.
Routing Information Protocol (RIP):
● Routers advertise the cost of reaching networks. The cost of reaching each link is 1
hop. For example, router C advertises to A that it can reach networks 2, 3 at cost 0
(directly connected), networks 5, 6 at cost 1 and network 4 at cost 2.
● Each router updates cost and next hop for each network number.
● Infinity is defined as 16, i.e., any route cannot have more than 15 hops.
● Therefore RIP can be implemented on small-sized networks only.
● Advertisements are sent every 30 seconds or in case of triggered update.
● Command - It indicates the packet type. Value 1 represents a request packet.
Value 2 represents a response packet.
● Version - It indicates the RIP version number. For RIPv1, the value is 0x01.
● Address Family Identifier - When the value is 2, it represents the IP protocol.
● IP Address - It indicates the destination IP address of the route. It can be the
addresses of only the natural network segment.
● Metric - It indicates the hop count of a route to its destination.
Link State Routing:
Each node knows the state of the link to each of its neighbors and the cost of that link.
Nodes create an update packet called link-state packet (LSP) that contains:
● ID of the node
● List of neighbors for that node and associated cost
● 64-bit Sequence number
● Time to live
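Once a node has collected LSPs from every other node, it can run a shortest-path computation over the resulting map. Below is a minimal Dijkstra sketch over an assumed four-node topology; it is an illustration of the idea, not part of any specific routing protocol implementation.

```python
# Link-state route computation: Dijkstra's algorithm over the map built from LSPs.
import heapq

# graph[node] = {neighbour: link cost}; an assumed topology for illustration
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_paths(source):
    dist = {n: float("inf") for n in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                      # stale entry, skip it
        for neighbour, cost in graph[node].items():
            if d + cost < dist[neighbour]:
                dist[neighbour] = d + cost
                heapq.heappush(heap, (d + cost, neighbour))
    return dist

print(shortest_paths("A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```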
● The Border Gateway Protocol version 4 (BGP4) is the only interdomain routing protocol
used in the Internet today.
● BGP4 is based on the path-vector algorithm. It provides information about the
reachability of networks on the Internet.
● BGP views the internet as a set of autonomous systems interconnected arbitrarily.
● Each AS has a border router (gateway), by which packets enter and leave that AS. In the
above figure, R3 and R4 are border routers.
● One of the routers in each autonomous system is designated as a BGP speaker.
● BGP speakers exchange reachability information with other BGP speakers, known as
external BGP sessions.
● BGP advertises a complete path as an enumerated list of AS (path vector) to reach a
particular network.
● Paths must be without any loop, i.e., AS list is unique.
● For example, backbone network advertises that networks 128.96 and 192.4.153 can be
reached along the path .
● If there are multiple routes to a destination, the BGP speaker chooses one based on
policy.
● Speakers need not advertise any route to a destination, even if one exists.
● Advertised paths can be cancelled, if a link/node on the path goes down. This negative
advertisement is known as withdrawn route.
● Routes are not repeatedly sent; if there is no change, only keepalive messages are sent.
● Interior BGP (iBGP) is a variant of BGP.
● It is used by routers to distribute routing information learnt from other BGP speakers to
routers inside the same autonomous system.
● Each router in the AS is able to determine the appropriate next hop for all prefixes.
Switching
In large networks, there can be multiple paths from sender to receiver. The switching technique will
decide the best route for data transmission.Switching technique is used to connect the systems for
making one-to-one communication.
○ Circuit establishment
○ Data transfer
○ Circuit Disconnect
Circuit Switching can use either of two technologies: space division switches or time division switches.
Advantage of Circuit Switching:
○ Once the dedicated path is established, the only delay occurs in the speed of data
transmission.
Disadvantages of Circuit Switching:
○ It takes a long time to establish a connection (approximately 10 seconds), during which no data can
be transmitted.
○ It is more expensive than other switching techniques as a dedicated path is required for
each connection.
○ It is inefficient to use because once the path is established and no data is transferred, then
the capacity of the path is wasted.
○ In this case, the connection is dedicated therefore no other data can be transferred even if
the channel is free.
Message Switching
○ Message switching is a switching technique in which the entire message is transferred as a
complete unit and routed through intermediate nodes, where it is stored and then forwarded
(a store-and-forward approach).
○ The message switches must be equipped with sufficient storage to enable them to store
the messages until the message is forwarded.
○ The Long delay can occur due to the storing and forwarding facility provided by the
message switching technique.
Packet Switching
○ The packet switching is a switching technique in which the message is sent in one go, but
it is divided into smaller pieces, and they are sent individually.
○ The message splits into smaller pieces known as packets and packets are given a unique
number to identify their order at the receiving end.
○ Every packet contains some information in its headers such as source address, destination
address and sequence number.
○ Packets travel across the network, taking the shortest possible path.
○ All the packets are reassembled at the receiving end in the correct order.
○ If any packet is missing or corrupted, then a message is sent asking the sender to resend
the missing packets.
○ If the correct order of the packets is reached, then the acknowledgment message will be
sent.
Approaches Of Packet Switching:
There are two approaches to Packet Switching: the datagram approach and the virtual circuit approach.
Disadvantages of Packet Switching:
○ The packet switching technique cannot be implemented in those applications that require low
delay and high-quality services.
○ The protocols used in a packet switching technique are very complex and require high
implementation cost.
○ If the network is overloaded or corrupted, then it requires retransmission of the lost packets.
It can also lead to the loss of critical information if errors are not recovered.
An Intrusion Detection System (IDS) is a security tool that monitors network or system
activities to detect potential threats or unauthorized access attempts. Its primary role is to identify
any suspicious activity that could harm the system and alert the system administrator. The main
objective of IDS is to detect malicious activities such as unauthorized access, attacks, or policy
violations, and notify the administrator for investigation and response.
Attackers use several techniques to evade or confuse an IDS:
1. Address Spoofing: Attackers use fake or forged IP addresses to disguise their identity
and location, making it difficult for the IDS to trace the origin of the attack.
2. Fragmentation: Attackers break large malicious data into smaller fragments, which can
bypass IDS systems that might not reassemble the packets properly.
3. Pattern Evasion: Attackers modify the attack patterns to avoid detection by IDS systems
that are designed to search for specific known attack patterns.
4. Coordinated Attacks: Multiple attackers target the system simultaneously or use several
ports to confuse the IDS, making it harder to track and detect the attack.
Working of IDS
An IDS works by continuously monitoring the network or system for unusual or suspicious
behavior. The process typically involves the following steps:
1. Traffic Capture: The IDS captures network traffic or system data.
2. Pattern Matching or Anomaly Detection: The captured data is compared to known
attack patterns (signature-based detection) or a baseline of normal activity
(anomaly-based detection).
3. Alert Generation: When the IDS detects a suspicious event, it generates an alert and
notifies the administrator.
4. Investigation: The administrator investigates the alert to confirm whether it is a genuine
attack or a false positive. If it is an attack, they take appropriate measures to block or
mitigate the threat.
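A toy sketch of the pattern-matching step (step 2) and alert generation (step 3): captured payloads are compared against known signatures. The signatures and traffic strings are invented for illustration and are not a real rule set.

```python
# Toy signature-based detection: flag any captured payload containing a known pattern.
SIGNATURES = {
    "sql_injection": "' OR 1=1",          # illustrative patterns, not real IDS rules
    "path_traversal": "../../etc/passwd",
}

captured_traffic = [
    "GET /index.html HTTP/1.1",
    "GET /login?user=admin' OR 1=1-- HTTP/1.1",
]

for packet in captured_traffic:
    for name, pattern in SIGNATURES.items():
        if pattern in packet:
            print(f"ALERT: possible {name} in: {packet}")   # step 3: alert generation
```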
Types of IDS
1. Network Intrusion Detection System (NIDS)
A Network Intrusion Detection System (NIDS) is designed to monitor traffic across an entire
network. It examines data packets traveling through the network to detect signs of potential
attacks or malicious activity. The NIDS is typically installed at key locations within the network,
such as behind firewalls or at the network perimeter, to effectively monitor the traffic passing
through various network segments. By analyzing the network traffic, it can detect external
attacks trying to infiltrate the network. For example, a NIDS installed at the network's edge can
monitor all incoming external traffic to detect attacks like port scans or DDoS attacks, filtering
malicious traffic before it can harm the system.
2. Host Intrusion Detection System (HIDS)
A Host Intrusion Detection System (HIDS), on the other hand, is used to monitor specific
devices or hosts on the network, such as servers, personal computers, or workstations. HIDS is
installed directly on these devices and focuses on detecting suspicious activities or unauthorized
changes on that particular host. It works by creating a baseline snapshot of the system's
configuration and files when the system is known to be secure. It then continuously monitors for
any deviations from this baseline, such as unauthorized modifications, file deletions, or
configuration changes, and sends alerts when such changes are detected. For example, a HIDS
might be used on a critical server where the layout or configuration should remain stable. If any
change occurs to these files or configurations, it will alert the administrator.
3. Hybrid IDS
A Hybrid Intrusion Detection System combines both the features of NIDS and HIDS. This
type of IDS offers more comprehensive monitoring by integrating both network-level and
host-level activities. Hybrid IDS systems can detect a broader range of attacks by correlating
data from both the network traffic and the individual hosts. This dual approach enables it to
provide a more complete view of network security, allowing for better detection of coordinated
or complex attacks that might bypass either a standalone NIDS or HIDS. By combining the
strengths of both types of detection systems, Hybrid IDS improves overall detection capabilities.
IDS Evasion Techniques
Fragmentation: Dividing a packet into smaller packets, called fragments, is known as
fragmentation. This makes it harder to identify an intrusion because no single fragment carries
the complete malware signature.
Packet Encoding: Encoding packets using methods like Base64 or hexadecimal can hide
malicious content from signature-based IDS.
Encryption: Several security features such as data integrity, confidentiality, and data privacy, are
provided by encryption. Unfortunately, security features are used by malware developers to hide
attacks and avoid detection.
Benefits of IDS
● Detects Malicious Activity: IDS can detect any suspicious activities and alert the
system administrator before any significant damage is done.
● Improves Network Performance: IDS can identify any performance issues on the
network, which can be addressed to improve network performance.
● Compliance Requirements: IDS can help in meeting compliance requirements by
monitoring network activity and generating reports.
● Provides Insights: IDS generates valuable insights into network traffic, which can be
used to identify any weaknesses and improve network security.
Disadvantages of IDS
● False Alarms: IDS can generate false positives, alerting on harmless activities and
causing unnecessary concern.
● Resource Intensive: It can use a lot of system resources, potentially slowing down
network performance.
● Requires Maintenance: Regular updates and tuning are needed to keep the IDS
effective, which can be time-consuming.
● Doesn’t Prevent Attacks: IDS detects and alerts but doesn’t stop attacks, so
additional measures are still needed.
● Complex to Manage: Setting up and managing an IDS can be complex and may
require specialized knowledge.
A broadcast domain is a network segment in which if a device broadcasts a packet then all the
devices in the same broadcast domain will receive it. The devices in the same broadcast domain
will receive all the broadcast packets but it is limited to switches only as routers don’t forward
out the broadcast packet. To forward packets to a different VLAN (from one VLAN to another)
or to a different broadcast domain, inter-VLAN routing is needed. Through VLANs, different
small-sized sub-networks are created which are comparatively easy to handle.
There are three ways to connect devices on a VLAN, the type of connections are based on the
connected devices i.e. whether they are VLAN-aware(A device that understands VLAN formats
and VLAN membership) or VLAN-unaware(A device that doesn’t understand VLAN format and
VLAN membership).
1. Trunk link –
All devices connected to a trunk link must be VLAN-aware. All frames on this link should
have a special header attached to them, called tagged frames.
2. Access link –
It connects a VLAN-unaware device to the port of a VLAN-aware bridge or switch. All
frames on an access link are untagged.
3. Hybrid link –
It is a combination of the Trunk link and Access link. Here both VLAN-unaware and
VLAN-aware devices are attached and it can have both tagged and untagged frames.
Advantages of VLAN
● Improved network security: VLANs can be used to separate network traffic and
limit access to specific network resources. This improves security by preventing
unauthorized access to sensitive data and network resources.
● Better network performance: By segregating network traffic into smaller logical
networks, VLANs can reduce the amount of broadcast traffic and improve network
performance.
● Simplified network management: VLANs allow network administrators to group
devices together logically, rather than physically, which can simplify network
management tasks such as configuration, troubleshooting, and maintenance.
● Flexibility: VLANs can be configured dynamically, allowing network administrators
to quickly and easily adjust network configurations as needed.
● Cost savings: VLANs can help reduce hardware costs by allowing multiple virtual
networks to share a single physical network infrastructure.
● Scalability: VLANs can be used to segment a network into smaller, more
manageable groups as the network grows in size and complexity.
Disadvantages of VLAN
1. Complexity: VLANs can be complex to configure and manage, particularly in large
networks.
2. Compatibility: VLANs may not be compatible with all network devices and protocols,
which can limit their usefulness in cloud computing environments.
3. Limited mobility: VLANs may not support the movement of devices or users
between different network segments, which can limit their usefulness in mobile or
remote cloud computing environments.
4. Cost: Implementing and maintaining VLANs can be costly, especially if specialized
hardware or software is required.
Virtual LANs (VLANs) are widely used in cloud computing environments to improve network
performance and security. Here are a few examples of real-time applications of VLANs:
1. Voice over IP (VoIP) : VLANs can be used to isolate voice traffic from data traffic,
which improves the quality of VoIP calls and reduces the risk of network congestion.
2. Video Conferencing : VLANs can be used to prioritize video traffic and ensure that
it receives the bandwidth and resources it needs for high-quality video conferencing.
3. Remote Access : VLANs can be used to provide secure remote access to cloud-based
applications and resources, by isolating remote users from the rest of the network.
4. Cloud Backup and Recovery : VLANs can be used to isolate backup and recovery
traffic, which reduces the risk of network congestion and improves the performance
of backup and recovery operations.
5. Gaming : VLANs can be used to prioritize gaming traffic, which ensures that gamers
receive the bandwidth and resources they need for a smooth gaming experience.
6. IoT : VLANs can be used to isolate Internet of Things (IoT) devices from the rest of
the network, which improves security and reduces the risk of network congestion.
Authentication
Authentication is the mechanism to identify the user or system or the entity. It ensures the
identity of the person trying to access the information. The authentication is mostly secured by
using a username and password. An authorized person whose identity is pre-registered can prove
his/her identity and can access the sensitive information.
Integrity
Integrity gives the assurance that the information received is exact and accurate. If the content of
the message is changed after the sender sends it but before reaching the intended receiver, then it
is said that the integrity of the message is lost.
● System Integrity: System Integrity assures that a system performs its intended
function in an unimpaired manner, free from deliberate or inadvertent unauthorized
manipulation of the system.
● Data Integrity: Data Integrity assures that information (both stored and in
transmitted packets) and programs are changed only in a specified and authorized
manner.
Non-Repudiation
Non-repudiation is a mechanism that prevents the denial of the message content sent through a
network. In some cases the sender sends the message and later denies it, but non-repudiation
does not allow the sender to deny having sent the message.
Access Control
The principle of access control is determined by role management and rule management. Role
management determines who should access the data while rule management determines up to
what extent one can access the data. The information displayed is dependent on the person who
is accessing it.
Availability
The principle of availability states that resources will be available to authorized parties at all
times. Information is not useful if it is not available to be accessed. Systems should have
sufficient availability of information to satisfy the user request.
Risk management
Risk management in network security is the process of identifying, assessing, and responding to
risks that could compromise the network.
Risk Identification: The first step is to identify potential threats and vulnerabilities in the
network, such as weak passwords, unpatched systems, or exposed services.
Risk Assessment: Once risks are identified, they are analyzed to determine their potential
impact and likelihood of occurrence. This helps prioritize which risks need to be addressed first.
Risk Monitoring: Continuous monitoring of the network to detect any unusual activity or
potential threats. Regular security audits and penetration testing are also part of this process.
Risk Response Planning: Developing plans to respond to security incidents when they occur.
This includes having an incident response team and a detailed incident response plan in place.
Risk Communication: Keeping all stakeholders informed about the risks and the measures
taken to mitigate them. Effective communication ensures that everyone understands their role in
maintaining network security.
Risk Mitigation: This step involves implementing measures to reduce the impact or likelihood
of identified risks. This could include deploying firewalls, encryption, intrusion detection
systems, and regular security updates.
The Risk Management Framework (RMF) is a set of criteria that dictate how United States
government IT systems must be architected, secured, and monitored. Originally developed by the
Department of Defense (DoD), the RMF was adopted by the rest of the US federal information
systems in 2010. Today, the National Institute of Standards and Technology (NIST) maintains
the RMF, and it provides a solid foundation for any data security strategy.
Risk Identification
The first, and arguably the most important, part of the RMF is to perform
risk identification. NIST says, “the typical risk factors include threat, vulnerability, impact,
likelihood, and predisposing condition.” During this step, you will brainstorm all the possible
risks you can imagine across all of your systems and then prioritize them using different factors:
Threats are events that could potentially harm the organization by intrusion, destruction, or
disclosure.
Vulnerabilities are weaknesses in the IT systems, security, procedures, and controls that can be
exploited by bad actors (internal or external).
Impact is a measurement of how severe the harm to the organization would be if a particular
vulnerability is exploited or a threat is realized.
Likelihood is a measurement of the risk factor based on the probability of an attack on a specific
vulnerability.
Predisposing conditions are a specific factor inside the organization that either increases or
decreases the impact or likelihood that a vulnerability will come into play.
Once you have identified the threats, vulnerabilities, impact, likelihood, and predisposing
conditions, you can calculate and rank the risks your organization needs to address.
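One simple way to rank the identified risks is to combine likelihood and impact into a single score. The sketch below assumes a 1-5 scale for both factors, which is an illustrative choice rather than anything mandated by NIST.

```python
# Rank risks by a simple score = likelihood x impact (1-5 scales, assumed for illustration).
risks = [
    {"name": "Unpatched web server", "likelihood": 4, "impact": 5},
    {"name": "Lost unencrypted laptop", "likelihood": 3, "impact": 4},
    {"name": "Phishing of staff credentials", "likelihood": 5, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks should be mitigated first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["name"]}')
```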
Risk Mitigation
Organizations take the previous ranked list and start to figure out how to
mitigate the threats from the greatest to the least. At some point in the list, the organization can
decide that risks below this level are not worth addressing, either because there is little likelihood
of that threat getting exploited, or if there are too many greater threats to manage immediately to
fit the low threats into the work plan.
Risk Reporting and Monitoring
The RMF requires that organizations maintain a list of known
risks and monitor known risks for compliance with the policies. Statistics on data breaches
indicate that many companies still do not report all of the successful attacks they are exposed to,
which could impact their peers.
Organizations need to identify the risks they are exposed to and implement reasonable measures
to mitigate them. The RMF breaks down these activities into steps:
•Categorize: Use NIST standards to categorize information and systems so you can provide an
accurate risk assessment; NIST tells you what kinds of systems and information you should
include and what level of security you need to implement based on the categorization.
•Select: Choose a "consistent, comparable, and repeatable approach for selecting and specifying
security controls for systems."
•Implement: Put the controls you selected in the previous step in place and document all the
processes and procedures needed to maintain them.
•Assess: Make sure the security controls you implemented are working the way they need to so
you can limit risk to the organization.
Channel Interference
Channel interference in wireless networks occurs when multiple wireless signals overlap or
interfere with each other, leading to degraded network performance. It is a common issue in
environments with numerous wireless devices operating on the same or adjacent frequency
channels.
Types of Channel Interference
Adjacent-Channel Interference
•Happens when signals from nearby frequency channels overlap due to insufficient separation.
External Interference
•Caused by non-Wi-Fi devices like microwave ovens, Bluetooth devices, cordless phones, and
other equipment emitting in the same frequency band.
•Mitigation: Using higher frequency bands (e.g., 5 GHz, 6 GHz) and shielding techniques.
Inter-Symbol Interference
•Affects high-speed wireless communication when multiple signals overlap due to multipath
propagation.
•Can be reduced using equalization techniques and modulation schemes like OFDM (Orthogonal
Frequency Division Multiplexing).
Effects of Channel Interference
•Increased Latency: Delayed communication affecting real-time applications like VoIP and
gaming.
Mitigation Techniques
✔ Power Control: Adjust the transmission power to minimize unnecessary signal spread.
✔ Channel Selection: Choose less congested channels dynamically.
✔ Use of MIMO (Multiple Input Multiple Output): Enhances signal strength and reliability.
In general, careful channel planning, power control, and advanced signal processing methods can
help improve communication quality. In specialized networks like Wireless Body Area Networks
(WBANs) and VANETs, interference management requires additional care.
Symmetric vs Asymmetric Encryption
● Definition: Symmetric encryption uses a single key for both encryption and decryption,
while asymmetric encryption uses a pair of keys (a public key and a private key).
● Key Usage: In symmetric encryption the same key is shared between sender and receiver;
in asymmetric encryption the public key encrypts data and the private key decrypts it.
● Security: Symmetric encryption is less secure, as the same key must be shared, making the
key exchange vulnerable; asymmetric encryption is more secure since the private key is
never shared.
● Key Distribution: Symmetric encryption requires a secure method to share the key between
parties; asymmetric encryption has no need to share the private key, as only the public key
is shared.
● Examples: ElGamal is an example of an asymmetric (public-key) encryption algorithm, and
HTTPS websites are a common application of asymmetric encryption.
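The difference can be made concrete with a toy sketch: a shared-key XOR cipher for the symmetric case and textbook RSA with tiny primes for the asymmetric case. Both are illustrations only and are nowhere near the strength of real algorithms such as AES or production RSA.

```python
# Toy comparison of symmetric and asymmetric encryption (illustration only).

# --- Symmetric: one shared secret key encrypts and decrypts (XOR toy cipher) ---
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"                       # must be exchanged securely beforehand
ct = xor_cipher(b"hello", shared_key)
print(xor_cipher(ct, shared_key))            # b'hello' - the same key decrypts

# --- Asymmetric: textbook RSA with tiny primes (public key encrypts, private decrypts) ---
p, q = 61, 53
n = p * q                                    # 3233
phi = (p - 1) * (q - 1)                      # 3120
e = 17                                       # public exponent
d = pow(e, -1, phi)                          # private exponent = 2753 (Python 3.8+)

m = 65                                       # message encoded as a number
c = pow(m, e, n)                             # encrypt with the PUBLIC key (e, n)
print(pow(c, d, n))                          # 65 - decrypt with the PRIVATE key (d, n)
```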
UNIT-3 SECURITY FUNDAMENTALS
Introduction to Security Principles - Access Control and Authentication - Security Risk Management -
Security Policies and Procedures - Security Incident Response - Security Awareness Training -
Vulnerability Assessment and Management - Physical Security Considerations.
_____________________________________________________________________________
3.1 INTRODUCTION TO SECURITY PRINCIPLES
Key principles of information security and their importance.
The key principles of information security are designed to protect sensitive data and systems from
unauthorized access, damage, or disruption. These principles form the foundation of any effective security
framework and help organizations maintain the confidentiality, integrity, and availability of their information.
1. Confidentiality
Definition: Ensures that sensitive information is accessible only to those who are authorized to see it.
This prevents unauthorized access to personal, financial, or business data.
Importance: Breaches of confidentiality can lead to identity theft, fraud, or the leakage of sensitive
business information, which can have severe financial and reputational consequences for organizations.
2. Integrity
Definition: Ensures that the data is accurate, reliable, and has not been tampered with. It ensures that
information remains in its correct form and is free from unauthorized alterations.
Importance: If data integrity is compromised, it can lead to incorrect decisions, operational disruptions,
and a loss of trust. For example, tampered financial records could result in significant financial losses.
3. Availability
Definition: Ensures that information and systems are accessible when needed by authorized users. It is
about ensuring that the systems are operational and the data is available even during disruptions or
attacks.
Importance: Lack of availability due to an attack or failure can disrupt business operations, causing
financial loss, customer dissatisfaction, or even system outages.
4. Authentication
Definition: Verifying the identity of a user, system, or entity. It ensures that only legitimate users are
granted access.
Importance: Without proper authentication, unauthorized individuals could gain access to sensitive
systems and data.
5. Authorization
Definition: Determining what an authenticated user is allowed to do. This principle ensures that users
can only perform actions that they are authorized to do based on their role and permissions.
Importance: Proper authorization prevents unauthorized activities, such as unauthorized file access or
system changes, which could compromise security.
6. Non-repudiation
Definition: Ensures that actions or transactions cannot be denied by the person who performed them.
This is often achieved through logs or digital signatures.
Importance: Non-repudiation prevents users from denying their actions and provides accountability,
which is crucial for legal and regulatory compliance.
7. Accountability
Definition: Tracking and recording the actions of users and systems. This ensures that users can be held
responsible for their activities, and there is a trace of actions taken on critical systems.
Importance: Accountability helps in monitoring user behavior, detecting suspicious activities, and
providing evidence in case of security incidents or breaches.
8. Risk Management
Definition: Identifying, assessing, and prioritizing risks to the organization's information and systems. It
involves taking steps to reduce the potential impact of security threats.
Importance: Effective risk management ensures that resources are allocated properly to address the
most critical vulnerabilities and mitigate potential security risks.
Defense in depth is a layered security strategy that uses multiple layers of defense to protect information
and systems from threats. The idea is that if one security measure fails, others are in place to mitigate the
impact or prevent the breach altogether. It’s a proactive and comprehensive approach to cyber security,
where multiple security controls work together to provide a robust defense.
The concept can be applied at various levels of an organization’s IT environment, from the network and
systems to applications and user behavior.
Confidentiality
Confidentiality means that only authorized individuals/systems can view sensitive or classified
information. The data being sent over the network should not be accessed by unauthorized individuals.
The attacker may try to capture the data using different tools available on the Internet and gain access to
your information. A primary way to avoid this is to use encryption techniques to safeguard your data so
that even if the attacker gains access to your data, he/she will not be able to decrypt it. Encryption standards
include AES (Advanced Encryption Standard) and DES (Data Encryption Standard). Another way to
protect your data is through a VPN tunnel. VPN stands for Virtual Private Network and helps the data to
move securely over the network.
Integrity
The next thing to talk about is integrity. Well, the idea here is to make sure that data has not been
modified. Corruption of data is a failure to maintain data integrity. To check if our data has been
modified or not, we make use of a hash function.
We have two common types: SHA (Secure Hash Algorithm) and MD5 (Message Digest 5). MD5
is a 128-bit hash and SHA is a 160-bit hash if we're using SHA-1. There are also other SHA methods
that we could use like SHA-0, SHA-2, and SHA-3.
Let’s assume Host ‘A’ wants to send data to Host ‘B’ to maintain integrity. A hash function will run
over the data and produce an arbitrary hash value H1 which is then attached to the data. When Host ‘B’
receives the packet, it runs the same hash function over the data which gives a hash value of H2. Now,
if H1 = H2, this means that the data’s integrity has been maintained and the contents were not modified.
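The H1 = H2 check can be reproduced with Python's hashlib; SHA-256 is used here, but the same pattern applies to the SHA-1 or MD5 hashes mentioned above.

```python
# Integrity check: Host A sends data plus its hash; Host B recomputes and compares.
import hashlib

data = b"transfer 100 to account 42"
h1 = hashlib.sha256(data).hexdigest()         # hash H1 attached by Host A

received = data                               # what Host B receives
h2 = hashlib.sha256(received).hexdigest()     # hash H2 recomputed by Host B
print(h1 == h2)                               # True: integrity maintained

tampered = b"transfer 900 to account 42"
print(hashlib.sha256(tampered).hexdigest() == h1)   # False: contents were modified
```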
Availability
This means that the network should be readily available to its users. This applies to systems and to data.
To ensure availability, the network administrator should maintain hardware, make regular upgrades,
have a plan for fail-over, and prevent bottlenecks in a network. Attacks such as DoS or DDoS may
render a network unavailable as the resources of the network get exhausted. The impact may be
significant to the companies and users who rely on the network as a business tool. Thus, proper measures
should be taken to prevent such attacks.
Difference between data integrity and data availability
1. Data Integrity:
Definition: Data integrity refers to the accuracy, consistency, and trustworthiness of data throughout its
lifecycle. It ensures that the data remains unaltered, accurate, and reliable when it is created, stored, or
transmitted.
Goal: To make sure that data is correct, complete, and free from unauthorized modifications or
corruption.
Key Considerations:
o Prevention of unauthorized data modification (e.g., through encryption, checksums, and hash
functions).
o Ensuring data consistency during transmission or storage (e.g., error detection and correction).
Example: If a database record is altered unintentionally (due to a bug or malicious attack), that’s a data
integrity issue. Ensuring the correct data is reflected without modification is crucial.
2. Data Availability:
Definition: Data availability refers to ensuring that data is accessible and usable when needed. It focuses
on ensuring that authorized users can always access the required data when required, without
interruption.
Goal: To make sure that systems, databases, and data remain operational and accessible at all times,
minimizing downtime or outages.
Key Considerations:
o Redundancy and failover mechanisms (e.g., backup systems, cloud storage, load balancing).
o Ensuring uptime and access through robust infrastructure and disaster recovery plans.
Example: If a website goes down and users cannot access its data or services, that's a data availability
issue. Ensuring continuous access even in the face of hardware failure or high demand is key.
Key Difference:
Data Integrity focuses on maintaining the accuracy and consistency of the data.
Data Availability focuses on ensuring the data can be accessed and used when needed.
************************************************************************************
The Principle of Least Privilege (PoLP) is a fundamental concept in access control and security. It
states that users, systems, or applications should be granted only the minimum level of access
(privileges) necessary to perform their job functions. This helps limit the potential damage caused by
accidents, errors, or malicious actions by restricting access to only the resources that are essential for the
task at hand.
1. Granting only necessary permissions: Each user or system is given access to only the specific
resources they need, such as files, applications, or network components, and no more.
2. Restricting administrative rights: Only users or systems that truly need administrative or elevated
privileges should be granted such access, and this access should be used sparingly and carefully.
3. Minimizing exposure to risk: By limiting access, it reduces the chances of unauthorized access or
accidental modification of sensitive data.
4. Temporary and on-demand access: In some cases, access may be granted on a temporary or ad-
hoc basis for specific tasks, after which it is revoked.
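A minimal Python sketch of the default-deny idea behind least privilege; the users and permission names are illustrative assumptions, not taken from a real system.

# Each user gets only the permissions explicitly granted; everything else is refused.
GRANTS = {
    "alice": {"read:payroll"},                  # needs payroll reports only
    "bob":   {"read:wiki", "write:wiki"},
}

def is_allowed(user, permission):
    return permission in GRANTS.get(user, set())   # unknown user -> no access (default deny)

print(is_allowed("alice", "read:payroll"))   # True  (explicitly granted)
print(is_allowed("alice", "write:payroll"))  # False (never granted)
print(is_allowed("mallory", "read:wiki"))    # False (default deny)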
How Does Access Control Work?
Access control is used to regulate which verified users may access digital resources. It is also
used to grant access to physical buildings and physical devices.
Physical access control
Common examples of physical access controllers include:
1. Barroom bouncers
Bouncers can establish an access control list to verify IDs and ensure people entering bars are of legal
age.
2. Subway turnstiles
Access control is used at subway turnstiles to only allow verified people to use subway systems. Subway
users scan cards that immediately recognize the user and verify they have enough credit to use the
service.
3. Keycard or badge scanners in corporate offices
Organizations can protect their offices by using scanners that provide mandatory access control.
Employees need to scan a keycard or badge to verify their identity before they can access the building.
4. Logical/information access control
Logical access control involves tools and protocols being used to identify, authenticate, and authorize
users in computer systems. The access controller system enforces measures for data, processes,
programs, and systems.
5. Signing into a laptop using a password
A common form of data loss is through devices being lost or stolen. Users can keep their personal and
corporate data secure by protecting the device with a password.
6. Unlocking a smartphone with a thumbprint scan
Smartphones can also be protected with access controls that allow only the user to open the device. Users
can secure their smartphones by using biometrics, such as a thumbprint scan, to prevent unauthorized
access to their devices.
7. Remotely accessing an employer’s internal network using a VPN
A VPN allows employees to connect to the employer's internal network from outside the office. The VPN
authenticates the user and encrypts the traffic travelling over the public internet, so only authorized
employees can reach internal resources remotely.
Attribute-Based Access Control (ABAC)
Applications:
Cloud computing: Cloud environments often use ABAC to handle complex access control
across distributed systems and varied users with different devices, locations, and contexts.
Healthcare: ABAC can ensure that only authorized medical staff have access to patient data,
considering factors such as role, time of day, and access location (e.g., remote access
restrictions).
Financial services: ABAC can enforce policies where users can access financial data only
under specific conditions, such as from a secure device and during business hours.
Examples:
XACML (eXtensible Access Control Markup Language): A standard for defining and
enforcing ABAC policies in an XML format. It evaluates access decisions based on attributes
like user’s department, time of access, and resource classification.
Cloud Identity and Access Management (IAM) systems: Many cloud providers like AWS
and Azure offer ABAC-based IAM systems that manage access using attributes like user
location, device type, and other environmental conditions.
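A minimal Python sketch of an ABAC-style decision; the attributes and the policy rule are illustrative assumptions, not a real product's policy language.

from datetime import time

# Access is decided from attributes of the user, the resource, and the environment.
def abac_decision(user, resource, env):
    return (
        user["role"] == "doctor"
        and resource["type"] == "patient_record"
        and resource["department"] == user["department"]
        and env["device_trusted"]
        and time(8, 0) <= env["time"] <= time(18, 0)   # business hours only
    )

request = {
    "user": {"role": "doctor", "department": "cardiology"},
    "resource": {"type": "patient_record", "department": "cardiology"},
    "env": {"device_trusted": True, "time": time(10, 30)},
}
print(abac_decision(**request))   # True; changing any attribute denies access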
Discretionary Access Control (DAC)
Owner-based: The owner of the resource has the authority to grant or revoke permissions.
Flexible and user-driven: Users can share their resources with others without significant
oversight from administrators.
Granular permissions: Owners can assign specific read, write, or execute permissions.
Applications:
File systems: In operating systems like Windows or Linux, the owner of a file or directory has
control over who can read, modify, or delete the file.
Home or small office environments: Suitable for environments with fewer users and lower
security concerns.
Mandatory Access Control (MAC)
Applications:
Military and government systems: Used in environments where data sensitivity is critical,
such as classified information.
High-security environments: Where confidentiality and strict control over data access are
necessary (e.g., healthcare or defense sectors).
Role-Based Access Control (RBAC)
Role-based: Users are assigned to specific roles that define their access permissions.
Least privilege: Access is restricted to the minimum necessary for a user to perform their job
duties.
Simplifies management: Permissions are granted to roles, not individuals, making it easier to
manage large numbers of users.
Applications:
Enterprise environments: In companies, users are grouped based on job functions (e.g., an
HR employee may have access to payroll data, but a marketing employee will not).
Healthcare: Different roles such as doctors, nurses, and administrative staff might have access
to different levels of patient data.
Education: Teachers, students, and administrative staff may have different access to resources
such as course materials and grades.
Authentication
Authentication is the process of verifying the identity of a user, device, or system before allowing
access to resources. It ensures that the entity attempting to access the system is who they claim to be.
Example: Logging into an online banking system using a username and password is
authentication.
Authentication is a process of verifying the identity of an individual or system to make sure they are who
they claim to be. Effective authentication mechanisms are crucial because they help protect against
unauthorized access, data breaches, and other security threats. The primary importance of authentication
in information security includes:
1. Preventing Unauthorized Access
Authentication ensures that only users with valid credentials are granted access to systems or data. Without
proper authentication, anyone could potentially access sensitive resources, leading to security breaches,
data theft, or malicious activities.
2. Maintaining Confidentiality and Integrity
By verifying the identity of users, authentication mechanisms help maintain the confidentiality and
integrity of information. Sensitive data, such as personal information, financial records, and intellectual
property, can be kept safe when access is restricted only to those who need it.
3. Accountability and Traceability
Authentication allows organizations to track and monitor user actions. Each user has a unique identifier
(e.g., username), and by validating their identity, organizations can ensure accountability. If a security
breach occurs, it’s possible to trace the actions of the person or system responsible.
4. Preventing Impersonation
Authentication helps prevent attackers from impersonating legitimate users (e.g., via stolen credentials or
social engineering attacks). Strong authentication mechanisms make it harder for malicious actors to gain
unauthorized access, safeguarding against identity theft and fraud.
There are several methods of authentication, each with varying levels of security and usability. Here are
the most common types:
1. Password-based Authentication
This is the most widely used form of authentication, where users provide a secret string of characters
(password) to prove their identity. While simple, password-based authentication can be vulnerable if
passwords are weak, reused, or stolen.
Best Practices: use long, unique passwords for every account; never reuse passwords across services; store
them with a password manager; and, on the server side, store only salted, slow hashes of passwords rather
than plain text (a minimal sketch of this follows).
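A minimal Python sketch of salted password hashing and verification using the standard hashlib and hmac modules; the password values are illustrative.

import hashlib
import hmac
import os

# Store passwords only as salted, slow hashes; verify by recomputing the hash.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                       # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

salt, stored = hash_password("S3cure!passphrase")
print(verify_password("S3cure!passphrase", salt, stored))   # True
print(verify_password("guess123", salt, stored))            # False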
2. Two-Factor Authentication (2FA)
Two-factor authentication (2FA) adds an extra layer of security by requiring two forms of authentication,
typically something you know (a password) combined with something you have (a phone or token).
2FA significantly increases security because even if an attacker steals the password, they still need access
to the second factor (e.g., a phone or token) to complete the authentication process.
3. Multi-Factor Authentication (MFA)
MFA extends the concept of 2FA by requiring more than two authentication factors. It typically combines
something you know (password), something you have (security token), and something you are (biometric
authentication like fingerprints or face recognition).
MFA is widely regarded as one of the most secure methods, as it makes it much harder for attackers to
gain unauthorized access.
Multifactor authentication (MFA) plays a crucial role in strengthening security by adding additional layers
of verification to the authentication process. Unlike traditional single-factor authentication, which relies
solely on something the user knows (like a password), MFA requires multiple forms of evidence to verify
a user’s identity. This makes it significantly more difficult for attackers to gain unauthorized access, even
if they compromise one factor (e.g., a password).
Password theft is one of the most common attack vectors. If an attacker gains access to a user's
password (through phishing, data breaches, or other means), they could easily impersonate the
user. However, MFA adds an extra layer of protection. Even if an attacker has a stolen password,
they still need the second or third authentication factor (e.g., a phone, hardware token, or biometric
data) to complete the authentication process.
Example: An attacker may steal a password through a phishing attack, but unless they also have
the victim’s phone or access to the authentication app (such as Google Authenticator), they will
not be able to access the account.
MFA significantly reduces the effectiveness of phishing attacks. In a typical phishing attack, an
attacker tricks a user into providing their login credentials. However, with MFA, even if the
attacker captures the user’s password, they cannot log in without the additional authentication
factor.
For example, time-based one-time passwords (TOTP) sent to a mobile device or generated by a
hardware token are typically valid for only a short period. If an attacker tries to use a stolen
password without the OTP, they will be blocked.
In the case of a large-scale data breach, where attackers obtain millions of usernames and
passwords, the impact is much less severe if MFA is in place. Even if attackers obtain valid
usernames and passwords, the stolen credentials alone are insufficient for access.
Example: If a major online retailer is hacked and user credentials are exposed, those credentials
alone will not be enough for an attacker to access accounts that are protected by MFA.
5. Adaptive Authentication
Some advanced MFA systems incorporate adaptive authentication, which adjusts the level of
verification required based on certain risk factors (such as location, device, or behavior). For
example, if a user attempts to log in from an unfamiliar location or device, the system might require
additional authentication factors, like answering security questions or biometric verification.
This dynamic, risk-based approach helps balance security with user convenience.
Insider threats (e.g., employees or contractors with authorized access) are a significant concern in
many organizations. MFA reduces the risk that insiders can misuse their access, as it requires more
than just login credentials to gain access to critical systems or data.
Example: An employee who leaves their workstation unattended could be at risk of having their
session hijacked. However, with MFA, a second factor (e.g., a phone-based OTP) is needed to
access sensitive data, reducing the likelihood of unauthorized access.
For organizations, MFA is especially important when employees access systems remotely, such as
through Virtual Private Networks (VPNs) or cloud-based applications. Remote access introduces
more security risks because users are connecting from outside the secure corporate network, often
from personal or untrusted devices.
By enforcing MFA for remote access, organizations can ensure that only authorized users, even if
they are working from outside the corporate network, can securely access systems and sensitive
data.
Many industries are governed by compliance regulations that require the use of MFA to protect
sensitive information. Regulations like HIPAA, PCI-DSS, GDPR, and FISMA often mandate the
use of MFA to enhance the security of sensitive data, especially in healthcare, financial services,
and government sectors.
Implementing MFA helps organizations meet these regulatory requirements and avoid fines or
legal consequences for non-compliance.
Examples of MFA in practice:
1. Online Banking: When logging into a bank account, a user may first enter a password (something
they know). Then, the system may require a one-time code sent to the user’s phone via SMS or
generated by an app (something the user has).
2. Workplace Access: An employee may log in using a password (something they know), then
authenticate with a fingerprint scanner (something they are) or a smart card (something they have)
for additional verification.
3. Cloud Services: A user accesses a cloud service with a password (something they know) and then
confirms their identity via a push notification to their phone (something they have) or a hardware
token.
4. Biometric Authentication
Biometric authentication uses unique physical characteristics of the user to verify their identity. Common
biometric methods include:
Fingerprint scanning
Facial recognition
Iris or retina scanning
Voice recognition
5. Token-based Authentication
Token-based authentication involves using a hardware or software token that generates a time-sensitive
code. This is commonly used in systems like online banking or corporate VPNs. The token generates a
unique, temporary code that changes every few seconds, and this code is entered alongside the user's
password.
Examples include hardware tokens such as RSA SecurID, and software authenticator apps (e.g., Google
Authenticator) that generate time-based one-time passwords (TOTP). A minimal TOTP sketch follows.
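A minimal Python sketch of a time-based one-time password in the spirit of RFC 6238, as used by authenticator apps and hardware tokens; the Base32 secret below is an illustrative test value.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)   # time-based counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # a 6-digit code that changes every 30 seconds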
6. Certificate-based Authentication
This method uses digital certificates (essentially cryptographic keys) to authenticate users or systems. It’s
often used in scenarios requiring high security, such as securing communication between servers or
identifying users in corporate environments.
Certificates are issued by a trusted certificate authority (CA), and they work based on public-key
cryptography. They are harder to steal or forge compared to traditional passwords.
7. Single Sign-On (SSO)
SSO is a system that allows users to authenticate once and gain access to multiple systems without needing
to re-enter their credentials. It's commonly used in large organizations or applications with many
integrated systems, such as Google or Microsoft accounts.
SSO improves user experience and reduces the need to remember multiple passwords, but it can introduce
risks if the initial authentication is compromised.
8. Knowledge-Based Authentication (KBA)
This method asks users to answer specific security questions, such as "What is your mother's maiden
name?" or "What was your first pet’s name?". While it provides an additional layer of security, KBA is
often considered weak because the answers can be easily guessed or found through social engineering.
Access control and authentication are both fundamental components of a security framework, but they
serve different roles while working together to safeguard systems, applications, and sensitive data.
Understanding how these two concepts relate and complement each other is crucial for building a robust
security system.
Authentication is the process of verifying the identity of a user, device, or system to ensure that they are
who they claim to be. It is typically the first step in a security framework and focuses on identity
validation. The main goal of authentication is to confirm that the entity requesting access is authorized to
do so.
Password-based authentication
Two-factor or multifactor authentication (MFA)
Biometric verification (fingerprint, facial recognition)
Smart cards or hardware tokens
Access Control determines what authenticated users or systems are allowed to do once their identity is
confirmed. It is the process of managing and regulating access to resources based on policies,
permissions, or roles that define the level of access granted to different entities.
After a user’s identity is authenticated, access control policies are applied to decide what the
user can or cannot do. These policies are usually based on factors like:
o User roles (e.g., administrator, employee, guest)
o Groups (e.g., finance department, IT team)
o Resource types (e.g., file systems, applications, databases)
o Attributes (e.g., time of access, location)
Access control decisions are typically enforced using systems like Role-Based Access Control
(RBAC), Discretionary Access Control (DAC), or Mandatory Access Control (MAC).
Role-Based Access Control (RBAC): Assigning access based on roles (e.g., an admin has full
access, while a regular user has limited access).
Attribute-Based Access Control (ABAC): Access is granted based on attributes (e.g., the time
of day or the location of the user).
Discretionary Access Control (DAC): The owner of a resource (e.g., a file) decides who can
access it.
Mandatory Access Control (MAC): Access to resources is regulated by system-wide policies,
regardless of user preferences.
Imagine a corporate network where an employee logs in to access sensitive financial data:
Authentication: The employee enters their username and password to log into the system. The
system validates the credentials and confirms their identity (e.g., via a second factor like an OTP
or biometric scan, in the case of MFA).
Access Control: Once the employee is authenticated, the access control system checks their role
(e.g., “Finance Manager”) and determines which files or resources they can access. A finance
manager might be allowed to view financial reports but might not have permission to modify
employee payroll data, while a system administrator could have unrestricted access.
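A minimal Python sketch of this two-step flow (authenticate first, then apply role-based access control); the user, role, and permission names are illustrative assumptions, and a real system would store password hashes rather than plain text.

# Step 1: authentication (who are you?); Step 2: access control (what may you do?).
USERS = {"priya": {"password": "correct-horse", "role": "finance_manager"}}
ROLE_PERMISSIONS = {
    "finance_manager": {"view_financial_reports"},
    "system_admin": {"view_financial_reports", "modify_payroll"},
}

def authenticate(username, password):
    user = USERS.get(username)
    if user and user["password"] == password:    # real systems compare password hashes instead
        return user["role"]
    return None

def authorize(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())

role = authenticate("priya", "correct-horse")
print(authorize(role, "view_financial_reports"))   # True: allowed by the role
print(authorize(role, "modify_payroll"))           # False: outside the role's permissions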
************************************************************************************
Risk assessments are a critical component of security planning because they help organizations identify,
evaluate, and prioritize potential risks and vulnerabilities. By understanding these risks, security planners
can create more effective strategies to protect assets, data, and operations.
Here are a few key reasons why risk assessments are important in security planning:
1. Identifying Vulnerabilities and Threats: Risk assessments help uncover potential threats (e.g.,
cyber attacks, natural disasters, insider threats) and vulnerabilities (e.g., outdated software,
physical security gaps) that could compromise security. By identifying these, an organization can
address them before they become serious problems.
2. Prioritizing Risks: Not all risks are equally damaging. Risk assessments allow organizations to
evaluate the likelihood and impact of different risks, helping them prioritize actions. This ensures
that limited resources are allocated to addressing the most critical threats first.
3. Cost-Effective Security Measures: Risk assessments help balance the cost of security measures
with the level of risk. By understanding the specific risks, security planners can design cost-
effective measures that reduce risks to an acceptable level without overspending on unnecessary
protections.
4. Regulatory Compliance: Many industries have regulatory requirements (e.g., GDPR, HIPAA,
PCI-DSS) that mandate risk assessments to ensure compliance with security standards. Conducting
these assessments is essential for avoiding penalties and protecting sensitive data.
5. Improving Response and Recovery Plans: Through risk assessments, organizations can identify
scenarios where response and recovery plans need to be developed or strengthened. Knowing
which risks are most likely to occur allows for more focused and actionable contingency plans.
6. Proactive Risk Management: Risk assessments shift an organization's approach from reactive to
proactive security. By anticipating potential issues, organizations can put measures in place that
prevent or mitigate risks before they escalate into actual incidents.
7. Safeguarding Reputation and Trust: Security breaches can damage an organization’s reputation
and erode customer trust. By identifying and addressing risks early, organizations protect their
reputation and the trust of stakeholders.
8. Continuous Improvement: Risk assessments are not one-time events. They should be conducted
regularly to adapt to new threats and vulnerabilities. This continuous cycle of evaluation and
adjustment ensures that the security plan evolves alongside the changing risk landscape.
Steps in the security risk management process:
1) Risk Identification: Potential threats and vulnerabilities that could affect the organization's assets are identified.
2) Risk Assessment: The likelihood and potential impact of each identified risk are evaluated.
3) Risk Analysis: Based on the assessment, risks are analyzed in terms of priority and criticality. This
helps identify the most significant risks that should be addressed as a priority.
4) Risk Mitigation: Strategies and measures are developed to mitigate the identified risks. This includes
implementing security controls, employee training, and security policies, among other actions.
5) Monitoring and Review: Security risk management is an ongoing process. Regular monitoring of risks
and review of mitigation strategies is essential to ensure their continuous effectiveness. Changes in the
threat landscape and organizational assets may require adjustments to security measures.
Risk management addresses threats to the organization's assets and operations as a whole, while
vulnerability management targets weaknesses in systems, processes, or assets to minimize the
likelihood of exploitation.
Security Risk Management is the process of identifying, assessing, prioritizing, and mitigating risks to
protect an organization’s assets, including its people, data, infrastructure, and reputation. This process
ensures that security resources are allocated efficiently to minimize risks and respond to threats
appropriately. It is a systematic approach to safeguarding an organization against potential risks that
could compromise its operations.
Here’s an overview of how organizations typically identify, assess, and prioritize security risks:
1. Risk Identification
Risk identification is the first step in the security risk management process. It involves detecting and
understanding potential security threats and vulnerabilities that could impact the organization. This step
requires a thorough review of internal and external environments to spot risks.
How Organizations Identify Security Risks:
Asset Inventory: Organizations begin by identifying and cataloging all assets, such as hardware,
software, data, intellectual property, and human resources. Knowing what needs to be protected is the
first step in identifying risks.
Threat Identification: This involves identifying potential threats that could exploit vulnerabilities,
such as cyber attacks (e.g., hacking, phishing), natural disasters (e.g., floods, earthquakes), human
threats (e.g., insider threats, sabotage), and technical failures (e.g., server crashes).
Vulnerability Assessment: Organizations evaluate their systems, processes, and operations to
identify vulnerabilities that could be exploited by the identified threats. This includes weaknesses in
technology (e.g., outdated software), physical security (e.g., access controls), and human factors (e.g.,
lack of training).
Historical Data & Trends: By analyzing past incidents (e.g., past data breaches, security failures),
organizations can identify recurring threats and emerging risks.
External Intelligence: Organizations often rely on external sources such as threat intelligence
services, industry reports, and regulatory bodies to identify risks that may affect them. Cyber security
frameworks (e.g., NIST, ISO 27001) can also provide insight into common threats and vulnerabilities.
2. Risk Assessment
Once risks are identified, organizations assess the potential impact and likelihood of those risks
materializing. Risk assessment helps organizations understand the severity of threats and their potential
consequences on the organization.
How Organizations Assess Security Risks:
Risk Probability (Likelihood): This involves estimating how likely a given risk or threat is to occur.
Factors like the historical frequency of the threat, the sophistication of the adversary, and the
organization's vulnerability are considered to estimate likelihood.
Risk Impact (Consequence): This step evaluates the potential consequences if a specific risk occurs.
The impact could be financial (e.g., cost of recovery from a data breach), operational (e.g., business
disruption), legal (e.g., regulatory fines), reputational (e.g., loss of customer trust), or physical (e.g.,
injury to employees).
Risk Quantification: Some organizations use a quantitative approach (e.g., assigning numerical
values to likelihood and impact) to calculate the overall risk score. Others use a qualitative approach,
categorizing risks as high, medium, or low. Tools like risk matrices (likelihood vs. impact) are often
used in this process.
Vulnerability and Threat Analysis: Security teams analyze the vulnerabilities within systems,
networks, and processes to determine which are most likely to be exploited. The organization may
also consider the potential adversaries (e.g., hackers, insiders) and their capability to exploit
weaknesses.
Scenario Modeling: Organizations may run simulations or "what-if" scenarios to understand how
risks might manifest. This can involve penetration testing, vulnerability scans, or red teaming
(simulating attacks to identify weaknesses).
3. Risk Prioritization
After assessing the risks, organizations need to prioritize them based on the level of threat they pose.
Prioritization helps ensure that resources are directed towards addressing the most critical risks first.
How Organizations Prioritize Security Risks:
Risk Matrix: A risk matrix is a common tool for prioritizing risks. Risks are plotted on a grid based
on their likelihood and impact. Risks in the high-likelihood and high-impact quadrant are prioritized
for immediate action, while lower-priority risks may be addressed later or accepted if their impact is
minimal.
Example: a high-likelihood, high-impact risk (such as ransomware delivered through phishing) is scheduled
for immediate treatment, while a low-likelihood, low-impact risk (such as a minor website defacement) may
be accepted or deferred; a small scoring sketch follows.
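A minimal Python sketch of such likelihood-times-impact scoring, with assumed ratings on a 1 to 5 scale; the risks and thresholds are illustrative.

# Each risk is rated 1-5 for likelihood and impact; score = likelihood * impact.
risks = [
    {"name": "Ransomware via phishing", "likelihood": 4, "impact": 5},
    {"name": "Data-centre flooding",    "likelihood": 1, "impact": 5},
    {"name": "Website defacement",      "likelihood": 2, "impact": 2},
]

def priority(score):
    return "High" if score >= 15 else "Medium" if score >= 8 else "Low"

for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    print(f'{risk["name"]:<26} score={score:>2}  priority={priority(score)}')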
4. Risk Mitigation
After prioritizing risks, the next step in the security risk management process is risk mitigation. This
involves taking steps to either reduce the likelihood or impact of identified risks.
Mitigation Strategies: common approaches include risk avoidance, reduction, sharing (transfer), and
retention (acceptance); these are described in detail under Risk Mitigation below.
5. Continuous Monitoring and Review
Security risk management is an ongoing process. As new threats emerge and the business environment
changes, risk assessments must be updated regularly.
Ongoing Risk Monitoring: Continuously monitor the effectiveness of security controls and track
evolving threats.
Security Audits: Regular security audits and assessments help ensure that the mitigation measures
remain effective and that new risks are identified and addressed.
Incident Response: If a risk materializes, having a clear incident response plan helps mitigate the
impact and return to normal operations as quickly as possible.
Risk Mitigation
Risk mitigation refers to the process of identifying, assessing, and taking steps to reduce or eliminate the
potential impact of identified risks on an organization. It involves developing strategies and implementing
measures that either prevent a risk from occurring or minimize its impact if it does. Effective risk
mitigation is crucial to maintaining security, operational efficiency, and overall resilience.
Here are the key elements and different strategies for risk mitigation:
1. Risk Avoidance
Definition: This strategy involves eliminating a risk altogether by changing the way the organization
operates. It’s about avoiding activities or actions that could lead to certain risks.
Example: If there’s a risk associated with using a particular software because of known vulnerabilities,
the organization might switch to a more secure alternative to avoid the risk.
2. Risk Reduction
Definition: Risk reduction focuses on taking actions to reduce the likelihood or impact of a risk. This
is often the most common approach to risk mitigation.
Example: Implementing stronger cyber security measures like firewalls, encryption, and multi-factor
authentication (MFA) to reduce the chances of a data breach.
3. Risk Sharing
Definition: This involves transferring the risk to a third party. Often this is done through insurance or
outsourcing. The idea is that the third party assumes some or all of the risk.
Example: Purchasing cyber insurance to cover financial losses in the event of a breach or outsourcing
certain business operations to a provider with specialized risk management.
4. Risk Retention
Definition: This strategy involves accepting the risk when the cost of mitigation is higher than the
cost of the risk itself or when the likelihood or impact is minimal.
Example: A small organization might choose to accept the risk of a minor security breach, choosing
not to invest in costly preventative measures if the potential damage is low.
5. Contingency Planning
Definition: Having a plan in place for responding to risks if they materialize. This can involve
emergency response, disaster recovery plans, and business continuity plans.
Example: Developing an incident response plan to quickly address and mitigate the effects of a cyber
security breach, ensuring rapid recovery and minimal impact.
6. Training and Awareness
Definition: Educating employees and stakeholders about the risks they may face and the best practices
for mitigating them.
Example: Regularly training staff on phishing attack recognition, data protection practices, and proper
handling of sensitive information to reduce human error as a risk factor.
7. Monitoring and Auditing
Definition: Continuously monitoring risks and the effectiveness of mitigation measures. Auditing
systems, processes, and performance helps identify new risks and check the adequacy of existing
mitigation strategies.
Example: Regular security audits and vulnerability scanning to identify new risks and ensure that
current security measures are still effective.
8. Implementing Safeguards
Definition: Taking technical, physical, and administrative measures to reduce the likelihood of risks
occurring.
Example: Installing firewalls, intrusion detection systems, and securing physical premises to prevent
unauthorized access.
9. Redundancy and Backups
Definition: Having backup systems or duplicate resources in place to continue operations in case of
failure or disaster.
Example: Regularly backing up critical data and ensuring that there are redundant servers or systems
to switch to in case the primary system fails.
************************************************************************************
Security policies and procedures are essential components of an organization’s overall security
framework. They establish guidelines for protecting the organization’s assets—such as data, personnel,
and physical infrastructure—against risks and threats. These policies and procedures help ensure that
security is consistently maintained across the organization and provide a structured approach to
managing risks.
Security Policies
A security policy is a high-level document that outlines the principles, rules, and goals an organization
follows to ensure its security practices align with business objectives and regulatory requirements. These
policies are generally broad and provide a framework for more specific procedures.
Key components of a security policy:
1. Scope and Purpose: The policy defines the scope of security within the organization and explains
the purpose of maintaining security practices (e.g., protecting sensitive information, maintaining
business continuity).
2. Roles and Responsibilities: The policy clarifies who is responsible for enforcing and adhering to
security measures. This includes the roles of security officers, IT staff, management, and
employees.
3. Compliance Requirements: The policy addresses compliance with relevant laws, regulations, and
industry standards (e.g., GDPR, HIPAA, PCI-DSS). It outlines the organization’s commitment to
meeting legal and regulatory requirements.
4. Access Control and User Authentication: The policy defines how users will be authenticated,
the use of passwords, multi-factor authentication (MFA), and the access control measures to ensure
only authorized individuals can access specific data and systems.
5. Data Protection: It includes guidelines on how data should be classified, handled, stored,
transmitted, and disposed of to maintain confidentiality and integrity.
6. Incident Response: A high-level framework for responding to security incidents, including how
they should be reported, assessed, contained, and managed.
7. Network and System Security: This includes policies on firewalls, intrusion detection/prevention
systems, and other network security tools.
8. Physical Security: Policies regarding the physical security of organizational assets, such as
securing offices, data centers, and equipment from unauthorized access or theft.
Common types of security policies:
1. Information Security Policy: Describes the overall information security goals and principles,
focusing on protecting organizational data and IT resources.
2. Acceptable Use Policy (AUP): Outlines what is and isn’t acceptable in terms of employee usage
of company assets (e.g., computers, internet, email). It also typically addresses behavior like
downloading unauthorized software or accessing inappropriate websites.
3. Incident Response Policy: Specifies how the organization will respond to and handle security
incidents, such as data breaches or cyberattacks.
4. Access Control Policy: Defines how users access the network, systems, and data, and establishes
authentication methods, password standards, and access levels.
5. Data Privacy and Protection Policy: Focuses on protecting personal and sensitive data, outlining
how data is to be collected, stored, and protected to ensure privacy and compliance with
regulations.
6. Disaster Recovery/Business Continuity Policy: Details procedures for ensuring that the
organization can continue functioning in the event of a disaster, whether natural or man-made.
7. Remote Work Policy: Establishes the security protocols and practices for employees working
remotely, including device security, VPN use, and secure communication tools.
Steps involved in managing and mitigating security risks through effective policy implementation.
The key steps involved in managing and mitigating security risks through effective policy implementation:
1. Risk Assessment
Definition: Identifying and evaluating the risks that may impact the organization’s assets,
operations, and information systems.
Actions:
o Identify Threats and Vulnerabilities: Review existing systems, processes, and
infrastructure to identify potential threats (e.g., cyberattacks, natural disasters) and
vulnerabilities (e.g., outdated software, lack of encryption).
o Assess Impact and Likelihood: Estimate the potential impact of each risk and the
likelihood of its occurrence. This helps prioritize which risks need immediate attention.
o Risk Evaluation: Assign risk levels (e.g., high, medium, low) to different threats based
on their potential impact and probability.
o Document Findings: Maintain detailed records of the identified risks and their
evaluation.
2. Policy Development
Definition: Creating security policies that establish guidelines for managing risks and defining
roles and responsibilities.
Actions:
o Define Objectives: Clearly state the goals of the security policies, such as protecting
sensitive data, ensuring compliance, and preventing unauthorized access.
o Align with Business Needs: Ensure the security policies support the organization's
business goals and operations.
o Incorporate Best Practices: Base policies on industry standards, regulatory
requirements, and recognized frameworks (e.g., NIST, ISO 27001, GDPR).
o Address Specific Risks: Develop policies that address identified risks, such as access
control, encryption, incident response, disaster recovery, and data protection.
3. Policy Communication and Training
Definition: Ensuring that all stakeholders understand the security policies and are aware of their
roles in mitigating risks.
Actions:
o Training and Awareness Programs: Conduct regular training sessions for employees,
contractors, and other stakeholders to ensure they understand security policies and how to
follow them.
o Clear Communication: Distribute policies clearly through intranets, handbooks, and
emails to ensure everyone is on the same page.
o Ongoing Awareness: Regularly remind employees about security best practices through
newsletters, posters, and continuous training.
4. Policy Implementation
Definition: Putting security policies into action by configuring systems, networks, and processes
according to the defined guidelines.
Actions:
o Deploy Security Controls: Implement technical and administrative controls (e.g.,
firewalls, access control systems, data encryption) to enforce security policies.
o Role-Based Access Control (RBAC): Ensure that access to critical systems is based on
users’ roles, ensuring the principle of least privilege.
o Automate Security Measures: Where possible, use automation tools to enforce policies,
such as security patching, monitoring, and incident alerts.
5. Monitoring and Enforcement
Definition: Continuously monitoring systems and policies to ensure that they are being followed
and to detect any deviations or potential risks.
Actions:
o Real-time Monitoring: Use monitoring tools and software to detect suspicious activities,
unauthorized access, or policy violations in real-time.
o Audit Logs: Maintain logs of user activities, system access, and changes to
configurations to track compliance and investigate potential incidents.
o Enforce Compliance: Regularly audit adherence to security policies and take corrective
actions in case of policy violations.
6. Risk Mitigation and Response
Definition: Reducing or eliminating identified security risks by taking corrective actions based on
monitoring and risk assessments.
Actions:
o Apply Mitigation Strategies: Once risks are identified, apply appropriate mitigation
measures such as system patches, network segmentation, encryption, and user access
controls.
o Incident Response Plan: Develop and implement an incident response plan to quickly
address and resolve security breaches or failures.
o Backup and Recovery: Implement backup strategies and disaster recovery plans to ensure
business continuity in case of data loss or system compromise.
7. Policy Review and Update
Definition: Continuously reviewing and updating security policies to ensure their relevance and
effectiveness in the face of changing risks and technology.
Actions:
o Policy Audits: Regularly review and audit security policies to ensure they address the latest
threats and comply with updated regulations.
o Feedback Loop: Gather feedback from employees, security teams, and other stakeholders to
improve policies.
o Update Policies: Revise and update security policies based on emerging threats, new
technologies, and changes in business operations.
8. Regulatory and Legal Compliance
Definition: Ensuring that security policies align with legal, regulatory, and industry standards.
Actions:
o Stay Informed on Legal Changes: Monitor changes in laws and regulations (e.g., GDPR,
HIPAA, PCI-DSS) to ensure compliance.
o Conduct Compliance Audits: Regularly check if security policies and practices align with
required regulations and industry standards.
o Documentation: Maintain thorough records of compliance audits, risk assessments, and
policy reviews for accountability and legal purposes.
9. Continuous Improvement
Definition: Building a culture of continuous improvement where the security policies and practices
evolve based on new challenges, technology, and organizational needs.
Actions:
o Post-Incident Analysis: After a security incident, conduct a thorough analysis to identify
weaknesses in policies or practices, and implement changes.
o Security Drills: Regularly conduct security drills and tests to assess how well policies hold
up under real-world conditions.
o Adopt New Technologies: Stay updated with emerging security technologies, tools, and
methods, and incorporate them into the security strategy as needed.
Security Procedures
Security procedures are the specific, detailed steps or actions employees and other personnel must follow
to implement the guidelines set forth in the security policies. They are more operational and practical than
policies and are typically developed to ensure consistent, secure practices across the organization.
Policies set the what and why: They define the goals and principles of the organization’s security
framework, explaining what should be done to protect assets and information.
Procedures define the how: They provide the specific steps required to implement the policies.
Without procedures, policies would be theoretical and ineffective because they wouldn’t translate
into action.
**********************************************************************************
A security incident is any event or occurrence that compromises the confidentiality, integrity, or
availability of an organization’s information, systems, or networks. It can involve both malicious and non-
malicious actions that threaten the security of data or systems. Security incidents can range from small
security breaches to major cyber attacks, and they often require immediate response to prevent further
damage.
1. Data Breaches:
Definition: Unauthorized access to, or disclosure of, sensitive, protected, or confidential data.
Example: An attacker exfiltrates customer records from a company database.
2. Malware Attacks:
Definition: Software that is intentionally designed to cause damage, disrupt operations, or gain
unauthorized access to systems.
Example: Viruses, ransomware, Trojans, spyware, and worms.
3. Phishing Attacks:
Definition: Fraudulent attempts to steal sensitive information (e.g., login credentials, financial
information) by masquerading as a trustworthy entity, often via email or websites.
Example: An attacker sends a fake email from a bank asking the recipient to click a link and
provide personal information.
4. Denial-of-Service (DoS) Attacks:
Definition: An attack where the target system or network is overwhelmed with traffic or requests,
rendering it unusable to legitimate users.
Example: A Distributed Denial-of-Service (DDoS) attack flooding a website with requests to
crash the server.
5. Insider Threats:
Definition: Security risks that originate from within the organization, such as employees or contractors
misusing their authorized access.
Example: A disgruntled employee copies confidential files before leaving the company.
6. Unauthorized Access:
Definition: Gaining access to systems, applications, or data without permission.
Example: An attacker logs in to an internal system using stolen credentials.
7. Privilege Escalation:
Definition: Exploiting a flaw or misconfiguration to obtain higher-level permissions than originally granted.
Example: A normal user account is used to gain administrator rights through a software vulnerability.
8. System Intrusions:
Definition: Attacks where an attacker successfully infiltrates a system to steal information, install
malware, or disrupt operations.
Example: Exploiting a vulnerability in an application to gain access to a server.
A security breach refers to any unauthorized access, disclosure, or destruction of data or systems. It
occurs when an attacker or malicious actor gains access to an organization's sensitive information,
infrastructure, or network resources, often exploiting vulnerabilities.
An Incident Response Plan (IRP) is a well-defined and structured approach to identifying, managing,
and mitigating security incidents that could affect an organization. It outlines the procedures and roles that
should be followed when a security incident occurs to minimize damage, recover quickly, and prevent
similar incidents in the future.
1. Preparation:
o Establish and train an Incident Response Team (IRT) with clearly defined roles and
responsibilities.
o Ensure tools, resources, and documentation (e.g., contact lists, incident reports) are in place.
o Conduct regular security training and awareness programs.
2. Identification:
o Establish a process to detect and confirm security incidents (e.g., monitoring systems,
analyzing alerts).
o Categorize the severity and impact of the incident to determine the appropriate response.
3. Containment:
o Take immediate actions to limit the scope and spread of the incident (e.g., isolating affected
systems, blocking malicious traffic).
o Prioritize containment actions based on the type and severity of the incident.
4. Eradication:
o Identify and eliminate the root cause of the incident (e.g., removing malware, closing
vulnerabilities).
o Ensure that systems are clean and secure before moving forward.
5. Recovery:
o Restore systems and services to normal operations, ensuring that they are fully patched and
secured.
o Monitor systems closely during this phase to ensure no further threats remain.
6. Post-Incident Review:
o Analyze the incident to determine what went well, what could be improved, and how similar
incidents can be prevented.
o Update policies, procedures, and defenses based on the lessons learned.
Security incident response teams (SIRTs) manage and respond to security breaches.
Security Incident Response Teams (SIRTs) play a crucial role in managing and responding to security
breaches to minimize damage, recover operations, and prevent future incidents. These teams are tasked
with handling security incidents efficiently and systematically to ensure the confidentiality, integrity, and
availability of an organization's systems and data.
1. Preparation
Proactive Measures: Before any security breach occurs, SIRTs work to prepare by developing plans,
tools, and procedures for responding to security incidents. This preparation phase is essential for
ensuring a swift and effective response.
Incident Response Plan: SIRTs typically maintain a detailed incident response plan that outlines
the procedures to follow in case of a breach. The plan covers roles, responsibilities, communication
protocols, and step-by-step processes for different types of incidents.
Training and Awareness: SIRTs conduct regular training sessions and simulations (often called
tabletop exercises) to ensure that team members and relevant stakeholders are prepared for real-world
incidents. Employees are also trained to recognize potential threats like phishing attacks or suspicious
activities.
Monitoring Tools: They deploy monitoring systems, such as intrusion detection systems (IDS),
intrusion prevention systems (IPS), SIEM (Security Information and Event Management) tools,
and network monitoring software, to detect anomalies or malicious activity in real time.
2. Identification
Detection of the Incident: The first step in responding to a security breach is identifying that an
incident has occurred. SIRTs rely on automated tools and manual alerts from users, logs, or monitoring
systems to detect potential breaches.
o Signs of a Breach: Indicators of compromise (IoCs), such as unusual network traffic,
unauthorized login attempts, malware alerts, or system anomalies, may be detected during this
phase.
Initial Triage: Once an incident is identified, SIRTs assess the nature of the event (whether it's an
attack, malware infection, or breach) and determine the severity. This is known as the triage process.
Verification: SIRTs validate the incident to determine whether it is a legitimate security breach or a
false alarm. False positives can waste resources, so accurate identification is crucial.
3. Containment
Limit the Impact: Containment is the process of preventing the attack from spreading further within
the network or system. Depending on the nature of the breach, SIRTs may take different containment
actions:
o Isolating Affected Systems: Disconnecting infected machines or segments of the network to
prevent the spread of malware or unauthorized access.
o Blocking Malicious Activity: Blocking or disabling certain services or communication
channels that are being used by attackers, such as IP addresses or user accounts.
o Segregating Network Traffic: Using firewalls or other network controls to limit traffic
between compromised systems and critical assets.
4. Eradication
Removing the Threat: After containment, SIRTs focus on eradicating the root cause of the incident.
This may involve:
o Removing Malware: If the breach involves malware, SIRTs perform a deep scan to identify
and remove malicious software.
o Closing Vulnerabilities: If the breach was facilitated by a vulnerability (such as an unpatched
system or misconfigured service), the team works to patch or fix the security hole.
o Eliminating Backdoors: In cases where attackers have established backdoors (e.g., through
unauthorized access points or malicious scripts), these must be identified and removed.
5. Recovery
Restoring Operations: After eradication, SIRTs work to recover from the breach and restore normal
operations. This may involve:
o Restoring Data: Recovering data from backups if any was lost or compromised during the
breach.
o Rebuilding Systems: In cases where systems or configurations were severely compromised,
the SIRT may rebuild affected systems from a clean state.
o Testing Systems: Ensuring that affected systems are fully functional and secure before
bringing them back online. This includes testing for any remaining signs of the incident.
Gradual Restoration: Recovery is typically done in phases to ensure that the systems are stable and
secure before full restoration occurs.
6. Lessons Learned
Post-Incident Review: After the incident has been managed and systems are back to normal, SIRTs
conduct a post-mortem analysis to understand what happened and how it can be prevented in the
future. This involves:
o Root Cause Analysis: Determining the cause of the breach and how the attackers were able to
exploit vulnerabilities or bypass security controls.
o Effectiveness of Response: Evaluating the performance of the incident response plan and
identifying areas for improvement.
o Reporting: Creating detailed reports on the incident, including timelines, the impact of the
breach, and the steps taken to mitigate it. This documentation is used for future reference and
to inform stakeholders.
Updating Policies and Procedures: Based on the lessons learned, SIRTs may revise security policies,
incident response plans, and preventive measures to address gaps and enhance future responses.
o Improvement of Security Posture: Additional security measures, such as new monitoring
tools, tighter access controls, and enhanced training programs, may be implemented to prevent
similar incidents.
7. Communication
Internal and External Communication: Effective communication is crucial throughout the incident
response process.
o Internal Communication: Keeping stakeholders (e.g., IT staff, management, and employees)
informed about the progress of the response and any necessary actions.
o External Communication: In cases of significant breaches (e.g., data leaks, customer
information exposure), organizations may need to notify customers, regulators, and other
affected parties. This is especially important for compliance with data protection laws (e.g.,
GDPR, CCPA).
Media and Public Relations: For high-profile incidents, managing external communication to protect
the organization’s reputation is critical. SIRTs work with public relations teams to provide accurate
and consistent messaging.
Importance of network security and the role of firewalls, intrusion detection systems, and other
network security measures.
Several tools and systems work together to protect an organization's network. Here are some of the
essential components of network security:
1. Firewalls
Role: A firewall acts as a barrier between trusted internal networks and untrusted external networks
(e.g., the internet). It monitors and controls incoming and outgoing traffic based on predetermined
security rules.
Functionality:
o Packet Filtering: Firewalls inspect network packets and determine whether to allow or block
traffic based on source, destination, and protocol.
o Stateful Inspection: Unlike simple packet filters, firewalls with stateful inspection track the
state of active connections and make decisions based on context, ensuring more accurate
protection.
o Proxy Services: Some firewalls function as proxies that forward requests from users and return
responses from the web, adding an additional layer of security by hiding internal network
addresses.
Importance: Firewalls are one of the first lines of defense in network security, protecting against
unauthorized access, preventing cyberattacks, and limiting the exposure of internal systems to external
threats.
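A minimal Python sketch of first-match packet filtering with a default-deny rule; the addresses, ports, and rules are illustrative assumptions.

from ipaddress import ip_address, ip_network

# Rules are evaluated top to bottom; the first match wins, and anything unmatched is dropped.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 443, "proto": "tcp"},
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 22,  "proto": "tcp"},
    {"action": "deny",  "src": "0.0.0.0/0",  "dst_port": 22,  "proto": "tcp"},
]

def filter_packet(src_ip, dst_port, proto):
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule["src"])
                and dst_port == rule["dst_port"]
                and proto == rule["proto"]):
            return rule["action"]
    return "deny"   # default deny

print(filter_packet("10.1.2.3", 22, "tcp"))       # allow (internal SSH)
print(filter_packet("203.0.113.9", 22, "tcp"))    # deny  (external SSH blocked)
print(filter_packet("203.0.113.9", 8080, "tcp"))  # deny  (no matching rule)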
2. Intrusion Detection and Prevention Systems (IDS/IPS)
Role: An Intrusion Detection System (IDS) monitors network traffic for signs of malicious activity
or policy violations, while an Intrusion Prevention System (IPS) takes this a step further by actively
preventing identified threats.
IDS Functionality:
o Signature-Based Detection: IDS systems can detect known threats by comparing network
traffic against a database of signatures (known attack patterns).
o Anomaly-Based Detection: Some IDS use machine learning or statistical analysis to identify
deviations from typical network behavior, helping detect new or previously unknown threats.
IPS Functionality:
o Real-Time Blocking: Unlike IDS, which only detects threats, IPS actively blocks malicious
traffic once identified, preventing harm before it reaches critical systems.
o Automated Response: IPS systems can automatically take corrective actions, such as blocking
malicious IP addresses, shutting down infected systems, or alerting administrators.
Importance: IDS and IPS are critical for detecting and responding to threats in real time, helping to
prevent data breaches, malware infections, and other attacks.
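A minimal Python sketch of signature-based detection over log lines; the signatures and log entries are illustrative assumptions, not rules from a real IDS.

import re

# Log lines are matched against a small set of known attack patterns (signatures).
SIGNATURES = {
    "SQL injection attempt": re.compile(r"(union\s+select|or\s+1=1)", re.IGNORECASE),
    "Path traversal":        re.compile(r"\.\./\.\./"),
}

def inspect(log_line):
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

logs = [
    "GET /index.php?id=1 UNION SELECT password FROM users",
    "GET /images/../../etc/passwd",
    "GET /home",
]
for line in logs:
    hits = inspect(line)
    print("ALERT" if hits else "ok   ", line, hits)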
3. Virtual Private Networks (VPNs)
Role: A VPN is used to create a secure, encrypted tunnel over a public network (e.g., the internet) for
users to access an organization’s internal network remotely. This is essential for protecting data
transmitted between the remote user and the network.
Functionality:
o Data Encryption: VPNs encrypt data before it is transmitted, ensuring confidentiality even if
the data is intercepted.
o Authentication: VPNs require users to authenticate before they can access the network,
ensuring only authorized personnel can connect remotely.
Importance: VPNs are crucial for secure remote access, particularly for organizations with employees
working from various locations. They help maintain the security of sensitive data in transit.
4. Network Access Control (NAC)
Role: Network Access Control (NAC) is a security solution that enforces policies for devices trying
to connect to the network. It ensures that only compliant, authorized devices (e.g., those with up-to-
date antivirus software) can access the network.
Functionality:
o Endpoint Health Checks: NAC solutions verify that devices meet security standards (e.g.,
updated patches, antivirus signatures) before granting access.
o Access Restrictions: NAC can restrict access to certain resources based on device type, user
role, or location.
Importance: NAC helps protect the network from threats originating from compromised devices and
ensures that only trusted, compliant endpoints can interact with critical network resources.
Concept of disaster recovery and business continuity planning in the event of a security incident.
Disaster Recovery (DR) is a subset of business continuity planning that focuses specifically on the
restoration of IT systems, data, and infrastructure after a disruptive event. The goal of DR is to minimize
downtime, data loss, and the impact on business operations by quickly recovering critical IT systems.
Business Continuity Planning (BCP) is broader than disaster recovery. While DR focuses on IT systems,
BCP aims to ensure that the entire organization can continue functioning after a security incident or any
type of disaster. This includes maintaining operations, protecting revenue streams, and supporting
customers while systems or services are being restored.
Incident response life cycle, with real-world examples:
1. Preparation
o Objective: Prepare the organization to effectively respond to security incidents by having the
right tools, processes, and people in place.
o Key Activities:
Develop and maintain an Incident Response Plan (IRP).
Set up monitoring tools, including intrusion detection systems (IDS), firewalls, and
Security Information and Event Management (SIEM) systems.
Conduct regular training and awareness campaigns for staff to recognize phishing
attempts and other common threats.
Regularly test the incident response plan through tabletop exercises and mock
incidents.
o Example: Organizations like Google and Facebook conduct regular security drills and have a
dedicated incident response team in place to ensure a quick and effective response during an
actual security breach. Their preparation helps them quickly identify and respond to incidents.
2. Identification
o Objective: Detect and identify the occurrence of a security incident.
o Key Activities:
Monitor security systems for signs of unusual activity (e.g., unauthorized access,
abnormal data flow).
Analyze alerts from IDS, SIEM, and other monitoring tools to confirm if an incident is
occurring.
Investigate suspicious activities to determine whether they represent a legitimate
security threat.
o Example: In the Equifax breach (2017), attackers exploited an unpatched vulnerability in the
Apache Struts framework to gain access to sensitive data. The breach was identified only after
data exfiltration had occurred, and further investigation revealed that a patch for the
vulnerability had never been applied.
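To illustrate the monitoring activity described in this phase, the following minimal Python sketch counts repeated failed SSH logins per source address; the log file name, the OpenSSH-style log format, and the alert threshold are assumptions for illustration only, not part of any specific product.

import re
from collections import Counter

failed = Counter()
with open("auth.log") as log:                          # assumed log file name and OpenSSH-style format
    for line in log:
        match = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if match:
            failed[match.group(1)] += 1                # count failures per source IP

for ip, count in failed.items():
    if count > 10:                                     # arbitrary threshold chosen for this sketch
        print(f"Possible brute-force activity from {ip}: {count} failed logins")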
3. Containment
o Objective: Limit the impact of the security incident and prevent it from spreading further.
o Key Activities:
Short-term containment: Quickly isolate the affected systems (e.g., disconnecting
infected devices from the network).
Long-term containment: Apply temporary fixes or restrictions to prevent the attacker
from exploiting the vulnerability further.
Ensure that the containment actions don’t disrupt business continuity or cause collateral
damage.
o Example: During the WannaCry ransomware attack (2017), organizations were instructed
to disconnect affected machines from the network to prevent the ransomware from spreading
further. Microsoft had already released a patch to address the vulnerability, and organizations
that had not applied it were more severely impacted. Immediate containment efforts helped
stop the ransomware from spreading to other systems.
4. Eradication
o Objective: Remove the root cause of the incident and eliminate any remnants of the threat.
o Key Activities:
Identify and remove any malware or malicious code from infected systems.
Apply patches, fixes, or configuration changes to eliminate vulnerabilities that were
exploited.
Verify that no backdoors or unauthorized access points remain.
o Example: After the NotPetya attack (2017), which targeted Ukrainian organizations but
spread globally, the affected systems were scrubbed of malware and patches were applied to
eliminate the vulnerability. IT teams worked to completely remove the malicious payloads and
close the exploited vulnerabilities to prevent a recurrence.
5. Recovery
o Objective: Restore and validate the affected systems and services to their normal operations.
o Key Activities:
Restoration of services: Begin restoring systems from clean backups or rebuild them
from scratch.
System testing: Perform thorough testing to ensure systems are functioning as
expected and no threats remain.
Gradual reintegration: Slowly bring systems and services back online while
continuing to monitor for any signs of residual threats or new attacks.
Communication: Inform stakeholders about the recovery progress and any actions
they need to take.
o Example: Following the Sony PlayStation Network (PSN) breach (2011), Sony worked to
restore its compromised services over a period of several weeks. The company ensured that the
exploited vulnerabilities were patched before bringing services back online, and PSN was fully
restored only after Sony was satisfied that no further breaches would occur.
6. Lessons Learned
o Objective: Analyze the incident to identify what worked well and what could be improved in
future responses.
o Key Activities:
Conduct a post-incident review to analyze the timeline of the breach, the response
effectiveness, and the damage caused.
Update the incident response plan based on lessons learned.
Share findings with relevant stakeholders to improve organizational security practices.
Implement measures to prevent similar incidents in the future, such as enhanced
monitoring, additional security controls, and training.
o Example: After the Target breach (2013), where attackers gained access to payment card
data, the company conducted an internal review. It was discovered that the breach could have
been prevented if the company had applied stronger network segmentation and improved the
monitoring of third-party vendor access. The breach led to major changes in Target’s security
practices, including the implementation of EMV (chip) card technology, improved network
segmentation, and enhanced monitoring protocols.
***********************************************************************************
Key topics covered in security awareness training for employees:
1. Phishing Awareness
o Objective: Educate employees about phishing attacks, which often involve emails or messages
that appear legitimate but are designed to steal credentials or install malware.
o Key Topics:
Recognizing suspicious email addresses or links.
Avoiding clicking on unfamiliar attachments or links.
Verifying the sender before responding to requests for sensitive information.
o Example: Training programs might include simulated phishing attacks to test employees'
awareness and reinforce best practices.
2. Password Management
o Objective: Teach employees how to create strong passwords and maintain secure login
practices to prevent unauthorized access to systems.
o Key Topics:
Using unique, complex passwords (e.g., combinations of upper/lowercase letters,
numbers, and symbols).
Implementing two-factor authentication (2FA) when available.
Avoiding password reuse across different accounts.
Using password managers for secure storage.
o Example: Organizations might provide a demonstration on creating strong passwords and
explain how password managers can help keep credentials secure.
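To complement the password guidance above, here is a minimal Python sketch that generates a strong random password using only the standard library; the length of 16 characters and the symbol set are arbitrary choices made for illustration.

import secrets
import string

alphabet = string.ascii_letters + string.digits + "!@#$%^&*"    # upper/lowercase letters, numbers, symbols
password = "".join(secrets.choice(alphabet) for _ in range(16))  # 16 characters drawn from a secure RNG
print(password)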
3. Social Engineering Awareness
o Objective: Help employees recognize social engineering attacks, where attackers manipulate
individuals into divulging confidential information.
o Key Topics:
Understanding the tactics used by attackers, such as pretexting, baiting, or
impersonation.
Asking critical questions when someone requests sensitive information.
Avoiding oversharing personal or company information on social media.
o Example: Employees are taught not to respond to unsolicited phone calls or emails asking for
confidential data, even if they appear to come from colleagues or authority figures.
4. Physical Security
o Objective: Emphasize the importance of protecting physical devices and information from
unauthorized access.
o Key Topics:
Securing workstations (e.g., locking screens when stepping away).
Safeguarding sensitive documents and ensuring they are stored in secure locations.
Implementing secure disposal methods for documents and devices (e.g., shredding
paper or wiping hard drives).
o Example: Employees are instructed to never leave their devices unattended in public spaces
and to ensure physical security of sensitive information.
5. Data Protection and Privacy
o Objective: Teach employees about data protection regulations (e.g., GDPR, CCPA) and how
to handle sensitive and personal data securely.
o Key Topics:
Proper data storage, handling, and sharing procedures.
Compliance with legal and regulatory requirements for data protection.
Recognizing and preventing data leaks or unauthorized access to sensitive information.
o Example: Employees are educated on how to handle personally identifiable information (PII)
and ensure it’s transmitted securely (e.g., using encryption).
6. Secure Use of Devices and Networks
o Objective: Instruct employees on how to securely use personal and company devices and
networks to prevent cyber threats.
o Key Topics:
Avoiding unsecured public Wi-Fi for accessing company data.
Using VPNs (Virtual Private Networks) for remote access.
Installing software updates and patches promptly to avoid vulnerabilities.
o Example: Employees may be instructed on how to set up VPNs or avoid using public networks
for work-related activities.
7. Incident Reporting
o Objective: Ensure employees know how to report suspicious activities or security incidents.
o Key Topics:
How to report phishing emails, suspicious behavior, or security breaches to the security
team.
Understanding the importance of timely reporting.
Familiarizing employees with the organization's incident reporting protocols.
o Example: Employees are provided with contact information for the IT help desk or the incident
response team to quickly escalate any security concerns.
Importance of security awareness training for employees and its impact on reducing security risks.
1. Reduces Human Error
Explanation: A significant portion of security incidents, including data breaches, phishing attacks,
and malware infections, are the result of human mistakes. Employees often unintentionally click
on malicious links, open phishing emails, or fail to follow basic security practices like using weak
passwords.
Impact: Security awareness training directly addresses this vulnerability by educating employees
on how to recognize potential threats and avoid actions that could compromise the organization’s
security.
Example: According to studies, phishing emails are one of the leading causes of data breaches.
When employees are trained to identify phishing attempts, the likelihood of them falling victim to
such attacks is significantly reduced.
2. Makes Employees the First Line of Defense
Explanation: Employees are often the first to encounter cyber threats, whether it’s through email,
suspicious websites, or insecure networks. By educating employees about security risks and
providing them with the knowledge to identify potential threats, they become an organization's
first line of defense.
Impact: A well-trained workforce is better equipped to recognize and report suspicious activity
early, preventing the escalation of a potential security incident.
Example: Employees trained in recognizing phishing emails or suspicious links can prevent
attackers from gaining unauthorized access to the system by reporting such threats to the security
team immediately.
3. Reduces Risky Behavior
Explanation: By teaching employees about the consequences of their actions—such as using weak
passwords, downloading unknown attachments, or sharing sensitive information—organizations
can mitigate the likelihood of security breaches caused by these preventable behaviors.
Impact: Security awareness training helps reduce the occurrence of cybersecurity incidents by
minimizing risky behavior. When employees adhere to best practices, such as strong password
management and caution when interacting with external parties, the organization’s overall
exposure to threats is reduced.
Example: A study from CybSafe found that organizations with regular security awareness training
saw a 50% reduction in successful phishing attacks. By addressing employees' habits and
behaviors, these organizations significantly minimized their exposure to attack.
4. Enhances Employee Understanding of Security Policies and Procedures
Explanation: Security awareness training ensures that employees understand the organization's
security policies, procedures, and their personal role in maintaining security. When employees
know what’s expected of them—such as how to handle sensitive data or how to report suspicious
activity—they’re more likely to follow security protocols correctly.
Impact: Clear communication of security policies and procedures leads to consistent security
practices, reducing the likelihood of accidental breaches. Employees who understand company
policies are more diligent in following them, which strengthens the organization's defense posture.
Example: In a regulated environment like healthcare (HIPAA compliance), employees trained on
handling sensitive patient information are more likely to adhere to data privacy regulations,
preventing accidental leaks or violations.
5. Builds a Security-Conscious Culture
Explanation: Security awareness training fosters a culture of security within the organization.
Employees who are continually reminded of the importance of security are more likely to adopt
security-conscious behaviors and be proactive in safeguarding company data and systems.
Impact: Creating a security-first mindset at all levels of the organization ensures that security isn't
seen as the responsibility of the IT department alone, but as a shared responsibility across all
employees.
Example: In organizations with a strong security culture, employees feel more accountable for
maintaining security, which leads to a collective effort in preventing breaches and mitigating risks.
6. Supports Regulatory Compliance
Explanation: Many industries and organizations are subject to regulations and standards that
require employee training in security awareness. These include GDPR (General Data Protection
Regulation), HIPAA (Health Insurance Portability and Accountability Act), PCI DSS (Payment
Card Industry Data Security Standard), and others.
Impact: Providing employees with regular security awareness training helps organizations comply
with these regulations, avoiding potential fines and legal consequences.
Example: The General Data Protection Regulation (GDPR) mandates that companies educate
employees on data protection practices, including how to handle personal data securely. Regular
training helps ensure compliance with these legal obligations.
7. Reduces Financial Losses
Explanation: Cyberattacks, such as ransomware, data breaches, and financial fraud, can result in
significant financial losses, both in terms of direct costs (e.g., paying ransoms, legal fees) and
indirect costs (e.g., reputational damage, customer trust loss).
Impact: Effective security awareness training can prevent costly security incidents, ultimately
saving the company money. By reducing the likelihood of incidents, organizations avoid the
financial and operational costs associated with breaches.
Example: The WannaCry ransomware attack (2017) caused billions of dollars in damages
worldwide. Organizations that had up-to-date software and well-trained employees were more
likely to avoid infection or minimize the damage caused by the attack.
8. Builds Customer Trust and Protects Reputation
Explanation: Customers and partners expect organizations to protect their personal and sensitive
information. When an organization invests in security awareness training, it demonstrates a
commitment to safeguarding customer data.
Impact: Effective security practices, bolstered by well-trained employees, help to build and
maintain trust with customers, clients, and business partners, protecting the organization’s
reputation.
Example: After a breach, organizations that have demonstrated robust security awareness
programs can reassure customers that they are taking steps to protect their data, which can help
restore trust. Conversely, a breach caused by employee negligence can damage the organization's
reputation.
************************************************************************************
Here’s a step-by-step guide to conducting a vulnerability assessment, along with the tools typically
involved:
1. Planning and Scoping:
Before starting the vulnerability assessment, you need to define the scope, objectives, and methodology
for the assessment. This phase involves identifying which assets will be assessed and determining the
appropriate tools and techniques to use.
Key Activities:
Define the scope: What assets (systems, networks, applications, databases) will be tested? Will it
be a comprehensive assessment or a targeted scan?
Identify critical systems: Determine which assets are mission-critical or store sensitive data and
require more attention.
Establish objectives: Clarify the purpose of the assessment (e.g., regulatory compliance,
improving overall security posture, identifying exposure to external threats).
Tools:
Asset inventory tools: Tools like SolarWinds or ServiceNow help track and maintain an up-to-
date inventory of systems and networks.
Risk assessment frameworks: Standards like NIST or ISO 27001 provide guidelines for
managing vulnerabilities based on risk.
2. Vulnerability Scanning and Discovery:
In this step, automated tools are used to discover assets and scan them for known vulnerabilities.
Key Activities:
Asset discovery: Automatically identify all systems and devices in the network, including
unknown or rogue devices.
Network scanning: Scan networked devices to detect open ports, services, and other potential
vulnerabilities.
Software and configuration scanning: Identify outdated software, misconfigurations, or
missing patches that could expose systems to risk.
Tools:
Nessus: A widely-used tool that provides comprehensive vulnerability scanning of networks and
applications.
Qualys: A cloud-based platform for vulnerability management that offers automated scanning
and detailed reporting.
OpenVAS: An open-source tool for vulnerability scanning and assessment, useful for detecting
network and host vulnerabilities.
Nmap: A network scanning tool that detects open ports, services, and other vulnerabilities on
network devices.
Burp Suite: A tool focused on web application vulnerability scanning, helping identify flaws
like SQL injection and XSS.
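To illustrate the basic idea that scanners such as Nmap automate, the following minimal Python sketch checks whether a few common ports are open on a single host; the target address and port list are placeholders, and a real scanner adds service detection, vulnerability checks, and reporting.

import socket

target = "192.168.1.10"                 # placeholder host; scan only systems you are authorized to test
for port in (22, 80, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)                     # short timeout so closed/filtered ports do not block the scan
    status = "open" if s.connect_ex((target, port)) == 0 else "closed/filtered"
    print(f"Port {port}: {status}")
    s.close()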
3. Vulnerability Analysis:
Once vulnerabilities have been discovered, the next step is to assess their severity, potential impact, and
exploitability. This analysis helps prioritize which vulnerabilities need immediate attention.
Key Activities:
Severity assessment: Assign severity levels to vulnerabilities (e.g., Critical, High, Medium,
Low). This can be done using a scoring system like CVSS (Common Vulnerability Scoring
System).
Risk assessment: Consider factors like the likelihood of exploitation, the impact on business
operations, and the criticality of the affected asset. A high-severity vulnerability affecting a
critical server would take priority over a low-severity issue on a non-critical system.
False positives: Validate the findings and remove false positives (incorrectly identified
vulnerabilities) to avoid wasting resources on issues that don’t exist.
Tools:
CVSS (Common Vulnerability Scoring System): A framework for scoring the severity of
vulnerabilities based on their exploitability and impact.
Risk assessment tools: Tools like RiskWatch and Skybox Security can help assess the overall
risk based on identified vulnerabilities.
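The CVSS v3 qualitative severity bands can be expressed as a simple lookup, as in the Python sketch below; the example score is illustrative.

def cvss_severity(score):
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))   # e.g., an easily exploitable remote code execution flaw -> Critical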
4. Vulnerability Prioritization:
Not all vulnerabilities are equally critical, so they need to be prioritized based on their risk. The priority
level will determine how quickly remediation efforts should be carried out.
Key Activities:
Critical vs. Low priority: Prioritize vulnerabilities that can lead to significant damage, such as
critical vulnerabilities that could lead to data breaches, financial loss, or a complete system
compromise.
Exploitability: Consider whether there are known exploits available for a vulnerability. If an
exploit exists and is widely used, it should be addressed sooner.
Business context: Prioritize based on the business impact. For example, vulnerabilities in
customer-facing systems may need to be patched faster than internal systems.
Tools:
RiskMatrix: A tool that helps visualize and prioritize risks based on their likelihood and impact.
Tenable.io: Provides risk-based vulnerability prioritization features based on business context
and threat intelligence.
Rapid7 Nexpose: A vulnerability scanner that also helps in risk-based prioritization, assigning
scores based on business relevance.
5. Remediation:
Once vulnerabilities are prioritized, the next step is to take action to fix or mitigate the issues.
Remediation can involve patching systems, reconfiguring security settings, or implementing new
security controls.
Key Activities:
Patching: Apply patches to fix software vulnerabilities. Ensure that systems are up to date with
the latest security updates.
Configuration changes: Address misconfigurations that leave systems open to attack (e.g.,
disable unnecessary services, close unused ports, enforce strong authentication).
Compensating controls: When vulnerabilities cannot be immediately fixed (e.g., in legacy
systems), implement alternative security measures such as firewalls or network segmentation.
Tools:
Patch management tools: Tools like ManageEngine Patch Manager and Ivanti help automate
the process of patching vulnerable systems.
Configuration management tools: Tools like Ansible or Chef can automate the configuration
of systems to reduce vulnerabilities.
Firewall/IDS/IPS: Intrusion detection and prevention systems (e.g., Snort, Suricata) can help
detect and block exploitation attempts.
6. Verification and Re-scanning:
After remediation, it is essential to verify that the vulnerabilities have been properly addressed. This
ensures that the fixes are effective and that no new vulnerabilities have been introduced.
Key Activities:
Re-scan the remediated systems, retest the applied fixes, and confirm that no new vulnerabilities
have been introduced.
Tools:
Nessus and Qualys: Re-scan the systems to ensure that the vulnerabilities have been remediated.
Burp Suite: Conduct further testing on web applications to confirm that fixes have been properly
implemented.
Metasploit: Can be used for penetration testing to simulate real-world attacks and verify the
security of the systems after patching.
7. Reporting and Documentation:
After completing the vulnerability assessment, it is essential to generate detailed reports and document
the findings, actions taken, and lessons learned.
Key Activities:
Report generation: Produce detailed reports for stakeholders (e.g., management, IT teams,
compliance officers) to outline the vulnerabilities found, their severity, and the steps taken to
mitigate them.
Compliance documentation: Ensure that the findings and remediations meet regulatory or
industry standards (e.g., PCI-DSS, HIPAA, GDPR).
Tools:
Reporting modules built into scanners such as Nessus and Qualys can be used to generate
findings reports for stakeholders and to support compliance documentation.
Vulnerability Assessment and Management (VAM) is a continuous process that helps organizations
identify, assess, prioritize, and mitigate security vulnerabilities. It plays a crucial role in improving the
overall security posture of an organization by addressing weaknesses that could be exploited by
attackers. The process typically includes several stages, with vulnerability scanning, patch
management, and risk assessment being central to the identification and mitigation of security risks.
1. Vulnerability Assessment Process
The vulnerability assessment process involves systematically identifying and evaluating weaknesses in
an organization's IT environment. This process helps organizations understand potential vulnerabilities
and their potential impact, allowing them to take appropriate actions to address them.
1. Vulnerability Scanning
Vulnerability scanning is a critical activity in the vulnerability assessment process. It uses automated tools
to detect vulnerabilities in systems, networks, and applications.
How it works: Vulnerability scanning tools work by comparing systems against known vulnerability
databases (e.g., CVE database) to identify weaknesses. These scans detect issues like missing patches,
open ports, outdated software, and insecure configurations.
Types of scans:
o Network scanning: Detects vulnerabilities in network infrastructure (e.g., routers, switches,
firewalls).
o Web application scanning: Identifies vulnerabilities in web applications, such as SQL
injection or cross-site scripting (XSS).
o Host-based scanning: Focuses on vulnerabilities in individual devices (e.g., servers,
workstations).
Importance: Scanning provides the foundation for identifying vulnerabilities and gives insight into
which systems need attention.
2. Patch Management
Patch management is the process of regularly updating and applying security patches to software,
operating systems, and applications to close security gaps and fix vulnerabilities.
How it works: When vendors release patches for vulnerabilities, patch management ensures these
patches are tested, approved, and applied to relevant systems across the organization.
Tools: Patch management tools (e.g., ManageEngine Patch Manager Plus, Ivanti Patch for
Windows) automate the patching process, ensuring that all systems remain up-to-date with the latest
security patches.
Importance: Patching is one of the most effective ways to fix vulnerabilities before they can be
exploited by attackers. It reduces the likelihood of successful attacks like ransomware or zero-day
exploits.
3. Risk Assessment
Risk assessment is a process used to evaluate the potential impact of identified vulnerabilities. It involves
calculating the risk based on the likelihood of exploitation and the potential impact on the organization.
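A simple way to express this calculation is a likelihood-times-impact score; the 1-to-5 rating scales in the sketch below are a common convention used here only for illustration, not a mandated standard.

def risk_score(likelihood, impact):
    # likelihood and impact are each rated from 1 (very low) to 5 (very high)
    return likelihood * impact

# A vulnerability that is likely to be exploited (4) on a business-critical asset (5)
print(risk_score(4, 5))   # 20 -> treat as high risk and remediate first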
How Vulnerability Scanning, Patch Management, and Risk Assessment Work Together
These three components work hand in hand to identify and mitigate security risks:
1. Vulnerability Scanning: Helps identify vulnerabilities within the network, systems, and applications.
It’s the first step in discovering potential weaknesses.
2. Risk Assessment: Once vulnerabilities are identified, risk assessment helps evaluate their severity and
potential impact on the business. This allows the organization to prioritize the remediation of the most
critical vulnerabilities.
3. Patch Management: After vulnerabilities are prioritized, patch management ensures that the
necessary patches are applied to fix those vulnerabilities, reducing the attack surface.
The Vulnerability Management Lifecycle (VML) is a structured, ongoing process that helps
organizations identify, evaluate, prioritize, remediate, and monitor vulnerabilities within their systems,
networks, and applications. The goal of vulnerability management is to reduce the attack surface,
prevent exploitation, and ultimately strengthen the organization's security posture.
Here are the key components of the vulnerability management lifecycle and how they contribute to an
organization's security posture:
1. Vulnerability Identification
Vulnerability Identification is the first phase in the lifecycle. In this phase, security teams discover
potential vulnerabilities within the organization’s systems, networks, applications, or infrastructure.
Key Activities:
Automated Scanning: Use tools like Nessus, Qualys, OpenVAS, or Rapid7 Nexpose to scan
systems for known vulnerabilities, misconfigurations, and outdated software.
Asset Inventory: Maintain an up-to-date inventory of all IT assets (servers, workstations, network
devices, applications) that need to be assessed for vulnerabilities.
Manual Testing: Complement automated scans with manual penetration testing, code reviews, or
other techniques to identify security weaknesses that may not be easily detected by automated tools.
Proactive Defense: Identifying vulnerabilities early allows an organization to address them before
they are exploited by attackers.
Visibility: Provides a clear understanding of the current security state, which is critical for informed
decision-making.
2. Vulnerability Prioritization
Vulnerability Prioritization helps organizations determine which vulnerabilities pose the most
significant risk and should be addressed first. Not all vulnerabilities are equally critical, and prioritization
ensures resources are focused on the most important issues.
Key Activities:
Risk Assessment: Evaluate the likelihood of a vulnerability being exploited and the potential business
impact. Factors like the criticality of the affected asset, the existence of exploits, and how easily a
vulnerability can be exploited are considered.
Severity Scoring: Use standards like CVSS (Common Vulnerability Scoring System) to assign
severity scores to vulnerabilities, helping to rank them based on risk level.
Business Context: Consider the importance of the vulnerable asset to the organization. For example,
vulnerabilities in customer-facing systems or systems handling sensitive data may be prioritized over
others.
Efficient Resource Allocation: Helps focus remediation efforts on the vulnerabilities that present the
highest risk to the organization, improving overall security efficiency.
Risk Reduction: By addressing the most critical vulnerabilities first, the organization reduces its
exposure to high-impact attacks.
3. Vulnerability Remediation
Vulnerability Remediation is the phase where security teams apply fixes or mitigations to vulnerabilities.
The goal is to eliminate or reduce the potential for exploitation by closing security gaps.
Key Activities:
Patch Management: Apply patches to update software, operating systems, or applications to fix
vulnerabilities. Tools like ManageEngine Patch Manager Plus or Ivanti can automate patch
deployment.
Configuration Changes: Correct misconfigurations, disable unnecessary services, or implement
stronger authentication and access controls.
Security Controls: Implement compensating controls when patches or other fixes are not immediately
possible. This could include firewalls, intrusion detection/prevention systems (IDS/IPS), or network
segmentation.
Upgrades: In cases where vulnerabilities cannot be patched, upgrade outdated software or hardware
to newer, more secure versions.
4. Vulnerability Monitoring
Vulnerability Monitoring is a continuous process that involves tracking the status of vulnerabilities,
assessing new threats, and ensuring the effectiveness of remediation efforts over time. This phase helps
ensure that the organization remains secure as new vulnerabilities emerge and the threat landscape
evolves.
Key Activities:
Continuous Scanning: Perform regular vulnerability scans to identify new vulnerabilities, as well as
to check if previously remediated vulnerabilities have reappeared.
Threat Intelligence: Stay informed of new vulnerabilities and zero-day exploits that may affect the
organization by subscribing to threat intelligence feeds or using vulnerability databases like CVE
(Common Vulnerabilities and Exposures).
Real-Time Monitoring: Use Security Information and Event Management (SIEM) systems, such as
Splunk or QRadar, to monitor for signs of exploitation or attempted attacks.
Re-assessment: Regularly reassess the security posture to determine if new vulnerabilities have been
introduced due to changes in the environment or updates to software.
Ongoing Protection: Continuous monitoring ensures that vulnerabilities are identified and addressed
promptly, maintaining a strong security posture even as new threats emerge.
Adaptation: Enables the organization to stay responsive to changes in the threat landscape, such as
the discovery of new vulnerabilities or exploits targeting existing systems.
Accountability and Compliance: Ongoing monitoring helps demonstrate to regulators, auditors, and
stakeholders that the organization is taking proactive measures to manage vulnerabilities.
How each phase of the lifecycle contributes to the organization's security posture:
1. Vulnerability Identification:
o This is the first line of defense, providing visibility into the security state of the organization’s
infrastructure. Identifying vulnerabilities proactively helps prevent attackers from exploiting
unknown or overlooked weaknesses.
o Security Posture Impact: It ensures that the organization is aware of its vulnerabilities and
can take appropriate action to address them before they are exploited.
2. Vulnerability Prioritization:
o Prioritization ensures that limited security resources are used efficiently, addressing the most
critical vulnerabilities first. It helps the organization focus on the vulnerabilities that have the
highest potential impact on business operations, data confidentiality, and system availability.
o Security Posture Impact: It enables organizations to reduce their overall risk by eliminating
the most dangerous vulnerabilities before focusing on less severe ones.
3. Vulnerability Remediation:
o Remediation is where the organization takes concrete action to fix identified vulnerabilities
through patching, configuration changes, and implementing additional security measures.
Proper remediation minimizes the attack surface and reduces the likelihood of successful
exploitation.
o Security Posture Impact: By remediating vulnerabilities, the organization reduces the chance
of a security breach, improving the defense against cyberattacks.
4. Vulnerability Monitoring:
o Continuous monitoring ensures that vulnerabilities are continuously identified and that
remediation measures remain effective over time. It helps organizations stay ahead of evolving
threats and ensure that security weaknesses do not recur.
o Security Posture Impact: Ongoing monitoring allows for early detection of new
vulnerabilities or attack attempts, ensuring that the organization maintains a strong, adaptive
defense.
************************************************************************************
Physical security considerations are essential components of an organization's overall security strategy.
While cyber security often takes the spotlight in discussions of modern security, physical security is
equally important. It involves protecting an organization’s physical assets, infrastructure, and personnel
from physical threats that could compromise security, safety, or business continuity.
Four levels of Physical security
1. Perimeter Security – Protects the outer boundary of the organization (e.g., fences, gates, lighting,
surveillance).
2. Facility or Building Security – Controls access to the building and its general areas (e.g., entry control,
locks, visitor management).
3. Internal Security – Secures specific areas within the building, especially those with critical
infrastructure or data (e.g., restricted access areas, IDS, security patrols).
4. Asset or Object Security – Protects valuable or sensitive assets from theft, damage, or unauthorized
access (e.g., secure storage, environmental controls, asset tracking).
Give two examples of physical security controls.
1. Access Control Systems
2. Surveillance Cameras
Role of access control in physical security
1. Restricting Unauthorized Access
2. Monitoring and Auditing
3. Protecting Sensitive Areas and Assets
4. Minimizing the Risk of Insider Threats
5. Enforcing the Principle of Least Privileges
Various types of physical security threats and how organizations can mitigate these risks.
Here are some common types of physical security threats and ways organizations can mitigate them:
1. Unauthorized Access
Unauthorized access occurs when individuals gain entry to restricted areas without proper clearance or
credentials.
Mitigation Strategies:
Access Control Systems: Use key cards, biometric authentication, PIN codes, or multi-factor
authentication (MFA) to restrict access to sensitive areas.
Surveillance Cameras: Install CCTV cameras to monitor access points and sensitive areas. This acts
as a deterrent and helps in identifying unauthorized individuals.
Security Personnel: Deploy trained guards at entry points to verify identification, check credentials,
and ensure only authorized personnel enter the facility.
Visitor Management: Implement strict visitor management systems where visitors are logged, given
temporary access badges, and escorted while on the premises.
2. Theft
Theft involves the unlawful removal of assets, such as equipment, data, intellectual property, or inventory,
either by external criminals or insiders.
Mitigation Strategies:
Secure Storage: Use locked cabinets, safes, and secure rooms to store valuable items like cash,
documents, and sensitive equipment.
Asset Tracking: Implement asset management and tracking systems (RFID, barcode scanners) to
monitor valuable assets in real-time.
Environmental Design: Place critical assets in areas with restricted access, preferably with additional
physical barriers (e.g., locked doors, safes).
Employee Background Checks: Conduct thorough background checks on employees, particularly
those with access to sensitive assets, to reduce the risk of internal theft.
3. Vandalism
Vandalism involves the deliberate destruction or defacement of property, which can lead to operational
disruptions or financial loss.
Mitigation Strategies:
Physical Barriers: Install barriers, gates, or fencing around vulnerable areas to prevent unauthorized
individuals from accessing or damaging property.
Lighting: Ensure that areas around the premises are well-lit, particularly in high-risk areas like parking
lots and perimeters, to deter vandals.
Surveillance: Use CCTV cameras to monitor areas prone to vandalism. Regular monitoring helps to
catch perpetrators in the act and can act as a deterrent.
Security Patrols: Conduct regular patrols by security personnel, particularly during off-hours, to
prevent vandalism.
4. Natural Disasters
Natural disasters (e.g., earthquakes, floods, fires, hurricanes) can damage physical infrastructure, disrupt
operations, and put people at risk.
Mitigation Strategies:
Building Codes and Safety Measures: Ensure that buildings meet local safety and environmental
codes. For example, implement fire-resistant building materials, flood barriers, and earthquake-proof
construction.
Emergency Evacuation Plans: Develop and practice emergency evacuation plans. Ensure all
employees are trained to respond to disasters such as fires or earthquakes.
Backup Systems: Install fire suppression systems, water drainage systems, and backup power supplies
to minimize damage caused by disasters and ensure business continuity.
Disaster Recovery Plans: Develop a business continuity and disaster recovery plan that includes
protocols for restoring operations after a natural disaster.
5. Sabotage (Insider Threats)
Sabotage occurs when individuals, often employees or contractors, intentionally damage an organization’s
assets or operations. This can include data destruction, physical damage to equipment, or intellectual
property theft.
Mitigation Strategies:
Access Control: Limit access to sensitive areas based on job roles, and regularly review and update
access permissions.
Employee Monitoring: Monitor employee activities, particularly in critical areas like IT systems or
equipment rooms, to detect suspicious behavior.
Security Policies: Develop and enforce strict security policies that outline acceptable behavior, as
well as penalties for violating policies.
Insider Threat Programs: Implement a comprehensive insider threat program that includes
continuous monitoring, employee training, and whistleblower policies to detect and address potential
threats from within.
6. Physical Attacks
Physical attacks, such as armed robbery or terrorism, involve violent acts intended to harm personnel,
steal assets, or disrupt operations.
Mitigation Strategies:
Security Guards and Personnel: Employ trained security guards to monitor and patrol premises,
particularly high-risk areas like cash handling or valuable equipment storage.
Metal Detectors and Security Checks: Install metal detectors and conduct security screenings at
entry points to detect weapons or other dangerous items.
Emergency Response Plans: Develop and practice emergency response plans to address various
attack scenarios (e.g., active shooter situations, terrorist threats).
Physical Barriers: Use bollards, secure entry points, and vehicle barriers to protect against vehicle-
based attacks or forced entry.
7. Fire and Explosions
Fire and explosions can cause catastrophic damage to a facility and endanger employees' lives.
Mitigation Strategies:
Fire Detection and Suppression Systems: Install fire alarms, smoke detectors, and sprinkler systems
throughout the building. These systems can quickly detect and contain fires before they spread.
Fireproofing: Use fire-resistant materials for structural components and critical infrastructure,
especially in areas like server rooms and electrical panels.
Fire Drills and Training: Regularly train employees on fire safety, evacuation procedures, and how
to use fire extinguishers.
Safe Storage of Hazardous Materials: If chemicals or flammable materials are used, ensure they are
stored and handled according to safety protocols to prevent explosions or fires.
8. Power Failures
Power failures or outages can disrupt operations, damage equipment, or lead to security lapses if alarm
systems or surveillance cameras stop working.
Mitigation Strategies:
Uninterruptible Power Supply (UPS): Install UPS systems in critical areas, such as data centers, to
maintain power in the event of an outage and allow for graceful shutdowns.
Backup Generators: Ensure that backup generators are in place to provide power during extended
outages, particularly in facilities with sensitive equipment or systems.
Power Grid Security: Implement systems to monitor power usage and detect unusual consumption
patterns that could indicate a potential power supply issue.
9. Social Engineering (Tailgating and Impersonation)
Social engineering involves tricking individuals into granting access to restricted areas or systems through
deception. This could involve someone posing as a maintenance worker, delivery person, or visitor to gain
physical entry.
Mitigation Strategies:
Employee Training: Regularly train employees to recognize and report social engineering tactics,
such as tailgating or impersonation attempts.
Visitor Verification: Ensure all visitors are properly logged, verified, and escorted within the facility.
Tailgating Prevention: Enforce strict access control measures that prevent unauthorized individuals
from following authorized personnel into secure areas.
How access control systems, surveillance, environmental controls and secure facilities contribute to
protecting physical assets from theft, damage or unauthorized access.
1. Access Control Systems
Access control systems regulate and monitor who can enter or exit specific areas within a facility,
ensuring that only authorized personnel have access to sensitive or restricted locations.
Restricting Unauthorized Access: Access control systems limit entry to high-risk areas such as
server rooms, data centers, or storage areas, preventing unauthorized individuals from gaining
access to valuable equipment or sensitive data.
Layered Security: Through the use of smart cards, biometric authentication, PIN codes, or
multi-factor authentication, these systems enforce least privilege by ensuring only those with
the appropriate credentials can access specific locations.
Tracking and Auditing: Access control systems often maintain logs of who enters which areas
and when. This tracking is useful for identifying any unusual or unauthorized attempts to access
restricted areas and helps provide an audit trail in case of theft or other security incidents.
Deterring Insider Threats: By controlling access to specific areas based on job roles, access
control systems help prevent insider threats, such as employees stealing or tampering with
equipment.
2. Surveillance (CCTV)
Surveillance systems, such as CCTV cameras, monitor activity within and around the premises,
providing real-time video feeds of high-risk areas, entry points, and restricted zones.
Deterrence: The presence of surveillance cameras acts as a strong deterrent to potential thieves or
vandals. Knowing that they are being watched can discourage individuals from attempting to steal,
damage, or unlawfully access facilities.
Evidence Collection: In the event of theft, damage, or a security breach, surveillance footage can
be used as evidence to identify perpetrators and understand the sequence of events. This can also
help law enforcement in investigations.
Real-time Monitoring: Security personnel can monitor surveillance feeds in real time to detect
suspicious activity and respond immediately to prevent unauthorized access or theft.
Post-Incident Investigation: After an incident, recorded footage can help determine how the
breach occurred, which individuals were involved, and whether any vulnerabilities need to be
addressed in the security system.
3. Environmental Controls
Environmental controls are systems designed to protect assets from environmental threats such as
temperature fluctuations, humidity, fire, water damage, or power failure.
Contribution to Protecting Physical Assets:
Temperature and Humidity Regulation: Sensitive equipment, such as servers, data storage
devices, or laboratory instruments, can be damaged by excessive heat, humidity, or static
electricity. Environmental controls such as air conditioning, dehumidifiers, and temperature
sensors help maintain optimal conditions for the preservation and functionality of these assets.
Fire Prevention and Suppression: Fire safety systems, such as smoke detectors, sprinklers, and
fire suppression systems (e.g., CO2 or FM-200 systems), help prevent and manage fire risks. This
is crucial in areas like data centers or electrical rooms, where fire can cause significant damage to
valuable assets and disrupt operations.
Flood Protection: Water sensors and flood barriers are used to protect areas prone to water
damage, such as basements or ground-level storage. For example, raised flooring in server rooms
helps protect electronic equipment from potential water damage.
Power Management: Backup power supplies, like UPS systems (uninterruptible power supplies)
and generators, ensure that critical equipment continues to operate during a power outage,
protecting data integrity and preventing downtime.
4. Secure Facilities
Secure facilities refer to physical spaces that are designed to provide protection from external threats,
such as break-ins, vandalism, or natural disasters, while ensuring that sensitive assets are safely stored or
housed.
Physical Barriers: Secure facilities often incorporate physical barriers such as fencing, gates,
reinforced doors, and windows that prevent unauthorized individuals from gaining access to the
building or restricted areas. Strong doors, windows with shatterproof glass, and security gates
create an initial barrier against theft and break-ins.
Controlled Entry Points: Secure facilities employ controlled entry points (e.g., security gates,
turnstiles, and guarded entrances) to prevent unauthorized individuals from entering the premises.
Access is granted only to those with proper credentials, ensuring restricted areas are protected.
Locking Mechanisms: High-security locks on doors, windows, and cabinets within secure
facilities ensure that physical assets, such as documents, cash, or expensive equipment, are locked
away from unauthorized access. This may include biometric locks, electronic keycards, or PIN
code systems for added security.
Safe Rooms and Vaults: In facilities dealing with valuable assets (e.g., banks, data centers),
sensitive items may be stored in safe rooms, vaults, or high-security cabinets that are designed
to be difficult to breach. These areas provide an additional level of protection to prevent theft or
damage.
Redundancy: Secure facilities often feature redundant systems for critical infrastructure, such
as backup power sources, fail-safe mechanisms for doors and alarms, and secure storage options.
This redundancy ensures that even if one security measure fails, others are still in place to protect
assets.
**************************************X*****************************************
Course Code/Title:CS3404 Unit:4
I. What is Cryptography?
Cryptography is the science of encrypting and decrypting data to prevent unauthorized access.
Encryption is the process of making the plaintext unreadable to any third party, which generates the
ciphertext. Decryption is the process of reversing the encrypted text to its original readable format,
i.e., plaintext.
Classification of Cryptography
II. There are two types of encryption in cryptography:
1. Symmetric Encryption
2. Asymmetric Encryption
Symmetric Encryption algorithm relies on a single key for encryption and decryption of information.
Both the sender and receiver of the message need to have a pre-shared secret key that they will use to
convert the plaintext into ciphertext and vice versa. As shown in the figure below, the same key that is
used to encrypt the original message is also used to decrypt the ciphertext. The key must be kept
private and be known only to the sender and the receiver.
Why is Symmetric Key Cryptography Called Private Key Cryptography?
With the entire architecture of Symmetric Cryptography depending on the single key being
used, you can understand why it’s of paramount importance to keep the key secret on all occasions. If
the sender somehow transmits the secret key along with the ciphertext, anyone can intercept the
package and access the information. Consequently, this encryption category is termed private key
cryptography, since the security of the data rests on the users keeping the key secret. Provided you
manage to keep the keys secret, you still have to choose what kind of
ciphers you want to use to encrypt the information. In symmetric-key cryptography, there are broadly
two categories of ciphers that you can employ.
What Are the Types of Ciphers Being Used?
Two types of ciphers can be used in symmetric algorithms. These two types are:
• Stream Ciphers
• Block Ciphers
1) Stream Ciphers
Stream ciphers are the algorithms that encrypt basic information, one byte/bit at a time. You use a
bitstream generation algorithm to create a binary key and encrypt the plaintext.
The process for encryption and decryption using stream ciphers is as follows (a short code sketch is given after the list):
• Get the plaintext to be encrypted.
• Create a binary key using the bitstream generation algorithm.
• Perform XOR operation on the plaintext using the generated binary key.
• The output becomes the ciphertext.
• Perform XOR operation on the ciphertext using the same key to get back the plaintext.
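A minimal Python sketch of this XOR-based process is shown below; random bytes stand in for the output of a bitstream generation algorithm.

import secrets

def xor_stream(data, keystream):
    # XOR each data byte with the corresponding keystream byte
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"HELLO"
keystream = secrets.token_bytes(len(plaintext))    # stand-in for a keystream generator
ciphertext = xor_stream(plaintext, keystream)
recovered = xor_stream(ciphertext, keystream)      # XOR with the same keystream restores the plaintext
print(ciphertext, recovered)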
1) Caesar Cipher
The Caesar cipher is one of the simplest substitution ciphers: each letter of the plaintext is shifted a fixed
number of positions down the alphabet to produce the ciphertext (for example, with a shift of 3, A becomes
D and B becomes E). Decryption shifts each letter back by the same amount.
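A short Python sketch of the Caesar cipher with a shift of 3 (the shift value and the sample message are arbitrary):

def caesar_encrypt(text, shift):
    result = ""
    for ch in text.upper():
        if ch.isalpha():
            result += chr((ord(ch) - 65 + shift) % 26 + 65)   # shift the letter within A-Z
        else:
            result += ch                                       # leave spaces and punctuation unchanged
    return result

print(caesar_encrypt("ATTACK AT DAWN", 3))   # prints DWWDFN DW GDZQ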
2) Vernam Cipher in Cryptography
Vernam Cipher is a method of encrypting alphabetic text. It is one of the Substitution techniques for
converting plain text into cipher text. In this mechanism, we assign a number to each character of the
Plain-Text, like (a=0, b=1, c=2, …, z=25).
Method to take the key: In the Vernam cipher algorithm, the key used to encrypt the plain text must have
the same length as the plain text.
Encryption Algorithm
• Assign a number to each character of the plain text and the key according to alphabetical order.
• Bitwise XOR both numbers (the corresponding plain-text character number and the key character
number).
• If the resulting number is greater than or equal to 26, subtract 26 from it; otherwise leave it
unchanged.
Example 1:
Plain-Text: O A K
Key: S O N
O ==> 14 = 0 1 1 1 0
S ==> 18 = 1 0 0 1 0
Bitwise XOR Result: 1 1 1 0 0 = 28
Since the resulting number is greater than 26, subtract 26 from it. Then convert the Cipher-Text
character number to the Cipher-Text character.
28 - 26 = 2 ==> C
CIPHER-TEXT: C
Similarly, do the same for the other corresponding characters,
PT: O A K
NO: 14 00 10
KEY: S O N
NO: 18 14 13
The cipher text is obtained by converting each resulting number back to its corresponding character.
CT-NO: 02 14 07
CT: C O H
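The letter-number Vernam scheme described above can be sketched in Python as follows; it assumes an uppercase A-Z alphabet and a key of the same length as the plain text.

def vernam_encrypt(plaintext, key):
    cipher = ""
    for p_ch, k_ch in zip(plaintext.upper(), key.upper()):
        value = (ord(p_ch) - 65) ^ (ord(k_ch) - 65)   # XOR the two letter numbers
        if value >= 26:                               # wrap around, as described in the notes
            value -= 26
        cipher += chr(value + 65)
    return cipher

print(vernam_encrypt("OAK", "SON"))   # prints COH, matching the worked example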
Advantages of the Vernam Cipher
1) Perfect Secrecy 2) Unbreakable with a Truly Random Key 3) No Pattern Recognition 4)
Simple Algorithm
Disadvantages of Vernam Cipher
1) Key Management 2) Key Reuse Compromises Security 3) Key Storage and Transmission 4)
Practicality and Efficiency 5) Key Length Equal to Message Length
Aspect | Stream Cipher | Block Cipher
Mode of Operation | Encrypts data one bit or byte at a time | Encrypts fixed-size blocks of data (e.g., 64 or 128 bits)
Key Usage | Generates a continuous key stream | Uses a single key to encrypt each block of data
Encryption Process | XORs the key stream with the plaintext byte-by-byte | Encrypts each block of data independently
Speed | Generally faster and more efficient for real-time data | Slower due to block processing
Error Propagation | Errors typically affect only a single byte | Errors in one block can affect the entire block
Data Size | Works well for data of unknown or variable size | Works with data that is a multiple of the block size
DES (Data Encryption Standard) Algorithm
Step 1: Key Transformation
The DES process uses a 56-bit key, which is obtained by eliminating all the bits present in
every 8th position in a 64-bit key. In this step, a 48-bit key is generated. The 56-bit key is split into
two equal halves and depending upon the number of rounds the bits are shifted to the left in a circular
fashion. Due to this, all the bits in the key are rearranged again. We can observe that some of the bits
get eliminated during the shifting process, producing a 48-bit key. This process is known as
compression permutation.
Step 2: Expansion Permutation
Let's consider the 32-bit right plain text (RPT) created in the initial permutation (IP) stage. In this step, it
is expanded from 32 bits to 48 bits. The 32-bit RPT is broken down into 8 chunks of 4 bits each, two extra
bits are added to every chunk, and the bits are then permutated among themselves, producing 48 bits of data.
An XOR function is applied in between the 48-bit key obtained from step 1 and the 48-bit expanded
RPT.
Steps for Encryption
There are multiple steps involved in the steps for data encryption. They are:
1. Permutate the 64-bits in the plain text and divide them into two equal halves.
2. These 32-bit chunks of data will undergo multiple rounds of operations.
3. Apply XOR operation in between expanded right plain text and the compressed key of 48-bit size.
4. The resultant output is sent to the further step known as S-box substitution.
5. Now apply the XOR function to the output and the left plain text and store it in the right plain text.
6. Store the initial right plain text in the left plain text.
7. Both the LPT and RPT halves are forwarded to the next rounds for further operations.
8. At the end of the last round, swap the data in the LPT and RPT.
9. In the last step, apply the inverse permutation step to get the cipher text.
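The steps above are what a DES implementation performs internally. For illustration, the sketch below simply uses DES through a library rather than re-implementing the rounds; it assumes the pycryptodome package is installed, and ECB mode is used only to keep the example short (it is not recommended in practice).

from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"                                  # DES keys are 8 bytes (56 effective key bits)
plaintext = b"Secret message"

cipher = DES.new(key, DES.MODE_ECB)
ciphertext = cipher.encrypt(pad(plaintext, DES.block_size))   # pad to a multiple of the 64-bit block size

recovered = unpad(DES.new(key, DES.MODE_ECB).decrypt(ciphertext), DES.block_size)
print(recovered)                                   # b'Secret message'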
AES Algorithm
The AES Encryption algorithm (also known as the Rijndael algorithm) is a symmetric block cipher
algorithm with a block/chunk size of 128 bits. It converts these individual blocks using keys of 128,
192, and 256 bits. Once it encrypts these blocks, it joins them together to form the ciphertext. It is
based on a substitution-permutation network, also known as an SP network. It consists of a series of
linked operations, including replacing inputs with specific outputs (substitutions) and others involving
bit shuffling (permutations).
The mentioned steps are to be followed for every block sequentially. Upon successfully
encrypting the individual blocks, it joins them together to form the final ciphertext. The steps are
as follows:
• Add Round Key: You pass the block data stored in the state array through an XOR function
with the first key generated (K0). It passes the resultant state array on as input to the next step.
• Sub-Bytes: In this step, each byte of the state array is written as two hexadecimal digits; the first
digit selects the row and the second digit selects the column of a substitution box (S-Box), which
supplies the new byte values for the final state array.
• Shift Rows: It swaps the row elements among each other. It skips the first row. It shifts
the elements in the second row, one position to the left. It also shifts the elements from
the third row two consecutive positions to the left, and it shifts the last row three
positions to the left.
• Mix Columns: It multiplies a constant matrix with each column in the state array to get a new
column for the subsequent state array. Once all the columns are multiplied with the same constant
matrix, you get your state array for the next step. This particular step is not to be done in the last
round.
• Add Round Key: The respective key for the round is XOR’d with the state array obtained in the
previous step. If this is the last round, the resultant state array becomes the ciphertext for the
specific block; otherwise, it passes as the new state array input for the next round.
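As with DES, the round operations above are handled internally by AES libraries. Here is a minimal usage sketch, assuming the pycryptodome package and using ECB mode only to mirror the independent block-by-block description (it is not recommended for real data):

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                         # 128-bit key; 192- and 256-bit keys are also supported
plaintext = b"Attack at dawn"

cipher = AES.new(key, AES.MODE_ECB)
ciphertext = cipher.encrypt(pad(plaintext, AES.block_size))   # pad to the 128-bit block size

recovered = unpad(AES.new(key, AES.MODE_ECB).decrypt(ciphertext), AES.block_size)
print(recovered)                                   # b'Attack at dawn'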
Asymmetric Encryption
Asymmetric encryption, also known as public-key cryptography, is a type of encryption that
uses a pair of keys to encrypt and decrypt data. The pair of keys includes a public key, which
can be shared with anyone, and a private key, which is kept secret by the owner.
The asymmetric encryption process shown in the figure above works as follows:
• Step 1: Alice uses Bob’s public key to encrypt the message
• Step 2: The encrypted message is sent to Bob
• Step 3: Bob uses his private key to decrypt the message
Applications
• Signatures: Verification of document origin and signature authenticity is possible today thanks
to asymmetric key cryptography.
• TLS/SSL handshake: Asymmetric key cryptography plays a significant role in verifying website
server authenticity, exchanging the necessary encryption keys, and generating a session using
those keys to ensure maximum security, enabling secure HTTPS connections instead of the rather
insecure HTTP format.
• Cryptocurrency: Blockchain-based cryptocurrencies use asymmetric key cryptography to
authorize transactions and maintain the integrity of their decentralized architecture.
• Key sharing: This cryptography category can also be used to exchange the secret keys needed for
symmetric encryption, since keeping such keys private is of utmost importance in that system.
Advantages of Asymmetric Encryption:
1) Enhanced Security: Asymmetric encryption uses two keys (public and private), making it more
secure than symmetric encryption since the private key is never shared.
2) No Key Distribution Problem: The public key is freely distributed and the private key remains
secret, eliminating the need for secure key exchange.
3) Authentication: Asymmetric encryption can verify the sender's identity using digital signatures,
ensuring data integrity and authenticity.
4) Scalability: It scales well in large environments, as each participant needs only one key pair
(public and private), reducing the complexity of managing multiple keys.
5) Confidentiality and Non-repudiation: It provides both confidentiality (encryption) and non-
repudiation (digital signatures), making it suitable for tasks like secure communications and legal
document exchange.
Disadvantages of Asymmetric Encryption:
1) Slower performance 2)Large Key Sizes 3)Complex key management 4) Vulnerability to private
key exposure 5)Not Ideal for large data
RSA (Rivest–Shamir–Adleman) Algorithm
• The RSA algorithm is one of the most widely used asymmetric encryption
techniques. It provides secure communication, digital signatures, and key exchange in
cryptographic systems. RSA (Rivest–Shamir–Adleman) is an asymmetric cryptographic
algorithm.
• It uses two keys:
o Public Key (e, n) → Used for encryption.
o Private Key (d, n) → Used for decryption.
• RSA is based on the mathematical difficulty of factoring the product of two large prime numbers.
• It is widely used in secure communications (SSL/TLS), digital signatures, and cryptocurrencies.
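A worked numeric sketch with deliberately tiny textbook primes can make the key relationships concrete; the specific numbers below are illustrative only, and real keys use primes hundreds of digits long.
# Toy RSA with small primes, for illustration only.
p, q = 61, 53            # two secret primes
n = p * q                # public modulus n = 3233
phi = (p - 1) * (q - 1)  # Euler's totient = 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753), Python 3.8+

m = 65                   # plaintext encoded as an integer smaller than n
c = pow(m, e, n)         # encryption: c = m^e mod n  -> 2790
assert pow(c, d, n) == m # decryption: c^d mod n recovers 65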
Hashing Function:
A cryptographic hash function (CHF) is an equation that is widely used to verify the validity of
data. . It has many applications, particularly in information security (e.g. user authentication). A
CHF translates data of various lengths of the message into a fixed-size numerical string the hash.
A cryptographic hash function is a single-directional work, making it extremely difficult to
reverse to recreate the information used to make it.
• The hash function processes the input in fixed-length blocks; the block size varies between algorithms.
• If the final block is too small, padding may be used to fill the space. However, regardless of the input size, the output, or hash value, always has the same fixed length.
• The compression step of the hash function is applied once for each data block.
Purpose:
• Secure against unauthorized alterations: even a minor change to a message results in the generation of a completely different hash value, so tampering is easy to detect.
• Protect passwords and operate at various speeds: many websites allow you to save your password so that you don't have to remember it each time you log in. However, keeping plaintext passwords on a public-facing server is risky since it exposes the information to thieves. Websites therefore hash passwords and store only the resulting hash values.
Procedure:
• Input Data: Start with the data (of any length) that needs to be hashed, such as text, numbers, or
files.
• Apply Hashing Algorithm: Use a specific hashing algorithm (e.g., SHA-256, MD5) to process
the input data.
• Generate Fixed-Length Output: The algorithm produces a fixed-length hash value (digest),
regardless of the input size.
• Verify Integrity: Compare hash values to detect any changes in the original data—if the
input changes, the hash will be significantly different.
Example
• One common example of a hash function in cryptography is its use in password verification. The cryptographic hash function transforms the user's password into a hash value, which is then stored instead of the plaintext password. This enhances security, as the hash value cannot be easily reverted to the original password (a minimal code sketch follows this list).
• Another significant application is in signature generation and verification. Here, a hash
function in cryptography is used to create a unique hash of a message, which is then
encrypted with a private key to form a digital signature. This process underscores the
importance of the cryptographic hash function in ensuring data integrity and
authentication.
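Here is a minimal sketch of the password-verification idea above, assuming Python's standard hashlib module; the salt size, iteration count, and sample password are illustrative choices, and production systems typically use a dedicated password-hashing scheme.
# Store only (salt, digest); the plaintext password is never stored.
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt, stored_digest = hash_password("correct horse battery staple")

def verify(candidate):
    # Recompute the hash with the stored salt and compare in constant time.
    return hmac.compare_digest(hash_password(candidate, salt)[1], stored_digest)

assert verify("correct horse battery staple") and not verify("wrong guess")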
Advantages of Cryptographic Hash Functions:
1. Data Integrity : Quickly detects any alterations in data.
2. Efficiency: Fast and resource-efficient, making it suitable for large datasets.
3. Irreversibility: Infeasible to reverse-engineer the input from the hash.
4. Fixed Output Size: Produces a consistent, fixed-length hash regardless of input size.
For example, suppose a message is hashed using SHA-1 to get the hash digest "06b73bd57b3b938786daed820cb9fa4561bf0e8e". A second, nearly identical message will produce a completely different digest, such as "66da9f3b8d9d83f34770a14c38276a69433a535b", when hashed with SHA-1. This is known as the avalanche effect. The phenomenon is crucial for cryptography, since it implies that even the smallest alteration to the input message entirely changes the output. As a result, attackers cannot work out what the original message said from the digest, and any alteration made while the message is in transit becomes evident to the message's recipient.
SHAs can aid in identifying any modifications made to an original message. A user can determine whether even one letter has been altered by comparing against the original hash digest, since the hash digests will be entirely different. The fact that SHAs are deterministic is one of their key features: any machine or user can reproduce the hash digest if they know which hash algorithm was used. Because of this determinism, every SSL certificate on the Internet is now required to be hashed with the SHA-2 family of algorithms.
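The avalanche effect is easy to observe directly. The sketch below uses SHA-256 from Python's standard hashlib module on two hypothetical messages that differ by a single character.
import hashlib

m1 = b"Transfer $100 to Alice"
m2 = b"Transfer $900 to Alice"   # a one-character change

print(hashlib.sha256(m1).hexdigest())
print(hashlib.sha256(m2).hexdigest())
# The two digests share no recognizable pattern despite the nearly identical inputs.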
Digital Signature:
A digital signature is a cryptographic mechanism used to verify the authenticity and integrity
of digital messages, documents, or software. It ensures that the content has not been altered
during transmission and confirms the identity of the sender. Digital signatures rely on
asymmetric encryption, using a pair of keys—public and private. The sender uses their private
key to create a unique signature for a message or document, and the recipient uses the
corresponding public key to verify that the signature is valid. This process establishes trust in
electronic transactions, making it essential for secure communication in many areas like legal
agreements, financial transactions, and software distribution.
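A minimal signing and verification sketch follows, assuming the third-party Python cryptography package with RSA-PSS; the message and key size are illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"contract v1: both parties agree"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = signer_key.sign(message, pss, hashes.SHA256())           # sender: private key

try:                                                                 # recipient: public key
    signer_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid or message altered")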
Key Management
Cryptographic key management encompasses the keys themselves together with the supporting infrastructure: key servers, user procedures, and other relevant protocols.
The cryptographic key management lifecycle is essential for the secure creation, handling, and
destruction of cryptographic keys, which form the foundation of secure communications and data
protection. The lifecycle ensures that cryptographic keys are generated, distributed, stored, used, and
destroyed securely to prevent unauthorized access or misuse. Below is an outline of each key phase of
the lifecycle.
Key Generation
Key generation is the first phase in the lifecycle, where cryptographic keys are created. This
process is critical because the security of the cryptographic system depends on the randomness
and strength of the keys. Symmetric keys, such as those used in AES encryption, and
asymmetric key pairs, such as those used in RSA encryption, must be generated using secure
algorithms and processes to ensure they are sufficiently unpredictable and resistant to attacks.
Secure environments, such as hardware security modules (HSMs) or trusted systems, are often
used to ensure that key generation is not compromised.
Key Establishment
Once generated, cryptographic keys need to be securely distributed to the parties that require
them. This process is known as key establishment and involves securely transmitting keys to
authorized users or systems. In symmetric encryption, a single key must be securely shared
between two parties. For asymmetric encryption, the public key can be freely distributed, while
the private key remains secret. Secure key exchange protocols, such as Diffie-Hellman or Elliptic
Curve Diffie-Hellman (ECDH), are commonly used to establish keys in a secure manner,
ensuring that attackers cannot intercept or tamper with the keys during transmission.
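To make the idea of key establishment concrete, here is a toy Diffie-Hellman exchange with deliberately tiny numbers; all values are illustrative, and real deployments use groups with 2048-bit or larger moduli.
p, g = 23, 5                        # public parameters: prime modulus and generator
a, b = 6, 15                        # private values chosen independently by each party
A = pow(g, a, p)                    # Alice sends A = g^a mod p  -> 8
B = pow(g, b, p)                    # Bob sends   B = g^b mod p  -> 19
assert pow(B, a, p) == pow(A, b, p) # both sides derive the same shared secret (2)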
Key Storage
Key storage involves keeping cryptographic keys safe during their lifetime. Storing keys securely is essential because unauthorized access to keys can lead to a complete
compromise of the system. Keys are typically stored in secure locations such as hardware
security modules (HSMs), encrypted databases, or secure key vaults. In addition to physical
security, software encryption techniques are often used to further protect stored keys. Proper
access control mechanisms must be implemented to ensure that only authorized users or systems
can retrieve and use the stored keys.
Key Usage
Once securely stored, cryptographic keys are used for their intended purpose, such as encryption,
decryption, signing, or verifying data. During this phase, it's important to ensure that keys are
used only for their designated cryptographic operations and that unauthorized use is prevented.
Secure key usage policies should be in place to define how and when keys can be used.
Additionally, monitoring and auditing key usage can help detect any suspicious activities, such
as unauthorized access attempts or key misuse.
Key Archival
When cryptographic keys are no longer in active use but must be retained for compliance
or auditing purposes, they are moved into archival storage. Key archival involves securely
storing keys in a way that they are protected but retrievable if needed in the future, for instance,
to decrypt old encrypted data or verify digital signatures. Archived keys should be stored
securely, and access to them should be highly restricted, as they can still be used to access
sensitive information.
Key Destruction
The final phase of the key management lifecycle is key destruction. Once keys are no longer
needed, or their lifetime has expired, they must be securely destroyed to prevent unauthorized
use in the future. Proper key destruction ensures that the keys cannot be recovered or
reconstructed. This typically involves securely erasing keys from storage, physically destroying
hardware where keys were stored, or using cryptographic algorithms to render the key data
unrecoverable. Secure destruction is essential to prevent data breaches or unauthorized
decryption of sensitive data long after the keys' operational use.
Secure Socket Layer (SSL)
SSL, or Secure Sockets Layer, is an Internet security protocol that encrypts data to keep it
safe. It was created by Netscape in 1995 to ensure privacy, authentication, and data integrity in
online communications. SSL is the older version of what we now call TLS (Transport Layer
Security). Websites using SSL/TLS have “HTTPS” in their URL instead of “HTTP”.
• Encryption: SSL encrypts data transmitted over the web, ensuring privacy. If someone intercepts
the data, they will see only a jumble of characters that is nearly impossible to decode.
• Authentication: SSL starts an authentication process called a handshake between two devices to
confirm their identities, making sure both parties are who they claim to be.
• Data Integrity: SSL digitally signs data to ensure it hasn’t been tampered with, verifying that the
data received is exactly what was sent by the sender.
Why is SSL Important?
Originally, data on the web was transmitted in plaintext, making it easy for anyone who
intercepted the message to read it. For example, if someone logged into their email account, their
username and password would travel across the Internet unprotected.
SSL was created to solve this problem and protect user privacy. By encrypting data between a
user and a web server, SSL ensures that anyone who intercepts the data sees only a scrambled
mess of characters. This keeps the user’s login credentials safe, visible only to the email service.
Additionally, SSL helps prevent cyber attacks by:
• Authenticating Web Servers: Ensuring that users are connecting to the legitimate website, not a
fake one set up by attackers.
• Preventing Data Tampering: Acting like a tamper-proof seal, SSL ensures that the data sent and
received hasn’t been altered during transit.
Secure Socket Layer Protocols
• SSL Record Protocol
• Handshake Protocol
• Change-Cipher Spec Protocol
• Alert Protocol
1. SSL Record Protocol
SSL Record provides two services to SSL connection.
• Confidentiality
• Message Integrity
In the SSL Record Protocol, application data is divided into fragments. Each fragment is compressed, and a MAC (Message Authentication Code) generated by an algorithm such as SHA (Secure Hash Algorithm) or MD5 (Message Digest) is appended to it. The fragment and MAC are then encrypted, and finally an SSL header is appended to the data.
2. Handshake Protocol
Handshake Protocol is used to establish sessions. This protocol allows the client and server to
authenticate each other by sending a series of messages to each other. Handshake protocol uses
four phases to complete its cycle.
• Phase-1: In Phase-1 both client and server send hello packets to each other. In this phase, the session ID, cipher suite and protocol version are exchanged for security purposes.
• Phase-2: The server sends its certificate and the Server-key-exchange message. The server ends Phase-2 by sending the Server-hello-done packet.
• Phase-3: In this phase, the client replies to the server by sending its certificate and the Client-key-exchange message.
• Phase-4: In Phase-4 the Change-cipher-spec messages are exchanged, and after this the Handshake Protocol ends.
3. Change-Cipher Spec Protocol
This protocol uses the SSL Record Protocol. Until the Handshake Protocol is completed, the SSL record parameters remain in a pending state; after the handshake completes, the pending state is converted into the current state. The Change-Cipher Spec protocol consists of a single message which is 1 byte in length and can have only one value. This protocol's purpose is to cause the pending state to be copied into the current state.
4. Alert Protocol
This protocol is used to convey SSL-related alerts to the peer entity. Each message in this
protocol contains 2 bytes.
The level is further classified into two parts:
Warning (level = 1): This Alert has no impact on the connection between sender and receiver. Some of them are:
• Bad Certificate: When the received certificate is corrupt.
• No Certificate: When an appropriate certificate is not available.
• Certificate Expired: When a certificate has expired.
• Certificate Unknown: When some other unspecified issue arose in processing the certificate,
rendering it unacceptable.
• Close Notify: It notifies that the sender will no longer send any messages in the connection.
• Unsupported Certificate: The type of certificate received is not supported.
• Certificate Revoked: The certificate received is in revocation list.
Fatal Error (level = 2):
This Alert breaks the connection between sender and receiver. The connection will be stopped; it cannot be resumed, but a new connection can be started. Some of them are:
• Handshake Failure: When the sender is unable to negotiate an acceptable set of security
parameters given the options available.
• Decompression Failure: When the decompression function receives improper input.
• Illegal Parameters: When a field is out of range or inconsistent with other fields.
• Bad Record MAC: When an incorrect MAC was received.
• Unexpected Message: When an inappropriate message is received. The second byte in the Alert
protocol describes the error.
Salient Features of Secure Socket Layer
• The advantage of this approach is that the service can be tailored to the specific needs of the
given application.
• Secure Socket Layer was originated by Netscape.
• SSL is designed to make use of TCP to provide reliable end-to-end secure service.
• This is a two-layered protocol.
SSL Certificate
SSL (Secure Sockets Layer) certificate is a digital certificate used to secure and verify the identity
of a website or an online service. The certificate is issued by a trusted third-party called a
Certificate Authority (CA), who verifies the identity of the website or service before issuing the
certificate.
Types of SSL Certificates:
• Single-Domain SSL Certificate
• Wildcard SSL Certificate
• Multi-Domain SSL Certificate
SSL certificates have different validation levels, which determine how thoroughly a business or
organization is vetted:
• Domain Validation (DV)
• Organization Validation (OV)
Benefits of TLS:
• Encryption: TLS/SSL can help to secure transmitted data using encryption.
• Interoperability: TLS/SSL works with most web browsers, including Microsoft Internet Explorer, and on most operating systems and web servers.
• Algorithm flexibility: TLS/SSL provides options for the authentication mechanism, encryption algorithm and hashing algorithm used during the secure session.
• Ease of Deployment: Many applications can deploy TLS/SSL transparently on a Windows Server 2003 operating system.
• Ease of Use: Because TLS/SSL is implemented beneath the application layer, most of its operations are completely invisible to the client.
Working of TLS
The client connects to the server (using TCP) and sends a number of specifications:
1. The version of SSL/TLS it supports.
2. Which cipher suites and compression methods it wants to use.
• The server checks the highest SSL/TLS version supported by both sides.
• It picks a cipher suite from the client's options and optionally picks a compression method.
• After this the basic setup is done, the server provides its certificate.
• This certificate must be trusted either by the client itself or a party that the client trusts.
• Having verified the certificate and being certain this server really is who he claims to be
(and not a man in the middle), a key is exchanged. This can be a public key,
“PreMasterSecret” or simply nothing depending upon cipher suite.
• Both the server and client can now compute the key for symmetric encryption.
The handshake is finished and the two hosts can communicate securely.
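A minimal client-side sketch of this handshake, assuming Python's standard ssl and socket modules; the hostname is an illustrative placeholder.
import socket, ssl

hostname = "example.com"                          # illustrative server
context = ssl.create_default_context()            # loads trusted root certificates

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                      # negotiated protocol, e.g. "TLSv1.3"
        print(tls.cipher())                       # negotiated cipher suite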
Significance of TLS:
Enhanced Security Features:
TLS employs a variety of cryptographic algorithms to provide a secure communication channel. This
includes symmetric encryption algorithms like AES (Advanced Encryption Standard) and asymmetric
algorithms like RSA and Diffie-Hellman key exchange. Additionally, TLS supports various hash
functions for message integrity, such as SHA-256, ensuring that data remains confidential and unaltered
during transit.
Certificate-Based Authentication:
When a client connects to a server, the server presents its digital certificate, which includes its public key
and other identifying information. The client verifies the authenticity of the certificate using trusted root
certificates stored locally or provided by a trusted authority, thereby establishing the server’s identity.
Forward Secrecy:
TLS supports forward secrecy, a crucial security feature that ensures that even if an attacker compromises
the server’s private key in the future, they cannot decrypt past communications.
TLS Handshake Protocol:
It involves multiple steps, including negotiating the TLS version, cipher suite, and exchanging
cryptographic parameters. The handshake concludes with the exchange of key material used to derive
session keys for encrypting and decrypting data.
Perfect Forward Secrecy (PFS):
Perfect Forward Secrecy is an advanced feature supported by TLS that ensures the confidentiality of past
sessions even if the long-term secret keys are compromised. With PFS, each session key is derived
independently, providing an additional layer of security against potential key compromise.
TLS Deployment Best Practices:
This includes regularly updating TLS configurations to support the latest cryptographic standards and
protocols, disabling deprecated algorithms and cipher suites, and keeping certificates up-to-date with
strong key lengths.
Public Key Infrastructure (PKI): Certificate Issuance and Management
1. User Requests a Certificate:
• A user (end entity) wants to obtain a digital certificate for secure communication.
• The request is sent to the Registration Authority (RA).
2. Identity Verification by RA:
• The RA checks the user's credentials and verifies their identity.
• If valid, the RA forwards the request to the Certificate Authority (CA).
3. Certificate Issuance by CA:
• The Certificate Authority (CA) generates a digital certificate containing:
• The user's public key
• The certificate details (validity, issuer, etc.)
• The CA digitally signs the certificate to prove its authenticity.
4. Certificate Storage & Distribution:
• The issued certificate is stored in the Certificate Repository.
• Users and applications can retrieve the certificate when needed.
5. Revocation (If Needed):
• If a certificate is compromised or expired, it is revoked.
• The Certificate Revocation List (CRL) is updated to block its use.
6. Users Verify the Certificate:
• Other users or applications retrieve the certificate for secure communication.
• The certificate is verified using the CA’s public key.
Cryptography in Blockchain:
A blockchain is a decentralized, digital ledger that records transactions across multiple
computers in a way that ensures security, transparency, and immutability. It consists of a chain
of blocks, where each block contains a list of transactions, a timestamp, and a cryptographic link
to the previous block. Blockchain technology is used in cryptocurrencies like Bitcoin and
Ethereum, but its applications extend to various industries, offering a secure way to track assets,
verify identities, and execute contracts without the need for a central authority.
Role of Cryptography in blockchain:
Cryptography is the method used to secure data from unauthorised access. Blockchain was built on the concept of enabling secure communication between two parties; therefore, it uses cryptography to secure the transactions that take place between two nodes on the network. The two main cryptographic methods used are asymmetric encryption and hashing.
Asymmetric Encryption: In blockchain technology, asymmetric encryption is crucial for
ensuring the authenticity and integrity of transactions. It utilizes a pair of cryptographic keys: a
public key and a private key. The public key is openly distributed and used by others to verify
the authenticity of transactions, while the private key remains confidential and is used to sign
transactions. When a user initiates a transaction, they sign it with their private key, generating a
digital signature that acts as a proof of ownership and authorization. This signature can be
validated by anyone with the corresponding public key, ensuring that the transaction is legitimate
and has not been altered. The use of asymmetric encryption thus helps in preventing unauthorized
access and tampering, as only the holder of the private key can create valid signatures that align
with the public key.
Hashing: Hashing is another fundamental cryptographic method used in blockchain to ensure
data integrity and immutability. Each block in a blockchain contains a unique hash value that is
generated based on the block’s data. This hash value serves as a digital fingerprint of the block's
contents. Additionally, each block includes the hash of the previous block, creating a chain of
blocks that are securely linked. This chaining process ensures that any alteration in a block’s
data would change its hash value, which would then invalidate the hashes of all subsequent
blocks. As a result, any attempt to tamper with the data in a block would be easily detectable, as
it would disrupt the entire chain. Hashing thus plays a critical role in preserving the security and
consistency of the blockchain by making it virtually impossible to alter historical data without
altering all subsequent blocks. This inherent immutability is a cornerstone of blockchain's
reliability and trustworthiness.
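A toy sketch of this hash chaining, using Python's standard hashlib and json modules; the block structure is a simplified assumption, not an actual blockchain format.
import hashlib, json

def block_hash(block):
    # Serialize the block deterministically and hash it.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"index": 0, "transactions": ["A pays B 5"], "prev_hash": "0" * 64}
block1 = {"index": 1, "transactions": ["B pays C 2"], "prev_hash": block_hash(genesis)}

# Tampering with the genesis block changes its hash, so block1's stored
# prev_hash no longer matches and the break in the chain is detectable.
genesis["transactions"][0] = "A pays B 500"
assert block1["prev_hash"] != block_hash(genesis)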
Benefits:
• Avalanche Effect: This property ensures that a small change in the input data will result in a
drastically different hash value. This makes it difficult for attackers to predict how changes will
affect the hash, thereby improving security and making tampering with data evident.
• Uniqueness: Hash functions are designed to produce a unique hash value for each unique input.
Even a slight change in the input data will result in a completely different hash. This uniqueness
helps in identifying and differentiating between distinct pieces of data and maintaining the
integrity of each block in the blockchain.
• Deterministic: Hash functions are deterministic, meaning that the same input will always
produce the same hash value. This consistency allows for reliable verification of data, as the hash
can be recomputed and compared against the original hash value to confirm that the data has not
been altered.
• Prevention of Reverse Engineering: Hash functions are designed to be one-way functions,
meaning that it is computationally infeasible to reverse-engineer the original input from its hash
value. This property protects data confidentiality by ensuring that even if the hash is exposed,
the original data cannot be easily reconstructed.
Limitations:
• Vulnerability: The level of difficulty and complexity of the mathematical problem used in a cryptographic technique determines its level of security. The less complex the problem, the more vulnerable the cryptographic technique is.
Cryptography is the foundation of blockchain technology. It provides the tools needed to encrypt
data, record transactions, and send cryptocurrency securely, all without a centralised authority. It
ensures all the blocks get added to the chain without any limit. Hashing in cryptography allows huge
amounts of transactions to be stored on the network while protecting them from hackers.
Transactions are made safe, reliable and scalable with the help of cryptography.
These benefits of cryptography in blockchain have led a large number of multinationals and emerging
startups to adopt this technology over the past few years. It is important to keep up with ongoing
blockchain projects to better understand its uses and applications in the real world.
What are the main differences between RSA and Diffie-Hellman key exchange?
To calculate the SHA-256 hash of a message like "Hello, World!", follow these steps:
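One way to reproduce this, assuming Python's standard hashlib module:
import hashlib

digest = hashlib.sha256("Hello, World!".encode("utf-8")).hexdigest()
print("SHA-256 Hash:", digest)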
OUTPUT:
SHA-256 Hash: dffd6021bb2bd5b0af676290809ec3a53191dd81c7f70a4b28688a362182986f
The process of creating and verifying a digital signature
Risks of Poor Key Management
Analyze the role of hash functions in digital signatures and authentication protocols.
Analyze the differences between symmetric and asymmetric encryption, providing examples of their
use cases.
1. Symmetric Encryption
Definition
Symmetric encryption uses a single key for both encryption and decryption. The same key is
shared between the sender and receiver.
How It Works?
1. The sender encrypts the plaintext message using a secret key.
2. The receiver decrypts the ciphertext using the same key.
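A minimal sketch of this single-shared-key flow, assuming the third-party Python cryptography package's Fernet recipe (an AES-based construction); the message is illustrative.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()          # the one secret both parties must hold
f = Fernet(shared_key)

token = f.encrypt(b"quarterly results attached")
assert f.decrypt(token) == b"quarterly results attached"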
Examples of Symmetric Algorithms
• AES (Advanced Encryption Standard) – Used in Wi-Fi security (WPA2), secure messaging
apps.
• DES (Data Encryption Standard) – Older algorithm, now considered weak.
• 3DES (Triple DES) – A more secure version of DES, used in legacy systems.
• Blowfish & Twofish – Used in VPNs and secure data storage.
Advantages of Symmetric Encryption
Fast and efficient – Suitable for large amounts of data.
Less computational overhead – Requires less processing power.
Disadvantages of Symmetric Encryption
Key distribution problem – Securely sharing the secret key is challenging.
Less secure for large-scale use – If the key is compromised, all encrypted data can be
decrypted.
Use Cases of Symmetric Encryption
File and disk encryption – BitLocker, VeraCrypt, AES-based encryption tools.
Wi-Fi security – WPA2 uses AES for securing network traffic.
Secure communication within a private system – Encrypting messages between devices in
a closed network.
2. Asymmetric Encryption
Definition
Asymmetric encryption (public-key cryptography) uses two keys:
• A public key (used for encryption).
• A private key (used for decryption).
How It Works?
1. The sender encrypts the message using the recipient’s public key.
2. The recipient decrypts it using their private key (which only they possess).
Examples of Asymmetric Algorithms
• RSA (Rivest-Shamir-Adleman) – Used in SSL/TLS certificates, email encryption (PGP).
• ECC (Elliptic Curve Cryptography) – Used in modern secure communications due to its
efficiency.
• Diffie-Hellman Key Exchange – Used for securely exchanging keys over an insecure network.
Advantages of Asymmetric Encryption
More secure key exchange – No need to share the private key.
Digital signatures – Provides authentication, integrity, and non-repudiation.
Works well in open networks – Suitable for Internet-based communications.
Disadvantages of Asymmetric Encryption
Slower than symmetric encryption – Requires more computational power.
Not efficient for encrypting large amounts of data – Often combined with symmetric
encryption.
Use Cases of Asymmetric Encryption
TLS/SSL (Secure Web Browsing) – Websites use RSA/ECC for secure HTTPS connections.
Email Security (PGP/GPG) – Ensures end-to-end encryption of emails.
Cryptocurrency Transactions – Uses asymmetric encryption for wallet security (Bitcoin,
Ethereum).
Digital Signatures – Used for verifying document authenticity in e-government and finance.
Course Code/ Title: IT3404/ INTRODUCTION TO CYBER SECURITY Unit :5
Security policies and procedures play a crucial role in protecting an organization’s information,
systems, and assets. They provide a structured approach to managing security risks, ensuring
compliance with regulations, and guiding employee behavior.
Security Policies
Security policies are formal documents that define an organization’s stance on security-related
matters. They establish high-level rules and expectations for managing data, access controls, and
system security. These policies serve as a foundation for implementing security measures and
ensuring that employees, contractors, and stakeholders follow best practices.
Key Characteristics:
Strategic and broad – Focus on the organization’s overall security goals.
Mandatory compliance – Employees and third parties must follow the rules.
Aligned with legal and industry regulations – Helps meet compliance standards.
Examples of Security Policies:
1. Information Security Policy: Defines how sensitive company and customer data should
be protected from unauthorized access, modification, and disclosure.
2. Access Control Policy: Specifies who can access what data and under what conditions,
ensuring that only authorized users can interact with sensitive information.
3. Acceptable Use Policy (AUP): Outlines the proper use of company resources, such as
email, internet, and IT systems, to prevent misuse and security risks.
4. Data Retention and Disposal Policy: Defines how long data should be stored and the
methods for securely deleting it when no longer needed.
5. Remote Work Security Policy: Establishes rules for securing company data when
employees work remotely, including VPN usage and device security.
Security Procedures
Security procedures are step-by-step instructions that detail how to implement security policies
in practice. While policies define what needs to be done, procedures explain how to do it. They
ensure consistency, minimize risks, and provide clear guidelines for employees handling
security-related tasks.
4. User Access Request Procedure: Explains how employees can request access to
specific systems and the approval process involved.
5. Security Awareness Training Procedure: Defines how employees should be trained on
cybersecurity best practices, phishing awareness, and secure handling of sensitive data.
1. Risk Reduction
By defining clear security guidelines, companies can prevent data breaches, unauthorized access,
and other cyber threats that could harm their operations.
Security policies and procedures form the backbone of an organization's cybersecurity framework.
Policies provide high-level guidance on security expectations, while procedures outline the
specific steps needed to implement them effectively. Together, they help organizations protect
sensitive data, comply with legal requirements, and reduce security risks. To maintain a strong
security posture, businesses must regularly update their policies, train employees, and enforce
compliance. By doing so, they can safeguard their digital assets and ensure long-term business
continuity in an ever-changing cybersecurity landscape.
Policy Development
Policy development is the process of creating structured guidelines that align with an
organization’s objectives and regulatory requirements. It involves multiple steps to ensure that the
policy is relevant, practical, and enforceable.
1.Identifying the Need for a Policy
Organizations develop policies to address specific needs, such as legal compliance, security risks,
operational efficiency, or ethical concerns. Some common reasons for creating policies include:
Ensuring compliance with industry regulations (e.g., GDPR, HIPAA, ISO 27001).
Protecting sensitive data and IT infrastructure.
Establishing workplace conduct and behavioral expectations.
4. Compliance and Consequences: Details on how compliance will be monitored and the
penalties for violations.
6.Review and Approval
Before a policy is finalized, it must be reviewed by key stakeholders, such as:
Legal and compliance teams to ensure it meets regulatory requirements.
IT and security teams to validate technical aspects.
Human resources and management to ensure practical implementation.
After review, senior management must formally approve the policy to provide legitimacy and
ensure organizational support.
Policy Implementation
Once a policy is developed, it must be effectively implemented to ensure compliance and achieve
its intended objectives. The implementation process involves communication, training,
monitoring, and continuous improvement.
Policy development and implementation are essential for ensuring security, compliance, and
operational efficiency within an organization. By following a structured approach—identifying
needs, conducting risk assessments, defining clear objectives, and ensuring effective
communication and enforcement—organizations can create policies that protect assets and guide
employee behavior. Regular evaluations and updates ensure policies remain effective in an ever-
changing business and security landscape. A strong policy framework not only mitigates risks but
also fosters a culture of accountability and continuous improvement.
Access control policies are essential for organizations to regulate and manage who can access
specific resources, systems, and data. These policies define rules and procedures to ensure that
only authorized individuals can access sensitive information while preventing unauthorized
access. Implementing strong access control measures helps protect confidential data, maintain
operational security, and comply with regulatory requirements.
Example: A junior accountant can access financial reports but cannot approve transactions or
modify budgets.
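A toy sketch of how such role-based restrictions can be expressed in code; the roles and permissions below are illustrative assumptions, not a prescribed model.
ROLE_PERMISSIONS = {
    "junior_accountant": {"view_financial_reports"},
    "finance_manager": {"view_financial_reports", "approve_transactions", "modify_budgets"},
}

def is_allowed(role, action):
    # An action is permitted only if it appears in the role's permission set.
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("junior_accountant", "view_financial_reports")
assert not is_allowed("junior_accountant", "approve_transactions")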
Enforce Strong Password Policies: Require complex passwords and periodic password
changes.
Use Role-Based Access Control (RBAC): Assign permissions based on job roles to
improve efficiency and security.
Implement Logging and Monitoring: Continuously track access activity to detect
suspicious behavior.
Train Employees on Access Control Policies: Educate staff on security best practices
and the importance of following access rules.
Access control policies are essential for protecting sensitive data, maintaining security, and
ensuring compliance with industry regulations. By implementing appropriate access control
models, enforcing authentication mechanisms, and following best practices, organizations can
prevent unauthorized access and reduce security risks. Regular monitoring, audits, and continuous
improvement of access policies help maintain a secure and efficient access management system.
Organizations must strike a balance between security and usability to create a robust access control
framework that supports business operations while safeguarding critical assets.
Data classification and handling policies are essential for protecting an organization’s sensitive
information. These policies define how data should be categorized, stored, transmitted, and
protected based on its sensitivity and importance. Proper classification helps reduce security risks,
ensures compliance with regulations, and improves data management.
1. Public Data
Public data refers to information that is openly available and does not pose any risk to the
organization if shared or accessed by anyone. It requires minimal security measures since it is
intended for unrestricted distribution.
Characteristics of Public Data:
Freely accessible to the public.
No confidentiality or security concerns.
Disclosure does not harm the organization.
Examples of Public Data:
Company website content.
Press releases and marketing materials.
Published research papers.
Public financial statements.
Handling Requirements:
Can be stored on public websites or cloud platforms.
No encryption or access restrictions required.
Regular updates to ensure accuracy.
2. Internal Data
Internal data is intended for use within the organization and is not meant for public access. While
not highly sensitive, unauthorized disclosure could cause minor operational risks.
Characteristics of Internal Data:
Restricted to employees and authorized personnel.
Not intended for public distribution.
Unauthorized access could cause minor disruptions.
Examples of Internal Data:
Employee handbooks and company policies.
Internal reports and meeting minutes.
Business operation procedures.
Internal emails and memos.
Handling Requirements:
Stored on secured internal systems or intranet.
Access restricted to employees or specific teams.
Basic security measures such as password protection.
Controlled sharing within the organization.
3. Confidential Data
Confidential data includes sensitive information that, if accessed by unauthorized individuals,
could cause harm to the organization, its employees, or customers. This level of data requires
strong security controls to prevent data breaches.
Data handling policies are essential for organizations to ensure the secure processing, storage,
transmission, and disposal of data based on its classification. These policies define how data should
be managed at every stage of its lifecycle to prevent unauthorized access, data breaches, and
compliance violations. By implementing strong data handling policies, organizations can protect
sensitive information, comply with regulations, and maintain the trust of stakeholders.
Data must be transmitted securely to prevent unauthorized interception. Public data can be shared
freely, while internal data should only be exchanged through company-approved channels.
Confidential and highly confidential data require encryption before transmission, ensuring data
remains protected even if intercepted. Secure file transfer protocols (SFTP), virtual private
networks (VPNs), and encrypted emails are commonly used for transmitting sensitive data.
Organizations should also implement data loss prevention (DLP) tools to prevent employees
from accidentally or intentionally sharing sensitive information outside the organization.
Payment Card Industry Data Security Standard (PCI DSS): Regulates secure handling
of credit card information.
ISO 27001: International standard for information security management.
Non-compliance can result in legal penalties, financial losses, and reputational damage.
Organizations should conduct regular compliance audits and update policies to align with changing
regulations.
Data handling policies play a critical role in protecting an organization's information assets. By
implementing secure storage, controlled access, encrypted transmission, and proper disposal
methods, organizations can mitigate risks and prevent data breaches. Compliance with industry
regulations ensures legal protection, while employee training and monitoring enhance overall data
security. As cyber threats continue to evolve, organizations must regularly update their data
handling policies to maintain a strong security posture and safeguard sensitive information.
An Incident Response Plan (IRP) is a structured framework that organizations use to detect,
respond to, and recover from cybersecurity incidents. It ensures that security teams can minimize
damage, restore operations quickly, and strengthen defenses to prevent future attacks.
1. Preparation involves establishing incident response policies, assembling and training the response team, and putting the necessary tools and communication plans in place before an incident occurs.
2. Identification focuses on detecting and confirming security incidents using logs, alerts, and
reports. Organizations must classify incidents based on severity, gather initial evidence, and
determine the appropriate response to contain the threat effectively.
3. Containment aims to prevent the incident from spreading by isolating affected systems and
accounts. Short-term containment may involve disconnecting compromised devices, while long-
term containment includes applying patches and strengthening access controls.
4. Eradication ensures that the root cause of the incident is removed from the environment. This
step involves scanning for hidden threats, cleaning affected systems, and implementing security
measures to prevent recurrence.
5. Recovery involves restoring affected systems and verifying that normal operations resume
securely. Organizations should closely monitor recovered systems for any lingering threats and
ensure that all vulnerabilities have been addressed before fully reactivating services.
6. Lessons Learned is the final step, where the incident is analyzed to identify weaknesses and
improve future response efforts. A post-incident review helps update security policies, enhance
training, and strengthen defenses to prevent similar incidents in the future.
Components of an IRP
Example:
Conduct regular incident response drills (tabletop exercises).
Implement automated threat detection & response tools.
d. Data Protection and Privacy
Employees are educated on handling sensitive information, encrypting data, and complying
with data protection laws.
Training covers the importance of secure file sharing, data classification, and avoiding
unauthorized access.
e. Social Engineering Awareness
Employees learn about common social engineering tactics, such as pretexting, baiting, and
impersonation scams.
Training teaches employees how to verify requests for sensitive information and avoid
manipulation by cybercriminals.
f. Mobile Device and Remote Work Security
Training covers security risks associated with mobile devices, such as using unsecured Wi-
Fi and downloading unverified apps.
Employees are trained on secure remote work practices, including VPN usage, endpoint
security, and safe collaboration tools.
g. Incident Reporting and Response
Employees are encouraged to report suspicious activities and potential security breaches to
IT or security teams.
Training explains how to identify security incidents, who to contact, and what steps to take
if a breach occurs.
h. Physical Security Best Practices
Employees learn the importance of securing workstations, locking screens when away, and
protecting access badges.
Training includes guidance on handling USB drives, avoiding tailgating, and securing
printed documents.
Security awareness training programs play a vital role in reducing cyber risks and strengthening
an organization’s security posture. By educating employees on cybersecurity best practices,
organizations can prevent data breaches, improve compliance, and foster a security-conscious
culture. Implementing a structured and engaging training program ensures that employees remain
vigilant against cyber threats and contribute to a safer digital environment.
Regular security audits and vulnerability scans must be conducted to ensure compliance.
Non-compliance can lead to financial penalties, loss of payment processing privileges, and
reputational damage.
Objective: Identifying relevant laws, such as GDPR (for data privacy), HIPAA (for
healthcare), and PCI-DSS (for financial transactions), ensures the company understands its
legal obligations based on its industry and geography.
5. Train Employees
Training: Regular compliance training ensures that employees understand the regulations
relevant to their roles and responsibilities.
Importance: Employee awareness is critical in preventing violations, reducing risks, and
fostering a culture of compliance within the organization.
6. Set Up Monitoring
Monitoring Mechanisms: Establish continuous monitoring systems to track compliance
activities and detect potential violations early.
Key Outcome: Proactive monitoring allows businesses to identify and address compliance
breaches before they escalate, ensuring timely corrective actions.
8. Document Controls
Documentation: Businesses must maintain detailed records of compliance-related
activities, including policies, procedures, and audit reports.
Significance: Proper documentation serves as evidence of compliance during external
audits and ensures that regulatory authorities can verify adherence to laws.
Compliance and regulatory policies are essential for organizations to ensure legal and secure
operations. Adhering to industry regulations such as GDPR, HIPAA, and PCI-DSS helps protect
sensitive data, avoid financial penalties, and enhance customer trust. Implementing strong security
policies, conducting regular audits, and training employees on compliance best practices are key
to maintaining regulatory adherence. Businesses should continuously review and update their
compliance strategies to stay ahead of evolving cybersecurity threats and regulatory requirements.
Real-Time Threat Detection: Organizations can identify security threats immediately and
take action before significant damage occurs.
Improved Compliance Management: Continuous monitoring ensures that organizations
consistently adhere to industry regulations and security policies.
Faster Incident Response: Security teams can respond to cyber threats quickly using real-
time monitoring insights.
Reduced Attack Surface: Proactive monitoring helps minimize vulnerabilities by
continuously assessing security risks.
Minimized Downtime: Security threats are detected early, preventing disruptions to
business operations.
Organizations use these tools to ensure compliance with cloud security best practices and
prevent unauthorized access.