DS-UNIT1 NOTES

E2UC507T - DISTRIBUTED COMPUTING

UNIT-1

TRENDS IN DISTRIBUTED SYSTEMS AND SYSTEM MODEL


SYLLABUS:
Trends in Distributed Systems - Resource Sharing – Challenges - Introduction to Physical
Models - Architectural Models - Fundamental models - Types of Networks – Network
Principles - Internet Protocols.

Distributed systems
A distributed system consists of a collection of autonomous computers, connected
through a network and distribution middleware, which enables computers to coordinate their
activities and to share the resources of the system, so that users perceive the system as a single,
integrated computing facility.
Common Characteristics

Certain common characteristics can be used to assess distributed systems:

 Resource Sharing
 Openness
 Concurrency
 Scalability
 Fault Tolerance
 Transparency

Resource Sharing

 Ability to use any hardware, software or data anywhere in the system.
 A resource manager controls access, provides a naming scheme and controls
concurrency.
 A resource sharing model (e.g. client/server or object-based) describes how
resources are provided, how they are used, and how provider and user interact
with each other.

Openness

 Openness is concerned with extensions and improvements of distributed systems.
 Detailed interfaces of components need to be published.
 New components have to be integrated with existing components.
 Differences in data representation of interface types on different processors
(of different vendors) have to be resolved.

Concurrency

Components in distributed systems are executed in concurrent processes.

 Components access and update shared resources (e.g. variables, databases,
device drivers).
 Integrity of the system may be violated if concurrent updates are not
coordinated.
o Example: inconsistent analysis

Scalability

 Adaptation of distributed systems to
• accommodate more users
• respond faster (this is the hard one)
 Usually done by adding more and/or faster processors.
 Components should not need to be changed when the scale of a system increases.
 Design components to be scalable.

Fault Tolerance

Hardware, software and networks fail!

 Distributed systems must maintain availability even at low levels of
hardware/software/network reliability.
 Fault tolerance is achieved by
• recovery
• redundancy

Transparency

Distributed systems should be perceived by users and application programmers
as a whole rather than as a collection of cooperating components.

• Transparency has different dimensions that were identified by ANSA.
• These represent various properties that distributed systems should have.
Access Transparency
Enables local and remote information objects to be accessed using identical operations.

• Example: File system operations in NFS.


• Example: Navigation in the Web.
• Example: SQL Queries

Location Transparency
Enables information objects to be accessed without knowledge of their location.

• Example: File system operations in NFS


• Example: Pages in the Web
• Example: Tables in distributed databases

Concurrency Transparency

Enables several processes to operate concurrently using shared information
objects without interference between them.

• Example: NFS
• Example: Automatic teller machine network
• Example: Database management system

Replication Transparency

Enables multiple instances of information objects to be used to increase
reliability and performance without knowledge of the replicas by users or
application programs.

• Example: Distributed DBMS


• Example: Mirroring Web Pages.

Failure Transparency

• Enables the concealment of faults.
• Allows users and applications to complete their tasks despite the failure of
other components.
• Example: Database Management System

Migration Transparency

Allows the movement of information objects within a system without affecting
the operations of users or application programs.

• Example: NFS
• Example: Web Pages

Performance Transparency
Allows the system to be reconfigured to improve performance as loads vary.

• Example: Distributed make.

Scaling Transparency
Allows the system and applications to expand in scale without change to the
system structure or the application algorithms.

• Example: World Wide Web
• Example: Distributed Database


Types of Networks
The networks used in distributed systems are built from a variety of:
• transmission media, including wire, cable, fibre and wireless channels;
• hardware devices, including routers, switches, bridges, hubs, repeaters and
network interfaces; and
• software components, including protocol stacks, communication handlers and
drivers.

• The resulting functionality and performance available to distributed system
and application programs is affected by all of these.
• The network performance parameters of primary importance are the latency and
the point-to-point data transfer rate.


• Latency is the delay that occurs after a send operation is executed and before
data starts to arrive at the destination computer.

• It can be measured as the time required to transfer an empty message.


• Here we are considering only network latency, which forms a part of the
process-to-process latency

• Data transfer rate is the speed at which data can be transferred between two
computers in the network once transmission has begun (Bits per Sec)
• Message transmission time = latency + length ⁄ data transfer rate
• Longer messages have to be segmented, and the transmission time is the sum
of the times for the segments.
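The formula above can be sketched as a small helper. This is an illustrative model only: it assumes each segment incurs the network latency again, and the parameter values in the test call are arbitrary.

```python
def transmission_time(latency_s, length_bits, rate_bps, segment_bits=None):
    """Message transmission time = latency + length / data transfer rate.

    If the message exceeds one segment, it is split and the total time
    is the sum of the per-segment times (each segment incurs the
    latency again under this simple model).
    """
    if segment_bits is None or length_bits <= segment_bits:
        return latency_s + length_bits / rate_bps
    n = -(-length_bits // segment_bits)  # ceiling division: segment count
    return n * latency_s + length_bits / rate_bps

# 1 Mbit message, 100 Mbps link, 5 ms latency, with and without segmentation
unsegmented = transmission_time(0.005, 1_000_000, 100_000_000)
segmented = transmission_time(0.005, 1_000_000, 100_000_000, segment_bits=500_000)
```

Segmenting doubles the latency contribution here (two segments, each paying the 5 ms delay) while the serialisation term, length ⁄ rate, is unchanged.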

Personal area networks (PANs)


• PANs are a subcategory of local networks in which the various digital devices carried
by a user are connected by a low-cost, low-energy network.
• While wired PANs exist, wireless personal area networks (WPANs) are of
increasing importance due to the number of personal devices such as mobile
phones, tablets, digital cameras, music players and so on that are now carried
by many people.

Local area networks (LANs)


1. LANs carry messages at relatively high speeds between computers connected by a
single communication medium, such as twisted copper wire, coaxial cable or optical
fibre.
2. A segment is a section of cable that serves a department or a floor of a building and
may have many computers attached.

3. No routing of messages is required within a segment, since the medium provides direct
connections between all of the computers connected to it.
4. In local area networks, the total system bandwidth is high and latency is low, except
when message traffic is very high

Wide area networks (WANs)


1. WANs carry messages at lower speeds between nodes that are often in different
organizations and may be separated by large distances.

2. They may be located in different cities, countries or continents. The communication


medium is a set of communication circuits linking a set of dedicated computers called
routers.
3. They manage the communication network and route messages or packets to their
destinations. In most networks, the routing operations introduce a delay at each point
in the route,
4. so the total latency for the transmission of a message depends on the route that it follows
and the traffic loads in the various network segments that it traverses.

Metropolitan area networks (MANs)


1. This type of network is based on the high bandwidth copper and fibre optic cabling
recently installed in some towns and cities for the transmission of video, voice and other
data over distances of up to 50 kilometres.

2. A variety of technologies have been used to implement the routing of data in MANs,
ranging from Ethernet to ATM.
3. The DSL (Digital Subscriber Line) and cable modem connections now available in
many countries are an example.
4. DSL typically uses ATM switches located in telephone exchanges to route digital data
onto twisted pairs of copper wire.

Wireless local area networks (WLANs)


1. WLANs are designed for use in place of wired LANs to provide connectivity for mobile
devices, or simply to remove the need for a wired infrastructure.

2. They connect computers within homes and office buildings to each other and
to the Internet. They are in widespread use in several variants of the IEEE
802.11 standard (WiFi).

Wireless metropolitan area networks (WMANs)


1. The IEEE 802.16 WiMAX standard is targeted at this class of network. It aims to
provide an alternative to wired connections to home and office buildings

Wireless wide area networks (WWANs)


1. Most mobile phone networks are based on digital wireless network technologies such
as the GSM (Global System for Mobile communication) standard.
2. The cellular networks mentioned above offer relatively low data rates (9.6
to 33 kbps), but 'third generation' (3G) networks offer substantially higher
rates.
3. The underlying 3G technology is referred to as UMTS (Universal Mobile
Telecommunications System). A path has also been defined to evolve UMTS
towards 4G data rates of up to 100 Mbps.

Network Principles

• The basis for all computer networks is the packet-switching technique.


• Packets are queued in a buffer and transmitted when the link is available.
• Communication is asynchronous: messages arrive at their destination after a
delay that varies depending upon the time that packets take to travel through
the network.
• This contrasts with circuit-switching technology, which underlies
conventional telephony.

Data streaming
1. The transmission and display of audio and video in real time is referred to as streaming.
2. It requires much higher bandwidths than most other forms of communication in
distributed systems.

3. The timely delivery of audio and video streams depends upon the
availability of connections with adequate quality of service; bandwidth,
latency and reliability must all be considered.

4. Ideally, adequate quality of service should be guaranteed. ATM networks are designed
to provide high bandwidth and low latencies and to support QoS by the reservation of
network resources.
Circuit switching
1. At one time telephone networks were the only telecommunication networks.
2. When a caller dialled a number, the pair of wires from her phone to the
local exchange was connected by an automatic switch at the exchange to the
pair of wires connected to the other party's phone.

3. For a long-distance call the process was similar but the connection would be switched
through a number of intervening exchanges to its destination.
4. This system is sometimes referred to as the plain old telephone system, or POTS. It is
a typical circuit-switching network.

Broadcast
1. Broadcasting is a transmission technique that involves no switching.

2. Everything is transmitted to every node, and it is up to potential receivers to notice


transmissions addressed to them.
3. Some LAN technologies, including Ethernet, are based on broadcasting.

4. Wireless networking is necessarily based on broadcasting, but in the absence of fixed


circuits the broadcasts are arranged to reach nodes grouped in cells

Protocols
1. The term protocol is used to refer to a well-known set of rules and formats to be used
for communication between processes in order to perform a given task.
2. The definition of a protocol has two important parts to it:
• a specification of the sequence of messages that must be exchanged;
• a specification of the format of the data in the messages.
3. A protocol is implemented by a pair of software modules located in the
sending and receiving computers. For example, a transport protocol transmits
messages of any length from a sending process to a receiving process.

Internet Protocol (IP)


The Internet Protocol (IP), which is pivotal among computer network protocols,
is responsible for the transmission of data packets to and from devices
connected to the Internet or any other network. Moreover, it provides the
addressing and routing mechanisms that devices require for their
communications. IP addresses are the unique identifiers given to each device
on a network so that data packets can be routed to their receivers. IP
operates at the network layer of the OSI model. Consequently, IP operates
together with other protocols, including TCP (Transmission Control Protocol)
and UDP (User Datagram Protocol), to provide reliable and efficient
communication between devices.

Types of Internet Protocol


Internet Protocols are of different types having different uses. These are mentioned below:
1. TCP/IP(Transmission Control Protocol/ Internet Protocol)
2. SMTP(Simple Mail Transfer Protocol)
3. PPP(Point-to-Point Protocol)

4. FTP (File Transfer Protocol)


5. SFTP(Secure File Transfer Protocol)

6. HTTP(Hyper Text Transfer Protocol)


7. HTTPS(HyperText Transfer Protocol Secure)

8. TELNET(Terminal Network)
9. POP3(Post Office Protocol 3)

10. IPv4
11. IPv6

12. ICMP
13. UDP
14. IMAP

15. SSH
16. Gopher

1. TCP/IP(Transmission Control Protocol/ Internet Protocol)


These are a set of standard rules that allow different types of computers to
communicate with each other. The IP protocol ensures that each computer
connected to the Internet has a unique identifier called the IP address. TCP
specifies how data is exchanged over the internet and how it should be broken
into IP packets. It also makes sure that the packets carry information about
the source of the message data, the destination of the message data, and the
sequence in which the message data should be re-assembled, and it checks
whether the message has been sent correctly to the specific destination. TCP
is also known as a connection-oriented protocol.
For more details, please refer to the TCP/IP Model article.
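TCP's connection-oriented, ordered delivery can be seen in a minimal loopback echo exchange, sketched here with Python's standard socket module. The address, port choice, and message are arbitrary illustrative values.

```python
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()   # wait for the client's connection
    with conn:
        data = conn.recv(1024)       # TCP delivers the bytes in order
        conn.sendall(data)           # echo them back to the client

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # three-way handshake happens here
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello'
```

The explicit `connect` before any data moves is what makes TCP connection-oriented, in contrast to the UDP example later in these notes.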

2. SMTP(Simple Mail Transfer Protocol)


This protocol is important for sending and distributing outgoing emails. It
uses the header of the mail to get the email id of the receiver and enters the
mail into the queue of outgoing mail. As soon as it delivers the mail to the
receiving email id, it removes the email from the outgoing list. The message,
or electronic mail, may contain text, video, images, etc. It helps in setting
up some communication server rules.
3. PPP(Point-to-Point Protocol)
It is a communication protocol that is used to create a direct connection between two
communicating devices. This protocol defines the rules using which two devices will
authenticate with each other and exchange information with each other. For example, A user
connects his PC to the server of an Internet Service Provider and also uses PPP. Similarly, for
connecting two routers for direct communication it uses PPP.

4. FTP (File Transfer Protocol)


This protocol is used for transferring files from one system to another. It
works on a client-server model. When a machine requests a file transfer from
another machine, FTP sets up a connection between the two and authenticates
each using an ID and password. Then the desired file transfer takes place
between the machines.

5. SFTP(Secure File Transfer Protocol)


SFTP which is also known as SSH FTP refers to File Transfer Protocol (FTP) over Secure Shell
(SSH) as it encrypts both commands and data while in transmission. SFTP acts as an extension
to SSH and encrypts files and data then sends them over a secure shell data stream. This
protocol is used to remotely connect to other systems while executing commands from the
command line.

6. HTTP(Hyper Text Transfer Protocol)


This protocol is used to transfer hypertext over the internet and is defined
by the World Wide Web (WWW) for information transfer. It specifies how the
information needs to be formatted and transmitted, and it defines the various
actions web browsers should take in response to calls made to access a
particular web page. Whenever a user opens their web browser, the user
indirectly uses HTTP, as this is the protocol being used to share text,
images, and other multimedia files on the World Wide Web.

Note: Hypertext refers to the special format of the text that can contain links to other texts.

7. HTTPS(HyperText Transfer Protocol Secure)


HTTPS is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure
communication over a computer network with the SSL/TLS protocol for encryption and
authentication. So, generally, a website has an HTTP protocol but if the website is such that it
receives some sensitive information such as credit card details, debit card details, OTP, etc then
it requires an SSL certificate installed to make the website more secure. So, before entering any
sensitive information on a website, we should check if the link is HTTPS or not. If it is not
HTTPS then it may not be secure enough to enter sensitive information.

8. TELNET(Terminal Network)
TELNET is a standard TCP/IP protocol that provides a virtual terminal service.
It enables one local machine to connect with another. The computer being
connected to is called the remote computer, and the one connecting is called
the local computer. A TELNET session lets us display on the local computer
anything being performed on the remote computer. It operates on the
client/server principle: the local computer runs the telnet client program
while the remote computer runs the telnet server program.

9. POP3(Post Office Protocol 3)


POP3 stands for Post Office Protocol version 3. It has two Message Access
Agents (MAAs), a client MAA and a server MAA, for accessing the messages in
the mailbox. This protocol helps us retrieve and manage emails from the
mailbox on the receiver's mail server to the receiver's computer. It operates
between the receiver and the receiver's mail server and can be called a
one-way client-server protocol. POP3 works on two ports: port 110 and port
995 (POP3 over SSL/TLS).

10. IPv4
The fourth and initially widely used version of the Internet Protocol is called IPv4 (Internet
Protocol version 4). It is the most popular version of the Internet Protocol and is in charge of
distributing data packets throughout the network. Maximum unique addresses for IPv4 are
4,294,967,296 (2^32), which are possible due to the use of 32-bit addresses. The network
address and the host address are the two components of each address. The host address
identifies a particular device within the network, whereas the network address identifies the
network to which the host belongs. In the “dotted decimal” notation, which is the standard for
IPv4 addresses, each octet (8 bits) of the address is represented by its decimal value and
separated by a dot (e.g. 192.168.1.1).
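The dotted-decimal notation and the network/host split can be checked with Python's standard `ipaddress` module, reusing the 192.168.1.1 example above and assuming an illustrative /24 netmask:

```python
import ipaddress

# An IPv4 address is just a 32-bit integer written one octet at a time.
addr = ipaddress.ip_address("192.168.1.1")
assert int(addr) == 0xC0A80101   # 192<<24 | 168<<16 | 1<<8 | 1

# A /24 netmask makes the first three octets the network address
# and the last octet the host address.
net = ipaddress.ip_network("192.168.1.0/24")
print(addr in net)       # True: host 1 on the 192.168.1.0 network
print(net.num_addresses) # 256 addresses fit in a /24
```

The same module accepts IPv6 strings, which is a convenient way to compare the 32-bit and 128-bit address spaces discussed next.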

11. IPv6
The most recent version of the Internet Protocol, IPv6, was created to address the IPv4
protocol’s drawbacks. A maximum of 4.3 billion unique addresses are possible with IPv4’s 32-
bit addresses. Contrarily, IPv6 uses 128-bit addresses, which enable a significantly greater
number of unique addresses. This is significant because IPv4 addresses were running out and
there are an increasing number of devices that require internet access. Additionally, IPv6 offers
enhanced security features like integrated authentication and encryption as well as better
support for mobile devices. IPv6 support has spread among websites and internet service
providers, and it is anticipated to gradually displace IPv4 as the main internet protocol.

For more details, please refer to the Differences between IPv4 and IPv6 article.

12. ICMP
ICMP (Internet Control Message Protocol) is a network protocol that is used to send error
messages and operational information about network conditions. It is an integral part of the
Internet Protocol (IP) suite and is used to help diagnose and troubleshoot issues with network
connectivity. ICMP messages are typically generated by network devices, such as routers, in
response to errors or exceptional conditions encountered in forwarding a datagram. Some
examples of ICMP messages include:

 Echo Request and Echo Reply (ping)


 Destination Unreachable
 Time Exceeded
 Redirect

ICMP can also be used by network management tools to test the reachability of a host and
measure the round-trip time for packets to travel from the source to the destination and back.
It should be noted that ICMP is not a secure protocol; it can be abused in
some types of network attacks, such as DDoS amplification.

13. UDP
UDP (User Datagram Protocol) is a connectionless, unreliable transport layer protocol. Unlike
TCP, it does not establish a reliable connection between devices before transmitting data, and
it does not guarantee that data packets will be received in the order they were sent or that they
will be received at all. Instead, UDP simply sends packets of data to a destination without any
error checking or flow control. UDP is typically used for real-time applications such as
streaming video and audio, online gaming, and VoIP (Voice over Internet Protocol) where a
small amount of lost data is acceptable and low latency is important. UDP is faster than TCP
because it has less overhead. It doesn’t need to establish a connection, so it can send data
packets immediately. It also doesn’t need to wait for confirmation that the data was received
before sending more, so it can transmit data at a higher rate.
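UDP's connectionless style can be contrasted with the earlier TCP description in a short loopback sketch. Note there is no `connect` or handshake; the address and payload are arbitrary, and loopback delivery will not actually exercise UDP's possibility of loss.

```python
import socket

# Receiver: bind a datagram socket; port 0 lets the OS assign a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))

# Sender: no connection establishment, just fire a datagram at an address.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-1", recv_sock.getsockname())

data, sender = recv_sock.recvfrom(1024)  # each recvfrom yields one datagram
print(data)  # b'frame-1'
send_sock.close()
recv_sock.close()
```

Because there is no acknowledgement or retransmission here, a real network could drop, duplicate, or reorder such datagrams, which is exactly the trade-off streaming and VoIP applications accept for lower latency.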

14. IMAP
IMAP (Internet Message Access Protocol) is a protocol used for retrieving emails from a mail
server. It allows users to access and manage their emails on the server, rather than downloading
them to a local device. This means that the user can access their emails from multiple devices
and the emails will be synced across all devices. IMAP is more flexible than POP3 (Post Office
Protocol version 3) as it allows users to access and organize their emails on the server, and also
allows multiple users to access the same mailbox.

15. SSH
SSH (Secure Shell) is a protocol used for secure remote login and other secure network
services. It provides a secure and encrypted way to remotely access and manage servers,
network devices, and other computer systems. SSH uses public-key cryptography to
authenticate the user and encrypt the data being transmitted, making it much more secure than
traditional remote login protocols such as Telnet. SSH also allows for secure file transfers using
the SCP (Secure Copy) and SFTP (Secure File Transfer Protocol) protocols. It is widely used
in Unix-based operating systems and is also available for Windows. It is commonly used by
system administrators, developers, and other technical users to remotely access and manage
servers and other network devices.

16. Gopher
Gopher is a file retrieval protocol that provides downloadable files with
descriptions for easy management, retrieval, and searching of files. All the
files are arranged on a remote computer in a hierarchical manner. It is an old
protocol and is not much used nowadays.
Distributed Computing System Models
Distributed computing is a system in which processing and data storage are
distributed across multiple devices, rather than being handled by a single
central device. In this article, we will look at distributed computing system
models.

Important Topics for Distributed Computing System Models


 Types of Distributed Computing System Models
o Physical Model

o Architectural Model
o Fundamental Model

Types of Distributed Computing System Models

1. Physical Model
A physical model represents the underlying hardware elements of a distributed system. It
encompasses the hardware composition of a distributed system in terms of computers and other
devices and their interconnections. It is primarily used to design, manage, implement, and
determine the performance of a distributed system.
A physical model majorly consists of the following components:

1. Nodes
Nodes are the end devices that can process data, execute tasks, and communicate with the other
nodes. These end devices are generally the computers at the user end or can be servers,
workstations, etc.

 Nodes provision the distributed system with an interface in the presentation layer that
enables the user to interact with other back-end devices, or nodes, that can be used for
storage and database services, processing, web browsing, etc.
 Each node has an operating system, execution environment, and different
middleware requirements that facilitate communication and other vital tasks.
2. Links
Links are the communication channels between different nodes and intermediate devices.
These may be wired or wireless. Wired links or physical media are implemented using copper
wires, fiber optic cables, etc. The choice of the medium depends on the environmental
conditions and the requirements. Generally, physical links are required for high-performance
and real-time computing. Different connection types that can be implemented are as follows:
 Point-to-point links: Establish a connection and allow data transfer between only two
nodes.
 Broadcast links: It enables a single node to transmit data to multiple nodes
simultaneously.
 Multi-access links: Multiple nodes share the same communication channel to
transfer data. These require protocols to avoid interference during
transmission.

3. Middleware
This is the software installed and executed on the nodes. By running
middleware on each node, the distributed computing system achieves
decentralised control and decision-making. It handles various tasks like
communication with other nodes, resource management, fault tolerance,
synchronisation of different nodes, and security to prevent malicious and
unauthorised access.

4. Network Topology
This defines the arrangement of nodes and links in the distributed computing system. The most
common network topologies that are implemented are bus, star, mesh, ring or hybrid. Choice
of topology is done by determining the exact use cases and the requirements.

5. Communication Protocols
Communication protocols are the set of rules and procedures for transmitting
data over the links. Examples of these protocols include TCP, UDP, HTTPS,
MQTT, etc. They allow the nodes to communicate and interpret the data.

2. Architectural Model
The architectural model of a distributed computing system is the overall
design and structure of the system and how its different components are
organised to interact with each other and provide the desired functionality.
It gives an overview of how development, deployment, and operation of the
system will take place. Constructing a good architectural model is required
for efficient cost usage and highly improved scalability of the applications.

The key aspects of architectural model are:

1. Client-Server model
It is a centralised approach in which clients initiate requests for services
and servers respond by providing those services. It mainly works on the
request-response model, where the client sends a request to the server and the
server processes it and responds to the client accordingly.

 It can be achieved by using TCP/IP, HTTP protocols on the transport layer.


 This is mainly used in web services, cloud computing, database management systems
etc.
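The request-response cycle described above can be sketched with Python's standard-library HTTP server and client on the loopback interface. The handler body and response text are made-up illustrative values, not part of any real service.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                      # server processes the request...
        body = b"hello from server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)             # ...and responds to the client
    def log_message(self, *args):          # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client initiates the request; the server above answers it.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    reply = resp.read()
server.shutdown()
print(reply)  # b'hello from server'
```

The centralised nature of the model is visible here: every client must know the server's address, and the server is a single point of failure.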

2. Peer-to-peer model
It is a decentralised approach in which all the distributed computing nodes, known as peers,
are all the same in terms of computing capabilities and can both request as well as provide
services to other peers. It is a highly scalable model because the peers can join and leave the
system dynamically, which makes it an ad-hoc form of network.

 The resources are distributed and the peers need to look out for the required resources
as and when required.
 The communication is directly done amongst the peers without any intermediaries
according to some set rules and procedures defined in the P2P networks.

 The best example of this type of computing is BitTorrent.

3. Layered model
It involves organising the system into multiple layers, where each layer
provides a specific service. Each layer communicates with the adjacent layers
using certain well-defined protocols, without affecting the integrity of the
system. A hierarchical structure is obtained, where each layer abstracts the
underlying complexity of the lower layers.
4. Micro-services model
In this model, a complex application or task is decomposed into multiple
independent services, each running on a different server. Each service
performs only a single function and is focussed on a specific business
capability. This makes the overall system more maintainable, scalable, and
easier to understand. Services can be independently developed, deployed, and
scaled without affecting the other running services.
3. Fundamental Model
The fundamental model in a distributed computing system is a broad conceptual framework
that helps in understanding the key aspects of the distributed systems. These are concerned
with more formal description of properties that are generally common in all architectural
models. It represents the essential components that are required to understand a distributed
system’s behaviour. Three fundamental models are as follows:

1. Interaction Model
Distributed computing systems are full of many processes interacting with each other in highly
complex ways. Interaction model provides a framework to understand the mechanisms and
patterns that are used for communication and coordination among various processes. Different
components that are important in this model are –
 Message Passing – It deals with passing messages that may contain, data, instructions,
a service request, or process synchronisation between different computing nodes. It may
be synchronous or asynchronous depending on the types of tasks and processes.
 Publish/Subscribe Systems – Also known as pub/sub system. In this the publishing
process can publish a message over a topic and the processes that are subscribed to that
topic can take it up and execute the process for themselves. It is more important in an
event-driven architecture.

Remote Procedure Call (RPC)


It is a communication paradigm that allows a process to invoke a procedure or
method in a remote process as if it were a local procedure call. The client
process makes a procedure call using RPC, and the message is passed to the
required server process using communication protocols. These message-passing
protocols are abstracted away, and the result, once obtained from the server
process, is sent back to the client process so it can continue execution.
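The marshalling and dispatch steps of RPC can be sketched in-process: the client stub serialises the call, a byte string stands in for the network, and the server stub dispatches to a local procedure. The function names (`add`, `rpc_call`) are invented for this sketch.

```python
import json

def add(a, b):
    return a + b

server_procedures = {"add": add}   # procedures the "server" exposes

def server_dispatch(request_bytes):
    req = json.loads(request_bytes)                   # unmarshal the request
    result = server_procedures[req["method"]](*req["params"])
    return json.dumps({"result": result}).encode()    # marshal the reply

def rpc_call(method, *params):
    # The client calls this as if it were a local procedure; marshalling
    # and the "network" hop are hidden inside.
    request = json.dumps({"method": method, "params": params}).encode()
    reply = server_dispatch(request)   # stands in for send/receive
    return json.loads(reply)["result"]

print(rpc_call("add", 2, 3))  # 5
```

Real RPC frameworks replace `server_dispatch` with a network round trip and generate the client stub automatically, but the marshal/dispatch/unmarshal shape is the same.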

2. Failure Model
This model addresses the faults and failures that occur in the distributed computing system. It
provides a framework to identify and rectify the faults that occur or may occur in the system.
Fault tolerance mechanisms are implemented so as to handle failures by replication and error
detection and recovery methods. Different failures that may occur are:
 Crash failures – A process or node unexpectedly stops functioning.

 Omission failures – A message is lost, resulting in the absence of required
communication.
 Timing failures – The process deviates from its expected time quantum and may lead
to delays or unsynchronised response times.
 Byzantine failures – The process may send malicious or unexpected messages that
conflict with the set protocols.
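Crash failures are often suspected in practice via heartbeat timeouts: a node that has not sent a heartbeat within the timeout is assumed to have crashed. The node names, timestamps, and 5-second timeout below are arbitrary illustrative values.

```python
TIMEOUT = 5.0  # seconds without a heartbeat before suspecting a crash

# Last heartbeat time observed for each node (illustrative clock values).
last_heartbeat = {"node-a": 100.0, "node-b": 96.5, "node-c": 92.0}

def suspected_crashed(now):
    """Return nodes whose last heartbeat is older than TIMEOUT."""
    return sorted(node for node, t in last_heartbeat.items()
                  if now - t > TIMEOUT)

print(suspected_crashed(now=100.0))  # ['node-c']
```

Note that such a detector can be wrong: a slow network (an omission or timing failure) makes a live node look crashed, which is why distributed systems talk about *suspecting* failures rather than detecting them with certainty.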

3. Security Model
Distributed computing systems may suffer malicious attacks, unauthorised access and data
breaches. Security model provides a framework for understanding the security requirements,
threats, vulnerabilities, and mechanisms to safeguard the system and its resources. Various
aspects that are vital in the security model are:

 Authentication: It verifies the identity of the users accessing the system. It ensures that
only the authorised and trusted entities get access. It involves –

o Password-based authentication: Users provide a unique password to prove
their identity.
o Public-key cryptography: Entities possess a private key and a corresponding
public key, allowing verification of their authenticity.
o Multi-factor authentication: Multiple factors, such as passwords, biometrics,
or security tokens, are used to validate identity.

 Encryption:
o It is the process of transforming data into a format that is unreadable without a
decryption key. It protects sensitive information from unauthorized access or
disclosure.

 Data Integrity:
o Data integrity mechanisms protect against unauthorised modifications or
tampering of data. They ensure that data remains unchanged during storage,
transmission, or processing. Data integrity mechanisms include:

o Hash functions – Generating a hash value or checksum from data to verify
its integrity.
o Digital signatures – Using cryptographic techniques to sign data and verify
its authenticity and integrity.
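The hash-function check above can be sketched with Python's standard `hashlib`. The payload strings are invented examples; any change to the data changes the digest, so tampering in transit is detectable (though a hash alone does not prove who sent the data — that requires a signature).

```python
import hashlib

# Sender computes a digest and transmits it alongside the data.
data = b"transfer 100 to account 42"        # illustrative payload
digest = hashlib.sha256(data).hexdigest()

# Receiver recomputes the digest over the received bytes and compares.
assert hashlib.sha256(data).hexdigest() == digest   # data intact

# A tampered message produces a different digest, so the check fails.
tampered = b"transfer 900 to account 42"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False
```

A digital signature adds authenticity on top of this: the sender signs the digest with a private key so the receiver can verify both integrity and origin with the matching public key.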
