SS 3 LESSON NOTE ON ICT

LESSON NOTE FOR WEEK 2 ENDING

SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY


CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC INDEXES: PRIMARY AND SECONDARY

OBJECTIVE: By the end of the lesson, students should be able to:


1. Define data management
2. Explain indexes
3. Identify types of indexes
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.
Online: https://www.codecademy.com/article/sql-indexes

INTRODUCTION: WHAT ARE PRIMARY AND SECONDARY INDEXES?


Primary and secondary indexes are important components of databases that help in
efficient data retrieval. Let's understand what primary and secondary indexes are and how
they differ from each other.

PRIMARY INDEX
A primary index is an index built on the field (or set of fields) that contains the
table's unique primary key, and it is guaranteed not to contain duplicates. It is
typically created when the table is created and is used as the primary means of
accessing data in the database. It ensures fast access to data based on the primary
key. There are two types of primary index: the dense index and the sparse index.
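For illustration, the sketch below uses standard SQL; the table and column names are hypothetical. In most relational database systems, declaring a primary key causes a unique (primary) index to be built on that column automatically:

CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,  -- most DBMSs automatically build the primary index on this column
    FirstName  VARCHAR(50),
    LastName   VARCHAR(50),
    Department VARCHAR(50)
);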

SECONDARY INDEX
A secondary index provides an alternate means of accessing data in a database, in
addition to the primary index. Unlike the primary index, a secondary index is not
based on the primary key and may have duplicates. It is created on non-key fields
of the table and allows for efficient retrieval of data based on those fields.
For example, if you have a database of employees and you want to retrieve data
based on the employee's name, you can create a secondary index on the name
field. This secondary index will help in faster retrieval of data based on the
employee's name.
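As a minimal sketch in standard SQL (assuming the hypothetical Employees table above), a secondary index is created explicitly on a non-key name field:

CREATE INDEX idx_employees_lastname ON Employees (LastName);  -- secondary index on a non-key field
-- Queries that filter on LastName can now use the index instead of scanning the whole table:
SELECT * FROM Employees WHERE LastName = 'Doe';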
Differences between Primary and Secondary Indexes:
1. Primary indexes are based on the primary key of the table, while
secondary indexes are based on non-key fields.
2. Primary indexes are guaranteed not to contain duplicates, while secondary
indexes may have duplicates.
3. Primary indexes are automatically created when the table is created with
its primary key, while secondary indexes must be explicitly created by the
database administrator or developer.
4. Primary indexes are used as the primary means of accessing data in the
database, while secondary indexes provide an alternate means of accessing
data.
In summary, primary indexes are based on the primary key and provide fast access to
data, while secondary indexes are based on non-key fields and provide an alternate
means of accessing data based on those fields.

COMPOSITE SEARCH KEYS AND SECONDARY INDEXES


Composite search keys and secondary indexes are important concepts in database
indexing.
Composite Search Key:
A composite search key is a combination of multiple columns used to create an
index. Instead of indexing a single column, a composite search key allows you to
index multiple columns together. This can be useful when you frequently query
the database using multiple columns in the WHERE clause.
For example, if you have a table with columns like `name`, `age`, `pan_no`, and
`phone_no`, you can create a composite search key on a combination of these
columns. This index will help speed up queries that involve searching or filtering
based on these columns.
Secondary Indexes:
In the context of composite search keys, a secondary index can be created on the
composite search key itself. This means that you can create an index on the
combination of multiple columns used in the composite search key. This
secondary index will help improve the performance of queries that involve
searching or filtering based on the composite search key columns.
Here is an example of how a composite search key can be used in a query:
Let's say we have a table called "Employees" with the following columns:
"EmployeeID", "FirstName", "LastName", and "Department". We want to search
for an employee using a composite key consisting of the "FirstName" and
"LastName" columns.
The query would look like this:
SELECT * FROM Employees WHERE FirstName = 'John' AND LastName = 'Doe';
This query will return all the rows from the "Employees" table where the
"FirstName" is 'John' and the "LastName" is 'Doe'. By using the composite key,
we can uniquely identify the specific employee we are searching for.
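For the query above to benefit from an index, a composite secondary index can be created on the two columns together. A minimal sketch in standard SQL (the index name is hypothetical):

CREATE INDEX idx_employees_fullname ON Employees (FirstName, LastName);
-- The WHERE clause on FirstName and LastName can now be answered from this
-- composite index rather than by scanning every row of the table.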

EVALUATION:
1. What is indexing?
2. Explain primary and secondary indexes
3. Why do we use composite search keys in indexing?
LESSON NOTE FOR WEEK 3 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC DATABASE SECURITY

OBJECTIVE: By the end of the lesson, students should be able to:


1. Define database security
2. Identify the challenges of database security
3. Explain the importance of database security
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.
Online: https://www.sumologic.com/blog/what-is-database-security/

INTRODUCTION: DEFINITION OF DATABASE SECURITY


Database security refers to the measures and practices implemented to protect databases
from both internal and external threats. It involves safeguarding the database itself, the
data it contains, the database management system, and the applications that access it.
With the increasing number of data breaches and the potential damage they can cause to a
company’s reputation and compliance with regulations, database security has become
essential for organizations.
Challenges of Database Security:
1. Internet-based attacks: Hackers constantly develop new methods to infiltrate
databases and steal data, making it crucial to have strong security measures in
place to prevent security breaches
2. Insider threats: Authorized users with legitimate access to databases can
misuse their credentials and compromise data security. It is challenging to
guard against such exfiltration vulnerabilities
3. Access control: Ensuring that users with legitimate access to databases only
have access to the data they need for their work is essential to minimize the
risk of data compromise
Layers of Database Security:
1. Database Level: Security measures implemented within the database itself,
such as data masking, tokenization, and encryption, help protect the data at
rest
2. Access Level: Access control lists and permissions are used to control who
can access specific data or systems containing the data
3. Perimeter Level: Firewalls and virtual private networks (VPNs) are deployed
to control access to databases from external networks
Best Practices for Database Security:
1. Physical database security: Protecting the physical hardware where the data
is stored, implementing backup and disaster recovery measures, and
separating web servers and applications from the database server are crucial
practices.
2. Web applications and firewalls: Implementing database firewalls and using
application access management software for web applications help prevent
unauthorized access.
3. Database encryption: Encrypting data at rest and in motion ensures that even
if someone gains access to the data, it remains meaningless without the proper
decryption keys.
4. Password and permission management: Implementing strong password
policies, using multi-factor authentication, and regularly updating access and
permission lists help maintain database security (see the sketch after this list).
5. Isolating sensitive databases: Isolating sensitive databases makes it difficult
for unauthorized users to access them, reducing the risk of data breaches
6. Database auditing: Regularly auditing database log files helps identify
unauthorized access and reduces the impact of breaches by alerting
administrators
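As a hedged sketch of practice 4, the statements below use standard SQL privilege commands; the user name, password, and table are hypothetical, and CREATE USER syntax varies between database systems:

CREATE USER clerk IDENTIFIED BY 'Str0ng!Passw0rd';  -- exact syntax varies by DBMS
GRANT SELECT ON Employees TO clerk;                 -- least privilege: read-only access
-- Permission lists should be reviewed regularly; access that is no longer needed is revoked:
REVOKE SELECT ON Employees FROM clerk;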

IMPORTANCE OF DATABASE SECURITY


Database security is of utmost importance in today's digital landscape. It plays a critical
role in protecting sensitive data, ensuring regulatory compliance, and safeguarding the
reputation of organizations.
Here are some key reasons why database security is important:
1. Data Protection: Databases store vast amounts of sensitive and confidential
information, including personal data, financial records, and intellectual
property. Implementing robust security measures helps prevent unauthorized
access, data breaches, and the potential misuse or theft of valuable
information
2. Compliance with Regulations: Organizations are subject to various data
protection regulations, such as the General Data Protection Regulation
(GDPR) and the California Consumer Privacy Act (CCPA). These regulations
require organizations to implement appropriate security measures to protect
personal data. Failure to comply with these regulations can result in severe
penalties and legal consequences
3. Business Continuity: Database security is crucial for ensuring business
continuity. A security breach can disrupt operations, lead to data loss or
corruption, and cause significant financial and reputational damage. By
implementing robust security measures, organizations can minimize the risk
of such incidents and ensure the availability and integrity of their data.
4. Customer Trust and Reputation: Data breaches can severely damage an
organization's reputation and erode customer trust. Customers expect their
personal information to be handled securely, and a breach can lead to loss of
customers, negative publicity, and long-term damage to the brand. By
prioritizing database security, organizations can demonstrate their
commitment to protecting customer data and maintain trust.
5. Intellectual Property Protection: Databases often contain valuable
intellectual property, trade secrets, and proprietary information. Unauthorized
access to this data can result in financial losses, competitive disadvantages,
and compromised business strategies. Implementing strong security measures
helps safeguard intellectual property and maintain a competitive edge.

EVALUATION:
1. Define database security
2. What are the challenges of database security?
3. Explain the importance of database security
4. What are the best practices in database security?
LESSON NOTE FOR WEEK 4 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC ACCESS CONTROL AND ENCRYPTION

OBJECTIVE: By the end of the lesson, students should be able to:


1. Define access control
2. Identify the types of access control
3. Define encryption
4. Identify the role of a database administrator
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.

INTRODUCTION: WHAT IS ACCESS CONTROL?


Access control is a crucial aspect of database security that ensures that only authorized
individuals or entities can access and manipulate the data within a database. It involves
implementing measures to authenticate users, determine their level of authorization, and
enforce restrictions on their actions within the database. Access control helps prevent
unauthorized access, data breaches, and misuse of sensitive information.
Types of Access Control:
1. Discretionary Access Control (DAC): In DAC, the data owner determines
access rights and permissions for individual users or groups. The owner has
discretion over granting or revoking access based on their own criteria.
2. Mandatory Access Control (MAC): MAC is a more rigid access control
model where access rights are determined by a central authority based on
security classifications and labels. Users are granted access based on their
security clearances
3. Role-Based Access Control (RBAC): RBAC grants access based on
predefined roles assigned to users. Each role has a set of permissions
associated with it, and users are assigned roles based on their job
responsibilities (see the sketch after this list).
4. Attribute-Based Access Control (ABAC): ABAC uses attributes such as
user characteristics, environmental conditions, and resource properties to
determine access. It allows for more dynamic and fine-grained access control
based on multiple factors.
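As a minimal sketch of role-based access control, the statements below use SQL role commands as supported by systems such as PostgreSQL and Oracle; the role, user, and table names are hypothetical:

CREATE ROLE payroll_officer;                           -- a role representing a job function
GRANT SELECT, UPDATE ON Employees TO payroll_officer;  -- permissions attach to the role
GRANT payroll_officer TO ada;                          -- the user gains access through the role, not directly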
Components of Access Control:
1. Authentication: Authentication verifies the identity of users attempting to
access the database. It ensures that users are who they claim to be by
validating their credentials, such as usernames and passwords, biometrics, or
multi-factor authentication.
2. Authorization: Once a user is authenticated, authorization determines the
level of access they have within the database. It involves granting or denying
permissions based on the user's role, privileges, or other attributes.
3. Access Enforcement: Access enforcement mechanisms ensure that users can
only perform actions within the database that they are authorized to do. This
includes restrictions on reading, writing, modifying, or deleting data.
4. Audit and Monitoring: Access control systems often include auditing and
monitoring capabilities to track user activities within the database. This helps
detect any unauthorized access attempts or suspicious behavior.
Benefits of Access Control in Database Security:
1. Data Protection: Access control prevents unauthorized users from accessing
sensitive data, reducing the risk of data breaches and unauthorized disclosure
2. Compliance: Implementing access control measures helps organizations meet
regulatory requirements for data protection and privacy, such as GDPR or
HIPAA
3. Least Privilege: Access control ensures that users only have access to the data
and functionalities necessary for their roles, reducing the risk of accidental or
intentional misuse
4. Accountability: Access control systems provide an audit trail of user
activities, enabling organizations to identify and hold individuals accountable
for any unauthorized actions.
5. Business Continuity: By preventing unauthorized access and protecting data
integrity, access control contributes to maintaining the availability and
reliability of the database

THE ROLE OF A DATABASE ADMINISTRATOR (DBA) IN SECURITY


The role of a database administrator (DBA) in security is crucial for ensuring the
protection and integrity of a database. DBAs play a vital role in safeguarding sensitive
data from unauthorized access, ensuring data privacy, and preventing data breaches.
Here are some key responsibilities of a DBA in security:
1. Implementing Security Measures: DBAs are responsible for implementing
various security measures to protect the database. This includes setting up user
authentication and authorization mechanisms, ensuring secure access controls,
and implementing encryption techniques to protect data at rest and in transit.
2. Database Auditing: DBAs perform regular audits of the database to identify
any security vulnerabilities or unauthorized activities. They monitor and
analyze database logs, track user activities, and investigate any suspicious or
unauthorized access attempts.
3. Patch Management: DBAs are responsible for keeping the database software
up to date with the latest security patches and updates. They regularly apply
patches to address any known security vulnerabilities and ensure the database
is protected against potential threats.
4. Security Planning and Policy Development: DBAs work closely with
security teams and management to develop security plans and policies for the
database. They assess security risks, define security requirements, and
establish guidelines and procedures to ensure compliance with industry
regulations and best practices.
5. User Training and Education: DBAs provide training and education to
database users on security best practices. They educate users on password
management, data access controls, and other security measures to promote a
culture of security awareness within the organization.
Overall, the role of a DBA in security is to ensure the confidentiality, integrity, and
availability of the database and its data. They work proactively to implement security
measures, monitor and respond to security threats, and maintain the overall security
posture of the database

ENCRYPTION
Encryption is a method of converting data into a secure and unreadable format to protect
it from unauthorized access or interception. It involves using algorithms and keys to
transform plaintext data into ciphertext, which can only be decrypted back into its
original form with the correct decryption key.
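As an illustration, the sketch below assumes PostgreSQL with its pgcrypto extension (other database systems provide different encryption functions); the passphrase, data, table, and column are hypothetical:

CREATE EXTENSION IF NOT EXISTS pgcrypto;  -- enables pgp_sym_encrypt / pgp_sym_decrypt
SELECT pgp_sym_encrypt('Confidential record', 'secret-passphrase');  -- plaintext to ciphertext
-- Decryption needs the same key; without it the stored ciphertext is unreadable:
SELECT pgp_sym_decrypt(record_cipher, 'secret-passphrase') FROM secure_table;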
Encryption serves several important purposes in information security:
1. Data Confidentiality
2. Data Integrity
3. Secure Communication
4. Compliance with Regulations
5. Protection against Data Breaches
It's important to note that encryption is not a foolproof solution, and proper key
management and secure implementation are essential. Additionally, encryption does not
protect against all types of attacks, such as attacks targeting the endpoints or
vulnerabilities in the encryption algorithms themselves.
Therefore, a comprehensive security strategy should include multiple layers of protection
in addition to encryption.
EVALUATION:
1. Define access control
2. What are the types of access control?
3. Define encryption
4. Identify the role of a database administrator
5. Encryption serves several important purposes in information security. List four
LESSON NOTE FOR WEEK 5 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC CRASH RECOVERY

OBJECTIVE: By the end of the lesson, students should be able to:


1. Define Crash Recovery
2. Explain ARIES
3. Explain analysis, redo and undo
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.

INTRODUCTION: CRASH RECOVERY


Crash recovery refers to the process of restoring a computer system or software
application to a stable and functional state after a crash or unexpected failure. A crash
typically occurs when a system or application experiences a severe error or fault that
causes it to cease functioning properly, resulting in an abrupt termination of operations.
The process of crash recovery typically involves several steps:
1. Identifying the cause of the crash
2. Analyzing any error logs or crash dumps
3. Repairing or restoring any corrupted data or files
4. Restarting the system or application in a controlled manner
In some cases, crash recovery may also involve implementing mechanisms to
prevent future crashes or mitigate the impact of crashes on the system.
The primary goal of crash recovery is to bring the database back to a state where
it reflects the most recent committed transactions and ensures data integrity.

WHAT IS ARIES?


ARIES (Algorithms for Recovery and Isolation Exploiting Semantics) is a protocol for
crash recovery in database systems, and it has been influential in the field of database
research. However, it is important to note that ARIES itself is a protocol specification
rather than a specific implementation.
Some examples of database systems that incorporate concepts from the ARIES
protocol or employ similar crash recovery techniques include:
1. IBM Db2: Database 2 is a widely used relational database management
system (RDBMS) developed by IBM. It incorporates ARIES principles such as
write-ahead logging, together with two-phase commit, to ensure crash
recovery and maintain data consistency in distributed environments.
2. Oracle RAC (Real Application Clusters): Oracle RAC is a feature of the
Oracle Database that allows multiple servers to work together as a single,
highly available database. It employs sophisticated crash recovery
mechanisms inspired by the ARIES protocol to handle failures and maintain
database durability.
3. Microsoft SQL Server Always On Availability Groups: SQL Server Always
On Availability Groups is a high-availability and disaster recovery solution in
Microsoft SQL Server. It leverages techniques similar to those used in ARIES
for crash recovery, including transaction log-based recovery and distributed
commit protocols.
4. PostgreSQL: PostgreSQL is an open-source object-relational database
management system. While it doesn't explicitly implement the ARIES protocol,
it provides crash recovery mechanisms based on write-ahead logging (WAL)
and transaction log replay, which share similarities with ARIES.

ANALYSIS, REDO AND UNDO


Analysis, redo, and undo are the three phases of ARIES-style crash recovery, and they
play a crucial role in ensuring data consistency and durability.
Analysis:
Analysis is the first phase of crash recovery. The DBMS scans the transaction log
(starting from the most recent checkpoint) to determine which transactions were
active at the time of the crash and which changes may not yet have reached disk.
This tells the system what must be redone and what must be undone.
Redo:
Redo is an operation performed during crash recovery to reapply or re-execute the
changes made by committed transactions from the transaction log. The purpose of
redo is to bring the database to a consistent state after a failure or crash.
Undo:
Undo is an operation performed during crash recovery to roll back or reverse the
changes made by transactions that were active but not yet committed at the time
of the failure. The purpose of undo is to maintain the consistency of the database
by discarding uncommitted changes.
Both redo and undo operations are essential for achieving ACID properties in
database systems. Redo ensures durability by reapplying committed changes,
while undo maintains consistency by rolling back uncommitted changes. By
performing these operations during crash recovery, the DBMS can recover the
database to a consistent and durable state, even after a failure or crash.
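The two operations can be seen with ordinary SQL transactions. A minimal sketch (the table and values are hypothetical):

BEGIN;
UPDATE Accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;  -- committed: after a crash, redo re-applies this change from the log

BEGIN;
UPDATE Accounts SET balance = balance + 100 WHERE id = 2;
-- if the system crashes here, before COMMIT, undo rolls this change back during recovery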
EVALUATION:
1. Define Crash Recovery
2. Explain ARIES
3. Explain analysis, redo and undo
4. List four examples of database systems that incorporate concepts from
the ARIES protocol
LESSON NOTE FOR WEEK 6 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC RECOVERY RELATED TO DATA STRUCTURE

OBJECTIVE: By the end of the lesson, students should be able to:


1. Explain recovery related to data structure
2. What is Checkpointing?
3. What is Media Recovery?
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.

INTRODUCTION: RECOVERY RELATED TO DATA STRUCTURE


Recovery related to data structure refers to the process of restoring or reconstructing data
structures to a consistent and usable state after a failure or error. Data structures are
fundamental components of computer systems and software applications that organize
and store data efficiently.
In the context of recovery, data structure recovery typically involves two main
aspects:
Consistency of Data Structures: For example, in a relational database system,
recovery processes may involve checking the consistency of indexes, primary
keys, foreign keys, and other constraints.
Reconstruction of Data Structures: Recovery processes may involve
reconstructing or rebuilding these data structures from available information or
backups. It may involve techniques such as data replication, data reconciliation,
data reorganization, or even data migration from backups or redundant copies.

CHECKPOINTING
Checkpointing is a technique used in computer systems to create consistent and
recoverable states of a system or application at specific points in time. It involves
capturing the state of the system or application and storing it as a checkpoint, which can
be used for recovery purposes in case of failures or errors.
The process of checkpointing typically involves the following steps:
1. State Capture: The system or application's state is captured by saving critical
data structures, variables, or other relevant information. This may include the
contents of memory, processor registers, file system metadata, database
buffers, and other important system components.
2. Checkpoint Creation: The captured state is then stored as a checkpoint in a
persistent storage medium such as disk or non-volatile memory. This ensures
that the checkpoint survives system failures or crashes.
3. Recovery Point: Checkpoints are typically created periodically or at specific
milestones in the system's execution. These points are chosen based on criteria
such as time intervals, completion of specific tasks, or reaching specific
consistency points in the application.
4. Recovery Process: This involves loading the checkpointed state and resuming
execution from that point, potentially applying additional recovery techniques
such as redo or undo operations to bring the system or application to a
consistent state.
Checkpointing provides several benefits:
1. Fault Tolerance
2. Recovery Efficiency
3. Consistency
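In PostgreSQL, for example, the database server creates checkpoints automatically at configured intervals, and a superuser can also force one; a hedged sketch:

CHECKPOINT;  -- flushes all dirty buffers to disk and records a checkpoint in the log
-- After this point, crash recovery only needs to replay log records written after the checkpoint.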

MEDIA RECOVERY
Media recovery is a process in database management systems (DBMS) that involves
restoring and recovering data from media failures.
The primary goal of media recovery is to bring the database back to a consistent and
usable state after a media failure.
This typically involves two main steps:
Restore: The restore phase of media recovery focuses on recovering the database
files or data from backup copies or other forms of data protection.
During the restore phase, the following steps are typically performed:
a. Identifying the affected files
b. Retrieving backups
c. Restoring the files
Recovery: The recovery phase of media recovery focuses on applying the
necessary changes or updates to the database to bring it up to date with the state it
had at the time of the media failure.
During the recovery phase, the following steps are typically performed:
a. Analyzing the transaction log
b. Redoing changes
c. Ensuring consistency
Media recovery is essential for maintaining data durability and availability in the
face of media failures.
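As a hedged sketch of the restore-then-recover sequence, the commands below use Microsoft SQL Server's T-SQL; the database and backup file names are hypothetical:

-- Restore phase: bring back the last full backup, leaving the database ready for log replay
RESTORE DATABASE Sales FROM DISK = 'D:\backups\sales_full.bak' WITH NORECOVERY;
-- Recovery phase: redo changes from the transaction log backup up to the failure point
RESTORE LOG Sales FROM DISK = 'D:\backups\sales_log.trn' WITH RECOVERY;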

EVALUATION:
1. Explain recovery related to data structure
2. What is Checkpointing?
3. What is Media Recovery?
4. State the process of checkpointing
LESSON NOTE FOR WEEK 7-9 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC PARALLEL AND DISTRIBUTED DATABASE

OBJECTIVE: By the end of the lesson, students should be able to:


1. Define parallel and distributed databases
2. Explain the goal of parallel processing
3. State the benefits of parallel processing
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.

INTRODUCTION: DEFINITION OF PARALLEL AND DISTRIBUTED DATABASE


Parallel and distributed databases are two types of database systems that leverage
multiple computing resources to process and store data efficiently. While they share
similarities in terms of utilizing parallelism and distributing data across multiple nodes,
they have distinct characteristics and deployment models.
PARALLEL DATABASES:
Parallel databases are designed to handle large-scale data processing by dividing the
workload across multiple processors or computing nodes. In a parallel database system,
data is partitioned and distributed among multiple processors, each equipped with its own
memory and processing capabilities. Each processor operates on its subset of data
simultaneously, allowing for parallel execution of queries and operations.
Key features of parallel databases include:
1. Data partitioning: The data is divided into smaller subsets and distributed
across multiple processors or nodes (see the sketch after this list).
2. Shared-nothing architecture: Each processor or node operates
independently, with its own memory and disk storage.
3. Parallel query execution: Queries are divided into subtasks that can be
executed concurrently on different processors, speeding up query processing.
4. Load balancing: Workload is balanced across processors to ensure efficient
utilization of resources.
5. Inter-node communication: Processors may communicate with each other
when required to exchange data or coordinate operations.
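Data partitioning (feature 1 above) can be sketched with PostgreSQL's declarative hash partitioning; the table and column names are hypothetical:

-- One logical table is split into four partitions that can be scanned in parallel
CREATE TABLE Orders (order_id INT, amount NUMERIC) PARTITION BY HASH (order_id);
CREATE TABLE orders_p0 PARTITION OF Orders FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p1 PARTITION OF Orders FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p2 PARTITION OF Orders FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p3 PARTITION OF Orders FOR VALUES WITH (MODULUS 4, REMAINDER 3);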
DISTRIBUTED DATABASES:
Distributed databases are designed to store and manage data across multiple
interconnected nodes or computing systems. In a distributed database system, data is
partitioned and replicated across different nodes, providing data distribution, fault
tolerance, and high availability.
Key features of distributed databases include:
1. Data partitioning and replication: Data is divided into partitions that are
distributed across multiple nodes. Replication may be used to store multiple
copies of data for improved fault tolerance and data availability (see the
sketch after this list).
2. Distributed query processing: Queries are processed by coordinating among
multiple nodes, which may involve data retrieval, aggregation, and
consolidation across distributed data partitions.
3. Distributed transaction management: Transactions can span multiple nodes,
requiring coordination and synchronization to ensure atomicity, consistency,
isolation, and durability (ACID properties) across the distributed system.
4. Data consistency mechanisms: Distributed databases employ various
techniques, such as distributed locking, two-phase commit protocols, or
conflict resolution mechanisms, to maintain data consistency across nodes.
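Data replication across nodes (feature 1 above) can be sketched with PostgreSQL's logical replication; the connection string and object names are hypothetical:

-- On the source node: publish changes made to the Orders table
CREATE PUBLICATION orders_pub FOR TABLE Orders;
-- On a second node: subscribe, so a replicated copy of the data is maintained there
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=node1 dbname=shop user=replicator'
    PUBLICATION orders_pub;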

PARALLEL PROCESSING
Parallel processing refers to the simultaneous execution of multiple tasks or processes to
achieve faster and more efficient computation. It involves dividing a task into smaller
subtasks that can be executed concurrently, either on multiple processors within a single
computing system or across multiple computing systems in a distributed environment.

PARALLEL PROCESSING OFFERS SEVERAL BENEFITS


1. Increased Performance: By executing tasks concurrently, parallel processing
can significantly reduce the overall execution time. Instead of waiting for one
task to complete before starting the next, multiple tasks are processed
simultaneously, leveraging the available computing resources effectively. This
leads to improved throughput and faster completion of computational tasks.
2. Scalability: Parallel processing enables the system to scale its computational
capacity by adding more processors or computing nodes. As the workload
increases, additional processors can be utilized, allowing for efficient
utilization of resources and accommodating larger workloads without
sacrificing performance.
3. Resource Utilization: Parallel processing allows for better utilization of
available computing resources. Instead of having idle processors or computing
nodes, parallel tasks can be allocated to make use of all available resources,
maximizing the overall system efficiency.
4. Handling Large Data Sets: Parallel processing is particularly beneficial for
processing large data sets. By dividing the data into smaller chunks and
assigning them to different processors or nodes, data-intensive tasks such as
data analysis, data mining, or simulations can be performed more efficiently.
Each processor or node operates on its portion of the data, enabling faster
processing and analysis of large datasets.
5. Fault Tolerance: In a parallel processing system, if one processor or node
fails, the remaining processors or nodes can continue the processing without
interrupting the overall computation. This fault tolerance capability enhances
system reliability and availability.
6. Real-time Processing: Parallel processing can be advantageous for real-time
applications that require immediate or near-instantaneous results. By
distributing the workload across multiple processors or nodes, parallel
processing can help meet the strict timing requirements of real-time systems.
7. Complex Problem Solving: Parallel processing enables the efficient
execution of complex computational tasks that can be divided into smaller,
independent subproblems. These subproblems can be solved concurrently,
allowing for faster and more effective solutions to complex problems.

THE GOAL OF PARALLEL PROCESSING


The goal of parallel processing is to improve the overall performance, efficiency, and
scalability of computing systems by executing tasks or processes simultaneously on
multiple processors or computing nodes.
However, for tasks that can be parallelized, parallel processing offers substantial
performance improvements and scalability, making it a valuable technique in various
domains, including scientific computing, data analysis, simulations, and high-
performance computing.

PROBLEMS OF PARALLEL PROCESSING


1. Data Dependencies: Some tasks or algorithms have dependencies among
their subtasks, requiring specific ordering or synchronization. Managing these
dependencies in a parallel processing environment can be complex and may
introduce overhead due to synchronization and communication between
processors or nodes.
2. Load Imbalance: In a parallel processing system, workload distribution
across processors or nodes may not be perfectly balanced. Some processors or
nodes may have more work than others, leading to underutilization of
resources or performance bottlenecks. Load balancing techniques need to be
employed to distribute the workload evenly and optimize resource utilization.
3. Communication Overhead: Parallel processing often involves
communication and coordination among processors or nodes. Data exchange,
synchronization, and sharing of intermediate results can introduce
communication overhead, negatively impacting performance. Efficient
communication protocols and algorithms must be implemented to minimize
this overhead.
4. Scalability Limits: While parallel processing allows for scalability, there can
be limits to the achievable scalability due to factors such as interconnect
bandwidth, memory capacity, or synchronization requirements. Scaling
beyond a certain point may become challenging, and the cost-effectiveness of
adding more processors or nodes may diminish.
5. Fault Tolerance: Parallel processing systems need to address fault tolerance
to ensure system reliability and data integrity. Handling failures of processors
or nodes, maintaining data consistency, and recovering from failures without
interrupting the overall computation can be complex and require sophisticated
fault tolerance mechanisms.
6. Programming Complexity: Developing and programming parallel
algorithms can be more challenging than sequential algorithms. Parallel
programming requires considerations of data partitioning, synchronization,
and communication, which can introduce additional complexity and increase
the chances of programming errors.
7. Debugging and Testing: Identifying and diagnosing errors or bugs in a
parallel processing system can be more difficult than in sequential systems.
Debugging and testing parallel algorithms often require specialized tools and
techniques to trace and analyze the execution of multiple concurrent processes
or threads.
The primary objectives of parallel processing can be summarized as follows:
1. Speedup
2. Increased Throughput
3. Scalability
4. Resource Utilization
5. Handling Large Data Sets
6. Concurrency and Real-time Processing
7. Problem Solving

EVALUATION:
1. Define parallel and distributed databases
2. Explain the goal of parallel processing
3. State the benefits of parallel processing
4. What are the primary objectives of parallel processing?
LESSON NOTE FOR WEEK 10 ENDING
SUBJECT INFORMATION AND COMMUNICATION TECHNOLOGY
CLASS SS 3
SEX MIXED
AGE 15 YEARS
PERIOD
DURATION
TIME
DATE
NO. OF STUDENTS
TEACHER MR. ISAIAH SAMSON
TOPIC NETWORKING

OBJECTIVE: By the end of the lesson, students should be able to:


1. What is networking?
2. Identify the various types of networking
REFERENCE: New Basic Data Processing For Senior Secondary By Onyeama Ugonma et al.

INTRODUCTION: DEFINITION OF NETWORKING


Networking refers to the practice of connecting computers and other devices together to
share resources, exchange information, and communicate with each other. It involves the
design, implementation, management, and maintenance of both hardware and software
components that enable devices to communicate and interact within a network.
There are various types of networking, each serving different purposes and catering to
different needs.
Here are some common types of networking:
1. Local Area Network (LAN): A LAN is a network that covers a small
geographical area, such as a home, office building, or school. It is typically
used to connect computers, printers, and other devices within a limited area.
2. Wide Area Network (WAN): A WAN is a network that spans a large
geographical area, such as multiple offices or cities. It connects LANs and
other networks over long distances, often utilizing public or private
telecommunication services.
3. Metropolitan Area Network (MAN): A MAN is a network that covers a
larger geographical area than a LAN but smaller than a WAN. It typically
connects multiple LANs within a city or metropolitan area.
4. Wireless Local Area Network (WLAN): A WLAN is a type of LAN that
uses wireless communication instead of wired connections. It enables devices
to connect to the network using Wi-Fi technology.
5. Virtual Private Network (VPN): A VPN is a secure network that allows
users to access a private network over a public network, such as the internet. It
provides encryption and authentication to ensure privacy and security.
6. Campus Area Network (CAN): A CAN is a network that connects multiple
LANs within a university campus, corporate campus, or similar large-scale
environment.
7. Storage Area Network (SAN): A SAN is a specialized network that provides
high-speed access to consolidated, block-level data storage. It is commonly
used in data centers or large-scale storage environments.
8. Peer-to-Peer Network (P2P): In a P2P network, devices are connected
directly to each other without the need for a central server. Each device can
act as a client and a server, allowing for decentralized sharing of resources and
information.
9. Client-Server Network: In a client-server network, devices are organized
into clients and servers. Clients request resources or services from servers,
which provide the requested resources or services.

EVALUATION:
1. What is networking?
2. List and explain five types of networking
