Serial and Parallel First 3 Lecture
Architecture
The main difference between serial and parallel processing in computer architecture is that serial
processing performs a single task at a time while parallel processing performs multiple tasks at a
time.
Computer architecture defines the functionality, organization, and implementation of a computer
system. It explains how the computer system is designed and the technologies it is compatible
with. The processor is one of the most essential components in the computer system. It executes
instructions and completes the tasks assigned to it. There are two main types of processing:
serial processing and parallel processing.
Key Terms
Computer Architecture, Parallel Processing, Serial processing
What is Serial Processing in Computer Architecture
In serial processing, the processor completes one task at a time, then moves on to the next
task in sequence. An operating system runs many programs, and each of them has multiple
tasks. The processor must complete all of these tasks, but it handles only one at a time;
the other tasks wait in a queue until the processor completes the current one. In other
words, all tasks are processed sequentially. Hence, this type of processing is called serial
processing or sequential processing. Single-core machines such as the Pentium 3 and
Pentium 4 perform serial processing.
We can understand serial processing using the following analogy. Assume a supermarket
with multiple queues but only one cashier. The cashier finishes billing the products of one
customer and then moves on to the next; he bills customers one after the other.
We can understand parallel processing using a similar analogy. In a supermarket with
multiple queues and a cashier for each queue, each cashier bills the products of the
customers in his own queue, so all queues are served at the same time.
Number of processors
A major difference between serial and parallel processing is that there is a single processor in
serial processing, but there are multiple processors in parallel processing.
Performance
Because multiple processors work on tasks simultaneously, the performance of parallel
processing is higher than that of serial processing.
Work Load
In serial processing, the workload of the processor is higher. However, in parallel processing, the
workload per processor is lower. Thus, this is an important difference between serial and parallel
processing.
Data Transferring
Moreover, in serial processing, data is transferred bit by bit. In parallel processing,
data is transferred a byte (8 bits) or more at a time.
Required time
Time taken is also a difference between serial and parallel processing. That is, serial
processing requires more time than parallel processing to complete the same task.
Cost
Furthermore, parallel processing is more costly than serial processing as it uses multiple
processors.
Conclusion
There are two types of processing in a computer system: serial and parallel. The main
difference between serial and parallel processing in computer architecture is that serial
processing performs a single task at a time while parallel processing performs multiple
tasks at a time. In brief, parallel processing delivers higher performance than serial
processing.
Course Introduction
With advances in computer architecture, high-performance
multiprocessor computers have become readily available & affordable.
As a result, supercomputing is accessible to a large segment of
industry that was once restricted to military research & large
corporations.
The course comprises the architecture & programming of parallel
computing systems: design concepts, principles, paradigms, models,
performance evaluation & applications.
Introduction
A parallel computing system is the simultaneous execution of a single task (split
up and adapted) on multiple processors in order to obtain results faster.
The idea is based on the fact that the process of solving a problem can usually be
divided into smaller tasks (divide and conquer), which may be carried out
simultaneously with some coordination.
The term parallel computing architecture is sometimes used for a computer with
more than one processor (from a few to thousands) available for processing. Recent
multicore processors (chips with more than one processor core) are commercial
examples that bring parallel computing to the desktop.
Parallel Computing
Multi-Core & Multi-Processor
Distributed Systems
We define a distributed system as one in which hardware or
software located at networked computers communicate and
coordinate their actions only by passing messages.
This simple definition covers the entire range of systems in which
networked computers can usefully be deployed.
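A minimal sketch of "coordinating only by passing messages", written in Python (an assumption, since the notes use no particular language). Here threads and queues stand in for networked computers and their communication links; a real distributed system would exchange messages over a network instead.

```python
import threading
import queue

# Two "nodes" that coordinate only by passing messages through queues.
# (Threads + queues are stand-ins for networked computers; this is an
# illustration of the message-passing idea, not a real deployment.)
inbox = queue.Queue()   # messages sent to the worker node
outbox = queue.Queue()  # replies sent back from the worker node

def worker():
    task = inbox.get()       # receive a message: a number to square
    outbox.put(task * task)  # reply with the result

t = threading.Thread(target=worker)
t.start()
inbox.put(6)                 # send a request message
result = outbox.get()        # wait for the reply message
t.join()
print(result)  # 36
```

The two sides share no variables directly; all coordination happens through the two queues, which is the essence of the message-passing definition above.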
Parallel & Distributed Computing: Fall-2022
lec#2
Why Use Parallel Computing?
In weather forecasting, a mathematical model of the behavior of the earth's
atmosphere is developed in which the most important variables are
wind speed, air temperature, humidity, and atmospheric pressure.
The objective of numerical weather modeling is to predict the status of the atmosphere
at a particular region at a specified future time based on the current and past
observations of the values of atmospheric variables.
The Future:
During the past 20 years, the trends indicated by ever faster networks,
distributed systems, & multi-processor computer architectures
(even at the desktop level) clearly show that
parallelism is the future of computing
Parallel Computing
In simple terms, parallel computing is breaking up a task into smaller pieces and
executing those pieces at the same time, each on its own processor or on a set
of computers that have been networked together. Let's look at a simple example.
Say we have this equation:
Y = (4 x 5) + (1 x 6) + (5 x 3)
On a single processor, the steps needed to calculate a value for Y might look like:
Step 1: Y = 20 + (1 x 6) + (5 x 3)
Step 2: Y = 20 + 6 + (5 x 3)
Step 3: Y = 20 + 6 + 15
Step 4: Y = 41
But in a parallel computing scenario, with three processors or computers, the
steps look something like:
Step 1: Y = 20 + 6 + 15
Step 2: Y = 41
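As a sketch in Python (an assumption; the notes use no particular language), the three multiplications can be handed to separate workers and the partial results then summed, mirroring the two-step parallel scenario above:

```python
from concurrent.futures import ThreadPoolExecutor

def multiply(pair):
    a, b = pair
    return a * b

pairs = [(4, 5), (1, 6), (5, 3)]

# Step 1: compute the three products at the same time, one per worker.
with ThreadPoolExecutor(max_workers=3) as pool:
    products = list(pool.map(multiply, pairs))

# Step 2: sum the partial results.
y = sum(products)
print(y)  # 41
```

`pool.map` preserves input order, so `products` comes back as [20, 6, 15] regardless of which multiplication finishes first.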
Sequential computing
Consider a program that processes images and reports how many of them contain cats:
images ← [ "pet1.jpg", "pet2.jpg", "pet3.jpg", "pet4.jpg"]
numCats ← 0
FOR EACH image IN images
{
foundCat ← detectCat(image)
IF foundCat
{
numCats ← numCats + 1
}
}
The program begins with a list of filenames and a variable to store the number of
images of cats, initialized to 0. A loop iterates through the list, repeatedly calling a
function that detects whether the image has a cat and updating the variable.
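Assuming Python as the working language, the pseudocode above can be made runnable. `detect_cat` here is a hypothetical stub standing in for a real image classifier:

```python
def detect_cat(image):
    # Hypothetical stand-in for a real cat detector: for illustration,
    # pretend the files named "pet2.jpg" and "pet4.jpg" contain cats.
    return image in ("pet2.jpg", "pet4.jpg")

images = ["pet1.jpg", "pet2.jpg", "pet3.jpg", "pet4.jpg"]
num_cats = 0
for image in images:           # iterate over the list sequentially
    if detect_cat(image):      # one detectCat call at a time
        num_cats += 1
print(num_cats)  # 2
```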
Now let's analyze the program further, by annotating each operation with how
many seconds it takes and how many times it's called:
Seconds  Calls  Operation
   3       1    images ← ["pet1.jpg", "pet2.jpg", "pet3.jpg", "pet4.jpg"]
   1       1    numCats ← 0
   1       4    FOR EACH image IN images
  10       4    foundCat ← detectCat(image)
   1       4    IF foundCat
   2       4    numCats ← numCats + 1
Note: the values in the time column are made up, the actual time would vary
based on the implementation of this pseudocode and the hardware running it.
We can calculate the total time taken by multiplying the number of seconds for
each operation by the number of calls and summing the results. For this program,
that'd be:
(3×1) + (1×1) + (1×4) + (10×4) + (1×4) + (2×4) = 60 seconds
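The tally can be reproduced with a few lines of Python (using the made-up per-operation costs from the annotation above):

```python
# (seconds per call, number of calls) for each annotated operation:
# list setup, numCats ← 0, loop header, detectCat, IF test, increment
costs = [(3, 1), (1, 1), (1, 4), (10, 4), (1, 4), (2, 4)]

# Total time = sum over operations of (cost per call × number of calls).
total = sum(seconds * calls for seconds, calls in costs)
print(total)  # 60
```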
(A timeline figure here visualizes the time taken by the computer.)
Parallel computing
The exact mechanisms for parallelizing a program depend on the particular programming
language and computing environment, which we won't dive into here.
Seconds  Calls  Operation
   1       1    numCats ← 0
   1       4    FOR EACH image IN images
  10       4    foundCat ← detectCat(image)
   1       4    IF foundCat
   2       4    numCats ← numCats + 1
The time taken by the parallelized operations depends on the number of parallel
processors executing operations at once. If I run this program on a computer with
4 cores, then each core can process one of the 4 images, so the parallel portion
will only take as long as a single image takes:
(A timeline figure here visualizes the time taken by a computer that splits the
program across 4 processes.)
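As a sketch of how the loop might be parallelized in Python (assuming a pool of 4 workers as the mechanism, and the same hypothetical `detect_cat` stub as before):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_cat(image):
    # Hypothetical stub for a real cat detector (same assumption as the
    # sequential version: "pet2.jpg" and "pet4.jpg" contain cats).
    return image in ("pet2.jpg", "pet4.jpg")

images = ["pet1.jpg", "pet2.jpg", "pet3.jpg", "pet4.jpg"]

# With 4 workers, each image is handed to its own worker, so the
# detectCat calls run concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(detect_cat, images))

num_cats = sum(results)  # True counts as 1, False as 0
print(num_cats)  # 2
```

The answer is the same as the sequential program; only the wall-clock time for the detection calls changes, because they overlap instead of queuing.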