Chapter 6
Advanced Topics
6.1 Multiprocessing Systems
Traditionally, the computer has been viewed as a sequential machine. Most computer
programming languages require the programmer to specify algorithms as a sequence of
instructions, and the processor executes a program by executing its machine instructions in
sequence, one at a time. This view of the computer has never been entirely true. At the
micro-operation level, multiple control signals are generated at the same time, and instruction
pipelining overlaps the fetch and execute phases of instructions from the same program.
As computer technology has evolved and the cost of computer hardware has dropped,
computer designers have sought more and more opportunities for parallelism, usually to enhance
performance and availability. Multiprocessing is an example of parallelism in which multiple
CPUs share common resources such as memory and storage devices.
The processors can communicate with each other through memory, and the CPUs can directly
exchange signals, as indicated by the dotted line in the figure. The organization of a
multiprocessor system can be divided into three types.
Compiled by: Er. Hari Aryal Email: [email protected] Reference: W. Stallings & A. S. Tanenbaum | 1
1. Time-shared bus:
Simplicity:
The physical interface and the addressing and time-sharing logic of each processor remain the
same as in a single-processor system, so this is the simplest approach.
Flexibility:
It is easy to expand the system by attaching more CPUs to the bus.
Reliability:
The failure of any attached device should not cause the failure of the whole system.
Drawback:
The speed of the system is limited by the cycle time because all memory references must pass
through the common bus.
2. Multiport memory:
Each processor and I/O module has a dedicated path to each memory module. This system
has higher performance, and higher complexity, than the time-shared bus. With this system it is
possible to configure portions of memory as private to one or more CPUs and/or I/O modules.
This feature increases security against unauthorized access and allows the storage of recovery
routines in areas of memory not susceptible to modification by other processors.
Flynn's taxonomy classifies computer architectures along two independent dimensions:
Instruction and Data. Each of these dimensions can have only one of two possible states: Single
or Multiple.
The matrix below defines the four possible classifications according to Flynn:
SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data
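The difference between the SISD and SIMD classes can be illustrated with a small sketch (the function names here are purely illustrative, not part of any standard API): one "instruction" is applied either to a single datum or, in lockstep, to every element of a data vector.

```python
def sisd(instruction, datum):
    # Single Instruction, Single Data: one operation on one operand.
    return instruction(datum)

def simd(instruction, data):
    # Single Instruction, Multiple Data: the same operation applied
    # element-wise across a whole data vector.
    return [instruction(d) for d in data]

def increment(x):
    return x + 1

print(sisd(increment, 41))         # -> 42
print(simd(increment, [1, 2, 3]))  # -> [2, 3, 4]
```

A real SIMD machine performs the element-wise operations in parallel hardware lanes rather than in a loop; the sketch only shows the shape of the classification.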
Data parallelism
Data parallelism is parallelism inherent in program loops, which focuses on distributing the data
across different computing nodes to be processed in parallel. "Parallelizing loops often leads to
similar (not necessarily identical) operation sequences or functions being performed on elements
of a large data structure." Many scientific and engineering applications exhibit data parallelism.
A loop-carried dependency is the dependence of a loop iteration on the output of one or more
previous iterations. Loop-carried dependencies prevent the parallelization of loops. For example,
consider the following pseudocode that computes the first few Fibonacci numbers:
PREV1 := 0
PREV2 := 1
do:
CUR := PREV1 + PREV2
PREV1 := PREV2
PREV2 := CUR
while (CUR < 10)
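The pseudocode above runs essentially as-is in Python; a direct rendering (the do-while is emulated with a bottom-tested loop, and the computed values are collected into a list only so the result is visible):

```python
prev1, prev2 = 0, 1
sequence = []
while True:
    cur = prev1 + prev2   # reads values written by the previous iteration
    sequence.append(cur)
    prev1 = prev2
    prev2 = cur
    if not cur < 10:      # bottom-tested condition, as in do ... while
        break
print(sequence)  # -> [1, 2, 3, 5, 8, 13]
```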
This loop cannot be parallelized because each iteration's CUR depends on PREV1 and PREV2,
which are updated by the previous iteration. Since each iteration depends on the result of the one
before it, the iterations cannot be performed in parallel. As the size of a problem grows, the
amount of data parallelism available usually grows as well.
What counts as a CPU varies, depending upon whom you talk to. In the past, a CPU (Central
Processing Unit) was a singular execution component of a computer. Then multiple CPUs were
incorporated into a node. Then individual CPUs were subdivided into multiple "cores", each
being a unique execution unit. CPUs with multiple cores are sometimes called "sockets". The
result is a node with multiple CPUs, each containing multiple cores.
During the past 20+ years, the trends indicated by ever faster networks, distributed
systems, and multi-processor computer architectures (even at the desktop level) clearly
show that parallelism is the future of computing.
In this same time period, there has been a greater than 1000x increase in supercomputer
performance, with no end currently in sight.
Inter-process communication
In computing, inter-process communication (IPC) is a set of methods for the exchange of data
among multiple threads in one or more processes. Processes may be running on one or more
computers connected by a network. IPC methods are divided into methods for message passing,
synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used
may vary based on the bandwidth and latency of communication between the threads, and the
type of data being communicated.
There are several reasons for providing an environment that allows process cooperation:
Information sharing
Speedup
Modularity
Convenience
Privilege separation
IPC may also be referred to as inter-thread communication and inter-application communication.
The combination of IPC with the address space concept is the foundation for address space
independence/isolation.
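A minimal message-passing sketch: two threads in one process exchange data through a shared queue (the same producer/consumer pattern extends to separate processes using pipes or message queues; the names below are illustrative):

```python
import threading
import queue

channel = queue.Queue()   # the shared communication channel
received = []

def producer():
    for msg in ["hello", "world", None]:  # None signals end of stream
        channel.put(msg)

def consumer():
    while True:
        msg = channel.get()               # blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # -> ['hello', 'world']
```

The queue serves both purposes listed above: it carries the data (message passing) and synchronizes the two threads, since the consumer blocks until a message is available.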
A single operating system controls the use of system resources in a multiprocessing
environment. In such a system, multiple jobs or processes may be active at one time, and the
operating system (or system software) is responsible for scheduling their execution and for
allocating resources. The functions of a multiprocessor operating system are:
– An interface between users and machine
– Resource management
– Memory management
– Prevent deadlocks
– Abnormal program termination
– Process scheduling
– Manages security
Resource Allocation
In computing, resource allocation is necessary for any application to be run on the system.
When the user opens any program this will be counted as a process, and therefore requires the
computer to allocate certain resources for it to be able to run. Such resources could be access to a
section of the computer's memory, data in a device interface buffer, one or more files, or the
required amount of processing power.
A computer with a single processor can only perform one process at a time, regardless of the
amount of programs loaded by the user (or initiated on start-up). Computers using single
processors appear to be running multiple programs at once because the processor quickly
alternates between programs, processing what is needed in very small amounts of time. This
process is known as multitasking or time slicing. The time allocation is automatic; however,
higher or lower priority may be given to certain processes, essentially giving high-priority
programs more or bigger slices of the processor's time.
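The time-slicing scheme just described can be sketched as a toy round-robin simulation (the process names and burst times below are made up): each process receives a fixed quantum in turn until its remaining work is exhausted.

```python
from collections import deque

def round_robin(bursts, quantum):
    # bursts: {process_name: remaining_time}; returns the order in which
    # processes receive the CPU, one entry per time slice.
    ready = deque(bursts.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)                   # this process runs for one slice
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back of the queue
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
# -> ['A', 'B', 'C', 'A', 'C', 'A']
```

Priority scheduling would modify this sketch by giving some processes a larger quantum or placing them ahead of others in the ready queue.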
Deadlock
A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
Processes need access to resources in a reasonable order. Suppose one process holds resource A
and requests resource B while, at the same time, another process holds B and requests A; both
are blocked and remain deadlocked.
A set of processes is deadlocked if each process in the set is waiting for an event that only
another process in the set can cause. Usually the event is release of a currently held resource.
None of the processes can run, release resources and then be awakened.
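The circular-wait condition above can be detected with a wait-for graph: an edge P -> Q means process P is waiting for a resource held by process Q, and a set of processes is deadlocked exactly when the graph contains a cycle. A minimal detection sketch (process names are illustrative):

```python
def has_deadlock(wait_for):
    # wait_for: {process: [processes it is waiting on]}.
    # Depth-first search; a back edge to a node on the current path
    # means a cycle, i.e. a deadlock.
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True                      # cycle found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2 and P2 waits on P1: the circular wait described above.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # -> True
print(has_deadlock({"P1": ["P2"], "P2": []}))      # -> False
```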
OS Features
1. Process Management
The operating system manages processes at both the hardware level and the user level: creating,
blocking, and terminating processes, handling requests for memory, forking, and releasing
memory. In a multitasking operating system, multiple processes can exist at any time, but only
one process can execute on the CPU at a time; the other processes may be performing I/O or
waiting. The process manager implements the process abstraction and creates the model for
using the CPU. It is a major part of the operating system, and its main concerns are scheduling,
process synchronization mechanisms, and deadlock strategy.
2. File Management
The file manager is the component that provides the interface to the file system. It manages
files: creation, deletion, copying, renaming, and so on. Files are typically displayed
hierarchically, and some file managers provide network connectivity via protocols such as FTP,
NFS, SMB, or WebDAV.
3. Memory Management
Memory management controls the computer's memory using appropriate data structures. It
provides the way for programs to be allocated space in main memory on request. The main
purposes of this manager are to allocate processes to main memory, to minimize access time,
and to map process addresses to locations in primary memory. The features of the memory
manager in a multitasking system are the following:
Relocation
Protection
Sharing
Logical organization
Physical organization
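The "allocate processes to main memory" role above can be sketched with a toy first-fit placement policy (one of several classic policies; the numbers are made up): free memory is kept as a list of (start, size) holes, and a request is placed in the first hole large enough to contain it.

```python
def first_fit(holes, request):
    # holes: list of (start_address, size) free regions, in address order.
    # Returns (start_address, updated_holes), or (None, holes) on failure.
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = list(holes)
            if size == request:
                del new_holes[i]                          # hole fully consumed
            else:
                new_holes[i] = (start + request, size - request)
            return start, new_holes
    return None, holes

holes = [(0, 100), (300, 50), (500, 200)]
addr, holes = first_fit(holes, 120)   # skips the two holes that are too small
print(addr)   # -> 500
print(holes)  # -> [(0, 100), (300, 50), (620, 80)]
```

First fit is fast but can fragment memory; best-fit and worst-fit policies differ only in which hole they select.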
4. Device Management
Device management allows the user to view device capabilities and control devices through the
operating system: a device may be enabled, disabled, or installed, or its functionality ignored. In
the Microsoft Windows operating system, the Device Manager control-panel applet plays this
role; it is also built on a web-application-server model and provides three graphical user
interfaces (GUIs). The device manager manages the following:
Device configuration
Inventory collection
S/W distribution
Initial provisioning
5. Resource management
Resource management is the creation, management, and allocation of resources. The operating
system is responsible for all activities carried out in the computer, and the resource manager is a
major part of it; a central concern of an operating system is managing and allocating resources.
The computer's resources include storage devices, communication devices, and I/O devices, and
all of these are allocated and deallocated by the resource manager.
Program execution: The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either normally or abnormally
(indicating error).
I/O operations: A running program may require I/O. This I/O may involve a file or an I/O
device. For specific devices, special functions may be desired (such as rewinding a tape drive or
blanking a CRT screen). For efficiency and protection, users usually cannot control I/O devices
directly. Therefore, the operating system must provide a means to do I/O.
File-system manipulation: The file system is of particular interest. Obviously, programs need to
read and write files. Programs also need to create and delete files by name.
Error detection: The operating system constantly needs to be aware of possible errors. Errors
may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O
devices (such as a parity error on tape, a connection failure on a network, or lack of paper in the
printer),
and in the user program (such as an arithmetic overflow, an attempt to access an illegal memory
location, or a too-great use of CPU time). For each type of error, the operating system should
take the appropriate action to ensure correct and consistent computing. In addition, another set of
operating-system functions exists not for helping the user, but for ensuring the efficient operation
of the system itself. Systems with multiple users can gain efficiency by sharing the computer
resources among the users.
Resource allocation: When multiple users are logged on the system or multiple jobs are running
at the same time, resources must be allocated to each of them. Many different types of resources
are managed by the operating system. Some (such as CPU cycles, main memory, and file
storage) may have special allocation code, whereas others (such as I/O devices) may have much
more general request and release code. For instance, in determining how best to use the CPU,
operating systems have CPU-scheduling routines that take into account the speed of the CPU, the
jobs that must be executed, the number of registers available, and other factors. There might also
be routines to allocate a tape drive for use by a job. One such routine locates an unused tape
drive and marks an internal table
to record the drive's new user. Another routine is used to clear that table. These routines may also
allocate plotters, modems, and other peripheral devices.
Accounting: We want to keep track of which users use how many and which kinds of computer
resources. This record keeping may be used for accounting (so that users can be billed) or simply
for accumulating usage statistics. Usage statistics may be a valuable tool for researchers who
wish to reconfigure the system to improve computing services.
Protection: The owners of information stored in a multiuser computer system may want to
control use of that information. When several disjointed processes execute concurrently, it
should not be possible for one process to interfere with the others, or with the operating system
itself. Protection involves ensuring that all access to system resources is controlled. Security of
the system from outsiders is also important. Such security starts with each user having to
authenticate himself to the system, usually by means of a password, to be allowed access to the
resources. It extends to defending external I/O devices, including modems and network adapters,
from invalid access attempts, and to recording all such connections for detection of break-ins. If
a system is to be protected and secure, precautions must be instituted throughout it. A chain is
only as strong as its weakest link.
Pipelining is the process of fetching one instruction while another instruction is executing, in
parallel. Because of the complexity of the instructions, this feature cannot be heavily used in
CISC machines.
Micro-operations form the instruction, and instructions form the micro-program, which is
written in control memory to perform the timing and sequencing of the micro-operations
implemented in CISC.
CISC machines have a large number of complex instructions based on multiple addressing
modes.
CISC processors do not contain a large number of registers, because of the cost, so these
machines have to perform many memory read and write operations.
CISC machines are preferable where the speed of the processor is not the prime issue and
where general applications are to be handled. Processors like the 8085 and 8086 are based on
CISC design, and so are the processors in today's PCs.
performing these operations, the DSP can transfer the discrete data to a D/A converter, which
feeds a speaker to convert the electrical signal back into sound.
The whole function is carried out by DSP processors using hardware such as a microphone,
transducer, A/D converter, D/A converter, and speaker, and software such as C or MATLAB,
which carries out the FFT (fast Fourier transform). High processing speed is needed because
very time-critical signals must be operated on, yet the microprocessors and computers we use
today are based on the von Neumann architecture, in which instructions and data share the same
memory and buses.
So DSP processors need fast processing, and for that the best choice is the Harvard
architecture, in which separate buses are used for instructions and data. DSP chips are specially
designed for particular applications; they are not used for general-purpose processing as
microprocessors are. There are very few manufacturers of DSP chips; one of them is Texas
Instruments, USA, whose TMS320C series is popular worldwide and can be used to implement
various types of signal-processing applications.
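The FFT mentioned above is a fast algorithm for the discrete Fourier transform (DFT). A direct O(n^2) DFT sketch shows what is being computed; real DSP chips and libraries use the O(n log n) FFT instead:

```python
import cmath

def dft(signal):
    # X[j] = sum over k of x[k] * e^(-2*pi*i*j*k/n): each output bin
    # measures how much of one frequency is present in the signal.
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

# A unit impulse has a flat spectrum: every frequency bin has magnitude 1.
spectrum = dft([1, 0, 0, 0])
print([round(abs(c), 6) for c in spectrum])  # -> [1.0, 1.0, 1.0, 1.0]
```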