CH17 COA10e
William Stallings
Computer Organization
and Architecture
10th Edition
Simplicity
Simplest approach to multiprocessor organization
Flexibility
Generally easy to expand the system by attaching more
processors to the bus
Reliability
The bus is essentially a passive medium and the failure of
any attached device should not cause failure of the whole
system
Scheduling
Any processor may perform scheduling, so conflicts must be avoided
Scheduler must assign ready processes to available processors
Synchronization
With multiple active processes having potential access to shared address spaces or I/O resources, care
must be taken to provide effective synchronization
Synchronization is a facility that enforces mutual exclusion and event ordering (see the sketch after this list)
Memory management
In addition to dealing with all of the issues found on uniprocessor machines, the OS needs to exploit the
available hardware parallelism to achieve the best performance
Paging mechanisms on different processors must be coordinated to enforce consistency when several
processors share a page or segment and to decide on page replacement
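As a concrete illustration of the synchronization facility mentioned above, here is a minimal sketch of mutual exclusion over a shared variable using POSIX threads; the counter, thread count, and iteration count are illustrative choices, not taken from the text.

```c
/* Minimal sketch: mutual exclusion on a shared counter with POSIX threads. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static long shared_counter = 0;                        /* shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enforce mutual exclusion */
        shared_counter++;              /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);      /* event ordering: wait for every worker */

    printf("counter = %ld\n", shared_counter);         /* always 400000 */
    return 0;
}
```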
Modified: The line in the cache has been modified and is available only in this cache
Exclusive: The line in the cache is the same as that in main memory and is not present in any other cache
Shared: The line in the cache is the same as that in main memory and may be present in another cache
Invalid: The line in the cache does not contain valid data
Table 17.1  MESI Cache Line States
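The state names in Table 17.1 can be expressed as a small transition function. The following is a simplified sketch of how one snooping cache might move a line between the four MESI states; the event names and the other_copies flag are assumptions for illustration, and bus details such as write-back timing are omitted.

```c
/* Simplified MESI transitions as seen by one snooping cache. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, SNOOP_READ, SNOOP_WRITE } event_t;

static mesi_t mesi_next(mesi_t s, event_t e, bool other_copies)
{
    switch (e) {
    case LOCAL_READ:   /* a read miss fetches the line; hits keep the state */
        return (s == INVALID) ? (other_copies ? SHARED : EXCLUSIVE) : s;
    case LOCAL_WRITE:  /* any local write leaves the line Modified here,
                          after copies in other caches are invalidated */
        return MODIFIED;
    case SNOOP_READ:   /* another cache reads: M/E/S drop to Shared */
        return (s == INVALID) ? INVALID : SHARED;
    case SNOOP_WRITE:  /* another cache writes: this copy becomes stale */
        return INVALID;
    }
    return s;
}

int main(void)
{
    const char *name[] = { "Modified", "Exclusive", "Shared", "Invalid" };
    mesi_t s = INVALID;

    s = mesi_next(s, LOCAL_READ, false);   /* no other copies -> Exclusive */
    printf("%s\n", name[s]);
    s = mesi_next(s, LOCAL_WRITE, false);  /* Exclusive -> Modified */
    printf("%s\n", name[s]);
    s = mesi_next(s, SNOOP_READ, true);    /* another cache reads -> Shared */
    printf("%s\n", name[s]);
    s = mesi_next(s, SNOOP_WRITE, true);   /* another cache writes -> Invalid */
    printf("%s\n", name[s]);
    return 0;
}
```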
Thread:
• Dispatchable unit of work within a process
• Includes processor context (which includes the program counter and stack pointer) and data area for stack
• Executes sequentially and is interruptible so that the processor can turn to another thread

Process:
• An instance of a program running on a computer
• Two key characteristics:
• Resource ownership
• Scheduling/execution
Process switch
• Operation that switches the
processor from one process to
another by saving all the process
control data, registers, and other
information for the first and replacing
them with the process information for
the second
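To make the thread/process distinction concrete, here is a minimal POSIX sketch: a thread updates a variable in the shared address space of its process, while a child process created with fork gets its own copy of that variable. The variable and values are illustrative, not from the text.

```c
/* Threads share their process's address space; a forked process does not. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static int value = 0;          /* data owned by the process */

static void *thread_body(void *arg)
{
    (void)arg;
    value = 1;                 /* visible to the whole process */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread:  value = %d\n", value);   /* 1: same address space */

    pid_t pid = fork();        /* new process: separate resource ownership */
    if (pid == 0) {
        value = 2;             /* changes only the child's copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after process: value = %d\n", value);   /* still 1 in the parent */
    return 0;
}
```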
Implicit and Explicit
Multithreading
All commercial processors and most
experimental ones use explicit
multithreading
Concurrently execute instructions from
different explicit threads
Interleave instructions from different threads on shared pipelines or execute them in parallel on parallel pipelines
Cluster benefits:
Absolute scalability
Incremental scalability
High availability
Superior price/performance
Table 17.2  Clustering Methods: Benefits and Limitations
Failover
The function of switching applications and data resources over from a failed
system to an alternative system in the cluster
Failback
Restoration of applications and data resources to the original system once
it has been fixed
Load balancing
Incremental scalability
Automatically include new computers in scheduling
Middleware needs to recognize that processes may switch between
machines
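A toy sketch of how cluster middleware might combine these ideas: a round-robin dispatcher balances requests across nodes, skips a node marked down (failover), resumes using it once repaired (failback), and picks up a newly added node (incremental scalability). The node names and array-based bookkeeping are hypothetical.

```c
/* Toy round-robin load balancer with failover, failback, and node addition. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_NODES 8

struct node { const char *name; bool up; };

static struct node nodes[MAX_NODES] = {
    { "node-a", true }, { "node-b", true }, { "node-c", true },
};
static int node_count = 3;
static int next_idx = 0;

/* Round-robin over the nodes that are currently up (load balancing).
 * Returns -1 if every node has failed. */
static int dispatch(void)
{
    for (int tried = 0; tried < node_count; tried++) {
        int i = next_idx;
        next_idx = (next_idx + 1) % node_count;
        if (nodes[i].up)
            return i;
    }
    return -1;
}

int main(void)
{
    for (int req = 0; req < 5; req++) {
        if (req == 3)
            nodes[1].up = false;                 /* node-b fails: failover */
        if (req == 4) {
            nodes[1].up = true;                  /* node-b repaired: failback */
            nodes[node_count++] =                /* incremental scalability */
                (struct node){ "node-d", true };
        }
        int i = dispatch();
        if (i < 0) { puts("no node available"); continue; }
        printf("request %d -> %s\n", req, nodes[i].name);
    }
    return 0;
}
```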
Parallelizing Computation
SMP:
Easier to manage and configure
Much closer to the original single processor model for which nearly all applications are written
Less physical space and lower power consumption
Well established and stable

Clustering:
Far superior in terms of incremental and absolute scalability
Superior in terms of availability
All components of the system can readily be made highly redundant