
Chapter 1

Introduction to Computer Organization and Architecture

A computer system, like any system, consists of an interrelated set of components. The system is
best characterized in terms of structure and function.
 Structure refers to the way in which the components are interconnected.
 Function refers to the operation of each individual component as part of the structure.
There are four main structural components:
 Central processing unit (CPU): Controls the operation of the computer and performs its
data processing functions; often simply referred to as processor.
 Main memory: Stores data.
 I/O: Moves data between the computer and its external environment.
 System interconnection: Some mechanism that provides for communication among
CPU, main memory, and I/O. A common example of system interconnection is by means
of a system bus, consisting of a number of conducting wires to which all the other
components attach.
Furthermore, a computer’s organization is hierarchical. Each major component can be further
described by decomposing it into its major subcomponents and describing their structure and
function. This hierarchical organization is described from the top down as follows:
 Computer system: Major components are processor, memory, I/O.
 Processor: Major components are control unit, registers, ALU, and instruction execution unit.
 Control unit: Provides control signals for the operation and coordination of all processor
components.
Computer Organization and Architecture
In describing computers, a distinction is often made between computer architecture and
computer organization.
Computer architecture refers to those attributes of a system visible to a programmer. Those
attributes have a direct impact on the logical execution of a program. Examples of architectural
attributes include the instruction set, the number of bits used to represent various data types (e.g.,
numbers, characters), I/O mechanisms, and techniques for addressing memory.
Computer organization refers to the operational units and their interconnections that realize the
architectural specifications.

Organizational attributes include those hardware details transparent to the programmer, such as
control signals; interfaces between the computer and peripherals; and the memory technology
used. For example, it is an architectural design issue whether a computer will have a multiply
instruction. It is an organizational issue whether that instruction will be implemented by a special
multiply unit or by a mechanism that makes repeated use of the add unit of the system. The
organizational decision may be based on the anticipated frequency of use of the multiply
instruction, the relative speed of the two approaches, and the cost and physical size of a special
multiply unit.
1.1. Logic gates and Boolean algebra
The fundamental building block of all digital logic circuits is the gate. Logical functions are
implemented by the interconnection of gates.
A gate is an electronic circuit that produces an output signal that is a simple Boolean operation
on its input signals. The basic gates used in digital logic are AND, OR, NOT, NAND, NOR,
and XOR. Figure 1.1 depicts these six gates. Each gate is defined in three ways: graphic symbol,
algebraic notation, and truth table.
All of the gates except NOT can have more than two inputs. Thus, (X + Y + Z) can be
implemented with a single OR gate with three inputs. When one or more of the values at the
input are changed, the correct output signal appears almost instantaneously, delayed only by the
propagation time of signals through the gate (known as the gate delay).
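The Boolean operations these six gates compute can be sketched directly as functions on 0/1 values. The following Python sketch is illustrative only; the function names and the truth-table printout are not from the chapter:

```python
# Each basic gate as a Boolean function on 0/1 inputs.
def AND(x, y):  return x & y
def OR(x, y):   return x | y
def NOT(x):     return 1 - x
def NAND(x, y): return NOT(AND(x, y))
def NOR(x, y):  return NOT(OR(x, y))
def XOR(x, y):  return x ^ y

# Print the truth table for each two-input gate.
for gate in (AND, OR, NAND, NOR, XOR):
    rows = [(x, y, gate(x, y)) for x in (0, 1) for y in (0, 1)]
    print(gate.__name__, rows)
```

Enumerating all input combinations in this way is exactly the truth-table definition of a gate.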

Figure 1.1 Basic Logic Gates


To assert a signal is to cause a signal line to make a transition from its logically false (0) state to its
logically true (1) state. The true (1) state is either a high or low voltage state, depending on the
type of electronic circuitry. Typically, not all gate types are used in implementation. Design and
fabrication are simpler if only one or two types of gates are used. Thus, it is important to identify
functionally complete sets of gates. This means that any Boolean function can be implemented
using only the gates in the set.
The following are functionally complete sets:
 AND, OR, NOT
 AND, NOT
 OR, NOT
 NAND
 NOR
It should be clear that AND, OR, and NOT gates constitute a functionally complete set, because
they represent the three operations of Boolean algebra. For the AND and NOT gates to form a
functionally complete set, there must be a way to synthesize the OR operation from the AND and
NOT operations. This can be done by applying DeMorgan’s theorem:
X + Y = (X' Y')'
Similarly, the OR and NOT operations are functionally complete because they can be used to
synthesize the AND operation. Figure 1.2 shows how the AND, OR, and NOT functions can be
implemented solely with NAND gates, and Figure 1.3 shows the same thing for NOR gates. For
this reason, digital circuits can be, and frequently are, implemented solely with NAND gates or
solely with NOR gates.
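The NAND-only constructions of Figure 1.2 can be sketched in Python as follows (the helper names are illustrative, not from the chapter):

```python
# NAND as the single primitive gate.
def NAND(x, y):
    return 1 - (x & y)

# NOT, AND, and OR built solely from NAND, mirroring Figure 1.2.
def NOT_n(x):    return NAND(x, x)                   # NOT: tie both inputs together
def AND_n(x, y): return NOT_n(NAND(x, y))            # AND: invert the NAND output
def OR_n(x, y):  return NAND(NOT_n(x), NOT_n(y))     # OR: DeMorgan with NANDs

# Verify each derived gate against the direct Boolean operation.
for x in (0, 1):
    for y in (0, 1):
        assert AND_n(x, y) == (x & y)
        assert OR_n(x, y) == (x | y)
    assert NOT_n(x) == 1 - x
print("AND, OR, NOT from NAND alone: verified")
```

Because every derived gate checks out against the direct operation, this demonstrates in miniature why NAND by itself is functionally complete.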

Figure 1.2 Some Uses of NAND Gates


Figure 1.3 Some Uses of NOR Gates
A logic gate is an electronic device that makes logical decisions based on the different
combinations of digital signals available on its inputs. A digital logic gate can have more than
one input signal but has only one digital output signal.
AND Gate
The AND gate is a type of digital logic gate whose output is normally at logic level “0”
and goes “HIGH” to logic level “1” only when all of its inputs are at logic level “1”. The output of
the AND gate returns to “LOW” when any of its inputs is at logic level “0”.

A) Truth Table B) Logic Diagram

Figure 1.4 Truth table and logic diagram for F = x + y’z.

Figure 1.5 Two logical diagrams for the same Boolean function.

1.2. Combinational Circuit
A combinational circuit is an interconnected set of gates whose output at any time is a function
of the input at that time. As with a single gate, the appearance of the input is followed almost
immediately by the appearance of the output, with only gate delays. In general terms, a
combinational circuit consists of n binary inputs and m binary outputs. As with a gate, a
combinational circuit can be defined in three ways:
 Truth table: For each of the 2^n possible combinations of input signals, the binary value of
each of the m output signals is listed.
 Graphical symbols: The interconnected layout of gates is depicted.
 Boolean equations: Each output signal is expressed as a Boolean function of its input
signals.
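The three descriptions are equivalent: starting from a Boolean equation, the truth table can be generated by enumerating all 2^n input combinations. A Python sketch using the function F = x + y'z from Figure 1.4 (the variable names follow that figure; the enumeration code is illustrative):

```python
from itertools import product

def F(x, y, z):
    # Boolean equation: F = x OR (NOT y AND z)
    return x | ((1 - y) & z)

# Truth table: all 2**3 = 8 input rows with the output column.
table = [((x, y, z), F(x, y, z)) for x, y, z in product((0, 1), repeat=3)]
for inputs, out in table:
    print(inputs, "->", out)
```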
1.3. Sequential Circuit
Combinational circuits implement the essential functions of a digital computer. However, except
for the special case of ROM, they provide no memory or state information, elements also
essential to the operation of a digital computer. For the latter purposes, a more complex form of
digital logic circuit is used: the sequential circuit.
The current output of a sequential circuit depends not only on the current input, but also on the
past history of inputs. Another and generally more useful way to view it is that the current output
of a sequential circuit depends on the current input and the current state of that circuit.
1.3.1. Flip flops
The simplest form of sequential circuit is the flip-flop. A flip-flop is a memory element for which
the output is equal to the value of the stored state inside the element and for which the internal
state is changed only on a clock edge. There are a variety of flip-flops, all of which share two
properties:
 The flip-flop is a bi-stable device. It exists in one of two states and, in the absence of
input, remains in that state. Thus, the flip-flop can function as a 1-bit memory.
 The flip-flop has two outputs, which are always the complements of each other. These
are generally labeled Q and Q′.
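These two properties, plus the clock-edge rule, can be modeled with a small Python class for a D flip-flop. This is a behavioral sketch for illustration, not circuit-level detail from the chapter:

```python
# Behavioral sketch of a clocked D flip-flop: the stored state changes
# only on a rising clock edge, and the two outputs are complements.
class DFlipFlop:
    def __init__(self):
        self.q = 0       # stored state (one bit)
        self._clk = 0    # last clock level seen

    def tick(self, clk, d):
        # Capture input D only on the rising edge (0 -> 1) of the clock.
        if self._clk == 0 and clk == 1:
            self.q = d
        self._clk = clk
        return self.q, 1 - self.q   # outputs Q and its complement

ff = DFlipFlop()
print(ff.tick(1, 1))   # rising edge: D = 1 is stored
print(ff.tick(1, 0))   # no edge: state is unchanged
```

Because the state updates only on the clock edge, the input D can change freely between edges without disturbing the stored bit, which is what makes the flip-flop usable as a 1-bit memory.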
Registers: A register is a digital circuit used within the CPU to store one or more bits of data.
Two basic types of registers are commonly used: parallel registers and shift registers.
 Parallel Registers: A parallel register consists of a set of 1-bit memories that can be
read or written simultaneously. It is used to store data.

 Shift Register: A shift register accepts and/or transfers information serially. Shift
registers can be used to interface to serial I/O devices. In addition, they can be used
within the ALU to perform logical shift and rotate functions.
 Another useful category of sequential circuit is the counter. A counter is a register whose
value is easily incremented by 1 modulo the capacity of the register; that is, after the
maximum value is achieved the next increment sets the counter value to 0. Thus, a
register made up of n flip-flops can count up to 2^n - 1.
 An example of a counter in the CPU is the program counter. Counters can be designated
as asynchronous or synchronous, depending on the way in which they operate.

Asynchronous counters are relatively slow because the output of one flip-flop triggers a change
in the status of the next flip-flop. In a synchronous counter, all of the flip-flops change state at
the same time. Because the latter type is much faster, it is the kind used in CPUs. An
asynchronous counter is also referred to as a ripple counter, because the change that occurs to
increment the counter starts at one end and “ripples” through to the other end.

The ripple counter has the disadvantage of the delay involved in changing value, which is
proportional to the length of the counter. To overcome this disadvantage, CPUs make use of
synchronous counters, in which all of the flip-flops of the counter change at the same time.
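The modulo-wrapping behavior of a counter can be sketched at the behavioral level in Python. This models the counting rule only, not the flip-flop timing that distinguishes ripple from synchronous designs; the class name is illustrative:

```python
# Behavioral sketch of an n-bit counter: increments modulo 2**n on each
# clock pulse, wrapping to 0 after the maximum value 2**n - 1.
class Counter:
    def __init__(self, n_bits):
        self.modulus = 2 ** n_bits
        self.value = 0

    def clock(self):
        self.value = (self.value + 1) % self.modulus
        return self.value

# A 3-bit counter counts 1..7 and then wraps to 0.
c = Counter(3)
seq = [c.clock() for _ in range(9)]
print(seq)   # -> [1, 2, 3, 4, 5, 6, 7, 0, 1]
```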
