Computer Organization & Architecture As Core Competency

This document surveys digital circuits and computer organization. It begins with logic gates, Boolean algebra, and basic digital building blocks such as flip-flops and counters; moves on to number systems, data types, and coding techniques; covers common digital components such as integrated circuits, memory units, and registers; and concludes with register transfer language, computer instructions, and timing and control in basic computer design.


1. Introduction
Logic gates and Boolean algebra: Logic gates are basic electronic components used to implement
Boolean functions, which are mathematical expressions that describe the behavior of digital
circuits. Boolean algebra is a branch of algebra that deals with variables that have only two possible
values, typically represented as 0 and 1, and the logical operations that can be performed on these
variables, such as AND, OR, NOT, and XOR. Logic gates are physical implementations of these
operations, and they can be combined in various ways to implement more complex circuits.
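As a rough illustration, here is a minimal C sketch that models the four basic operations on single-bit values using C's bitwise operators and prints their truth tables. The function names are simply the ordinary Boolean operations, not any particular hardware library.

```c
#include <stdio.h>

/* Basic logic gates modeled on single-bit values (0 or 1)
   using C's bitwise operators. */
static int AND(int a, int b) { return a & b; }
static int OR (int a, int b) { return a | b; }
static int NOT(int a)        { return a ^ 1; }   /* invert a single bit */
static int XOR(int a, int b) { return a ^ b; }

int main(void) {
    /* Print the truth table for each two-input gate. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d XOR=%d NOT(a)=%d\n",
                   a, b, AND(a, b), OR(a, b), XOR(a, b), NOT(a));
    return 0;
}
```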
Combinational circuit: A combinational circuit is a type of digital circuit in which the output
depends only on the current input values and not on any previous inputs or the circuit's state.
Combinational circuits are constructed using logic gates and can perform functions such as
addition, subtraction, multiplication, and division. Examples of combinational circuits include
adders, subtractors, multiplexers, and demultiplexers.
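As a hedged sketch of one such circuit, the C program below models a one-bit full adder expressed in gate terms and chains four of them into a 4-bit ripple-carry adder; the operand values are invented for the example.

```c
#include <stdio.h>

/* One-bit full adder: a classic combinational circuit.
   sum  = a XOR b XOR cin
   cout = (a AND b) OR (cin AND (a XOR b)) */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    int p = a ^ b;               /* "propagate" signal */
    *sum  = p ^ cin;
    *cout = (a & b) | (cin & p);
}

int main(void) {
    /* Chain four full adders into a 4-bit ripple-carry adder. */
    int x = 11, y = 6;           /* 1011 and 0110 in binary */
    int carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    /* 11 + 6 = 17 = 1*16 + 1, so the 4-bit sum is 1 with carry-out 1. */
    printf("1011 + 0110 -> sum %d, carry-out %d\n", result, carry);
    return 0;
}
```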
Flip-flops: A flip-flop is a type of electronic circuit that can store a single bit of information (either
0 or 1) and can be used to implement memory elements in digital circuits. There are several types
of flip-flops, such as SR flip-flops, D flip-flops, JK flip-flops, and T flip-flops, each with its own
unique characteristics and applications. Flip-flops are essential components of sequential circuits,
which are digital circuits that have a memory element and whose output depends on both the
current input values and the circuit's state.
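Below is a minimal behavioral model of a positive-edge-triggered D flip-flop in C, assuming an explicit clock value supplied by the caller; it simulates the behavior rather than describing hardware.

```c
#include <stdio.h>

/* Behavioral model of a positive-edge-triggered D flip-flop:
   the stored bit q takes the value of input d only on a 0->1
   transition of the clock. */
typedef struct { int q; int prev_clk; } dff_t;

static void dff_tick(dff_t *ff, int clk, int d) {
    if (ff->prev_clk == 0 && clk == 1)   /* rising edge detected */
        ff->q = d;
    ff->prev_clk = clk;
}

int main(void) {
    dff_t ff = { .q = 0, .prev_clk = 0 };
    int d_values[] = { 1, 1, 0, 0, 1, 1 };
    for (int i = 0; i < 6; i++) {
        int clk = i % 2;                 /* clock alternates 0,1,0,1,... */
        dff_tick(&ff, clk, d_values[i]);
        printf("t=%d clk=%d d=%d q=%d\n", i, clk, d_values[i], ff.q);
    }
    return 0;
}
```

Note that q changes only at times 1, 3, and 5, the rising clock edges; between edges the flip-flop holds its stored bit, which is exactly the memory property described above.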
Sequential circuit: A sequential circuit is a type of digital circuit in which the output depends not
only on the current input values but also on the circuit's state, which is determined by its previous
inputs and the values stored in its memory elements. Sequential circuits are constructed using flip-
flops and combinational circuits, and they can perform functions such as counting, timing, and
data storage. Examples of sequential circuits include counters, registers, and shift registers.
Sequential circuits are used in a wide range of applications, including computer memory,
communication systems, and control systems.
2. Number system and codes
Number systems: A number system is a way of representing numerical values using a set of symbols or digits. The most commonly used number systems are decimal (base 10), binary (base 2), octal (base 8), and hexadecimal (base 16).
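A small C example printing the same value in all four bases; the binary output is built by hand, since standard printf has no binary conversion.

```c
#include <stdio.h>

/* Print one value in the four common bases. */
static void print_binary(unsigned v) {
    for (int i = 7; i >= 0; i--)           /* 8 bits, MSB first */
        putchar((v >> i) & 1 ? '1' : '0');
}

int main(void) {
    unsigned n = 156;
    printf("decimal: %u\n", n);
    printf("binary : "); print_binary(n); putchar('\n'); /* 10011100 */
    printf("octal  : %o\n", n);                          /* 234 */
    printf("hex    : %X\n", n);                          /* 9C */
    return 0;
}
```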
Data types: In computing, data types define the kind of data that can be stored in a variable or memory location. The most common data types include integers (whole numbers), floating-point numbers (numbers with fractional parts), characters (letters and symbols), and Boolean (true/false) values. Different data types require different amounts of memory, and they differ in range and precision.
Complements: Complements are techniques used to represent negative numbers in digital circuits. The two most commonly used are the one's complement and the two's complement. The one's complement is obtained by flipping all the bits of a number, while the two's complement is obtained by adding one to the one's complement. Complements simplify arithmetic because subtraction can then be performed as addition.
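A short C sketch demonstrating both complements on an 8-bit value, and how adding the two's complement performs subtraction modulo 256; the specific numbers are arbitrary.

```c
#include <stdio.h>

/* One's and two's complement of an 8-bit value. */
int main(void) {
    unsigned char x = 5;                 /* 0000 0101 */
    unsigned char ones = ~x;             /* 1111 1010  (one's complement, 250) */
    unsigned char twos = ~x + 1;         /* 1111 1011  (two's complement, 251) */
    printf("x=%d ones=%d twos=%d\n", x, ones, twos);

    /* In two's-complement arithmetic, adding the complement subtracts:
       13 - 5 == 13 + 251 (mod 256) == 8. */
    unsigned char diff = (unsigned char)(13 + twos);
    printf("13 - 5 = %d\n", diff);
    return 0;
}
```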
Fixed and floating-point representation: Fixed-point representation stores numbers with a fixed number of digits after the radix (binary) point. This is useful when the precision required is fixed and the range of values is known in advance. Floating-point representation, on the other hand, represents numbers with varying precision over a wide range of magnitudes. A floating-point number is stored as a sign, a mantissa (significand), and an exponent, which allows very large and very small values to be represented.
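The sketch below pulls apart the sign, exponent, and mantissa fields of a single-precision value, assuming the common case that C's float is an IEEE 754 binary32 value.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Decompose a single-precision float into its sign, exponent, and
   mantissa fields. Assumes 'float' is IEEE 754 binary32. */
int main(void) {
    float f = -6.25f;                    /* -1.5625 * 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret the bits safely */

    unsigned sign     = bits >> 31;           /* 1 bit  */
    unsigned exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
    unsigned mantissa = bits & 0x7FFFFF;      /* 23 bits */

    printf("value    = %f\n", f);
    printf("sign     = %u\n", sign);                  /* 1 (negative) */
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127);
    printf("mantissa = 0x%06X\n", mantissa);
    return 0;
}
```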
Codes: In computing, codes are used to represent characters, symbols, and other data. The most commonly used codes include ASCII (American Standard Code for Information Interchange), Unicode, and BCD (binary-coded decimal). ASCII is a 7-bit code for letters, digits, and symbols, while Unicode assigns a unique code point to characters from virtually every writing system and is stored using encodings such as UTF-8 and UTF-16. BCD represents each decimal digit with its own group of binary bits. Other codes include Gray code, error-correcting codes, and Huffman codes.
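As an example of one such code, here is a minimal C implementation of binary-to-Gray conversion and its inverse; in Gray code, consecutive values differ in exactly one bit, which is useful in encoders and counters.

```c
#include <stdio.h>

/* Binary <-> Gray code conversion. */
static unsigned to_gray(unsigned n)   { return n ^ (n >> 1); }

static unsigned from_gray(unsigned g) {
    unsigned n = 0;
    for (; g; g >>= 1)
        n ^= g;            /* XOR of g with all its right-shifts */
    return n;
}

int main(void) {
    for (unsigned i = 0; i < 8; i++)
        printf("%u -> gray %u -> back %u\n",
               i, to_gray(i), from_gray(to_gray(i)));
    return 0;
}
```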
3. Common Digital components
Integrated circuit: An integrated circuit (IC) is a miniature electronic circuit that contains many interconnected transistors, capacitors, resistors, and other components on a single chip of semiconductor material. ICs implement a wide range of functions, including logic gates, amplifiers, oscillators, and memory. They are ubiquitous in modern electronics and are available in many different types and sizes.
Binary counter: A binary counter is a digital circuit that can count in binary (base 2) from 0 to a
maximum value determined by the number of bits used in the counter. Binary counters can be
synchronous or asynchronous, and they can be used for various applications, such as frequency
division, timing, and sequencing.
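A behavioral C model of a 3-bit binary counter that wraps from 7 back to 0; the bit width and number of pulses are arbitrary choices for the example.

```c
#include <stdio.h>

/* Behavioral model of a 3-bit binary counter: each clock pulse
   advances the count, wrapping from 7 back to 0. */
int main(void) {
    const unsigned BITS = 3;
    const unsigned MAX  = (1u << BITS) - 1;   /* 7 for 3 bits */
    unsigned count = 0;

    for (int pulse = 0; pulse < 10; pulse++) {
        printf("pulse %2d: count = %u%u%u (%u)\n", pulse,
               (count >> 2) & 1, (count >> 1) & 1, count & 1, count);
        count = (count + 1) & MAX;            /* increment modulo 2^BITS */
    }
    return 0;
}
```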
Memory units: Memory units are digital circuits that can store and retrieve digital data. There are
several types of memory units, including random-access memory (RAM), read-only memory
(ROM), and flash memory. RAM is used for temporary data storage, while ROM is used for
permanent data storage. Flash memory is used for non-volatile data storage and can be erased and
reprogrammed.
Decoder: A decoder is a digital circuit that converts an n-bit binary code into the activation of one of up to 2^n output lines. Decoders are used in many applications, such as address decoding, data demultiplexing, and control signal generation.
Multiplexer: A multiplexer is a digital circuit that can select one of several input signals and route
it to a single output. Multiplexers are used for data selection, signal routing, and communication
systems.
Registers: A register is a digital circuit that can store a fixed number of binary digits and perform
various operations on them, such as shifting, loading, and counting. Registers are used for data
storage, data manipulation, and timing control in digital systems. Common types of registers
include shift registers, counter registers, and data registers.
4. Register Transfer Language and Micro Operations
Register Transfer Language and Micro Operations: Register transfer language (RTL) is a
symbolic language used to describe the flow of data between registers in a digital system. RTL is
used to design digital circuits and systems, and it is often used in conjunction with hardware
description languages (HDLs) such as Verilog and VHDL.
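As a loose illustration (not actual RTL syntax), the C sketch below mimics how RTL statements such as "T1: R2 <- R1" take effect only when their control condition holds; the register names and control signals are illustrative, not taken from any real machine.

```c
#include <stdio.h>

/* Register-transfer statements simulated at the C level.
   Each guarded assignment mirrors one RTL statement. */
int main(void) {
    unsigned R1 = 25, R2 = 0, R3 = 7;
    int T1 = 1, T2 = 1;          /* control signals (timing conditions) */

    if (T1) R2 = R1;             /* T1: R2 <- R1      (register transfer) */
    if (T2) R3 = R3 + 1;         /* T2: R3 <- R3 + 1  (increment)         */

    printf("R1=%u R2=%u R3=%u\n", R1, R2, R3);   /* R1=25 R2=25 R3=8 */
    return 0;
}
```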
Bus and memory transfer: A bus is a group of wires used to transfer data between components
in a digital system. Buses can be used for both memory transfer and input/output (I/O) operations.
Memory transfer refers to the process of reading data from or writing data to memory, while I/O
operations refer to the process of transferring data between a computer and external devices.
Arithmetic and logic operations: Arithmetic and logic operations are fundamental operations in
digital systems. Arithmetic operations include addition, subtraction, multiplication, and division,
while logic operations include AND, OR, NOT, and XOR. These operations can be performed
using digital circuits such as adders, multipliers, and logic gates.
Shift micro operations: Shift micro operations are digital operations that move the contents of a
register one or more positions to the left or right. Shift operations can be used to perform
multiplication and division by powers of two, as well as to move data in and out of registers.
Common types of shift operations include shift left, shift right, rotate left, and rotate right. Shift
operations can be performed using shift registers or combinational circuits.
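A short C example of the four shift micro-operations applied to a hypothetical 8-bit register value:

```c
#include <stdio.h>

/* The four common shift micro-operations on an 8-bit register. */
int main(void) {
    unsigned char r = 0x95;   /* 1001 0101 */

    unsigned char shl = (unsigned char)(r << 1);               /* logical shift left  */
    unsigned char shr = r >> 1;                                /* logical shift right */
    unsigned char rol = (unsigned char)((r << 1) | (r >> 7));  /* rotate left  */
    unsigned char ror = (unsigned char)((r >> 1) | (r << 7));  /* rotate right */

    printf("r   = %02X\n", r);     /* 95 */
    printf("shl = %02X\n", shl);   /* 2A */
    printf("shr = %02X\n", shr);   /* 4A */
    printf("rol = %02X\n", rol);   /* 2B */
    printf("ror = %02X\n", ror);   /* CA */
    return 0;
}
```

Note how the logical shift left doubles the value modulo 256, and the logical shift right halves it, which is the multiplication/division-by-powers-of-two property mentioned above.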
5. Basic Computer Organization and Design
Instruction code: An instruction code is a group of bits that tells the computer to perform a specific operation. These binary codes are read and decoded by the processor, and together they determine the set of operations a computer can perform, including arithmetic, logical, and memory operations.
Computer Register: A computer register is a high-speed memory unit used to store data and
instructions that are currently being used by the computer's processor. Registers are used to store
operands for arithmetic and logical operations, as well as to store memory addresses for memory
access operations. Registers are an essential component of a computer's architecture, and their size
and number determine the computer's performance and capabilities.
Computer Instructions: Computer instructions are commands that a computer can execute.
Instructions are represented by binary codes that are stored in memory and read by the computer's
processor. Instructions can perform various operations, such as arithmetic, logical, memory access,
and control flow operations.
Timing and control: Timing and control refers to the process of synchronizing the various
components of a computer's architecture to ensure that instructions are executed correctly and in
the correct order. Timing and control circuits generate timing signals that control the flow of data
and instructions between different components of the computer, such as the processor, memory,
and input/output devices.
Memory reference instructions: Memory reference instructions are instructions that access
memory to read or write data. Memory reference instructions can be used to load data into registers,
store data from registers into memory, or transfer data between different memory locations.
Design of Basic computers: The design of basic computers involves the selection of components
and the organization of those components to create a functional computer. A basic computer
typically includes a processor, memory, input/output devices, and control and timing circuits. The
design of basic computers can vary depending on the intended use of the computer and the
available resources.
Design of accumulator logic: Accumulator logic comprises the circuits that perform arithmetic and logical operations using an accumulator register, a special register that holds the result of each operation. Designing it involves selecting the circuits and components needed to implement the desired operations, such as addition, subtraction, and the logical operations.
6. Central processing Unit
General register organization: The general register organization of a CPU refers to the
arrangement of the processor registers that are used for temporary storage of data during the
execution of instructions. General-purpose registers are registers that can be used for a variety of
purposes, such as holding operands for arithmetic operations, storing addresses for memory access,
or holding intermediate results. The number and size of the general-purpose registers vary
depending on the architecture of the CPU.
Stack organization: Stack organization refers to the way in which a CPU uses a stack to manage
the flow of data and instructions during the execution of a program. The stack is a special area of
memory that is used for temporary storage of data, such as return addresses, function parameters,
and local variables. The stack is organized as a last-in, first-out (LIFO) data structure, which means
that the most recently pushed item is the first to be popped off the stack.
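A minimal array-based LIFO stack in C, sketching the push and pop behavior described above; a real hardware stack manipulates a stack-pointer register and memory, but the last-in, first-out discipline is the same.

```c
#include <stdio.h>

/* A minimal LIFO stack: push stores at the stack pointer and advances
   it; pop does the reverse. Here the stack grows upward in an array. */
#define STACK_SIZE 8

static unsigned stack[STACK_SIZE];
static int sp = 0;                    /* stack pointer: next free slot */

static void push(unsigned v) {
    if (sp < STACK_SIZE) stack[sp++] = v;
}

static unsigned pop(void) {
    return sp > 0 ? stack[--sp] : 0;  /* return 0 on underflow, for simplicity */
}

int main(void) {
    push(10); push(20); push(30);
    unsigned a = pop(), b = pop(), c = pop();
    printf("%u %u %u\n", a, b, c);    /* 30 20 10: last in, first out */
    return 0;
}
```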
Instruction formats: Instruction formats are the way in which instructions are encoded in binary
format to be executed by the CPU. The instruction format includes fields that specify the operation
to be performed, the operands to be used, and the addressing mode to be used for memory access.
Addressing modes: Addressing modes are the methods by which the CPU determines the effective address of an instruction's operands in memory. They include direct addressing, immediate addressing, indirect addressing, indexed addressing, and relative addressing. Each addressing mode has its own advantages and disadvantages, depending on the specific application.
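The hedged sketch below shows how several of these modes would resolve an operand; the memory contents and register values are invented purely for illustration.

```c
#include <stdio.h>

/* How different addressing modes resolve an operand. */
int main(void) {
    unsigned mem[16] = {0};
    mem[5] = 9;          /* mem[5] holds an address ...        */
    mem[9] = 42;         /* ... and mem[9] holds the data      */
    unsigned index_reg = 4;

    unsigned imm      = 5;                   /* immediate: operand is in the instruction      */
    unsigned direct   = mem[5];              /* direct:    address 5 names the operand        */
    unsigned indirect = mem[mem[5]];         /* indirect:  mem[5] holds the operand's address */
    unsigned indexed  = mem[5 + index_reg];  /* indexed:   base 5 plus the index register     */

    printf("immediate=%u direct=%u indirect=%u indexed=%u\n",
           imm, direct, indirect, indexed);  /* 5 9 42 42 */
    return 0;
}
```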
Data transfer and manipulation: Data transfer and manipulation refers to the way in which the
CPU moves data between registers and memory and performs arithmetic and logical operations on
that data. The CPU uses instructions to move data from one location to another, perform arithmetic
and logical operations on the data, and store the result in a register or memory.
Program control: Program control refers to the way in which the CPU executes program
instructions in a specific sequence to perform a task. The CPU uses instructions that control the
flow of the program, such as jump and branch instructions, to execute the program in the desired
order.
Characteristics of RISC and CISC: RISC (Reduced Instruction Set Computing) and CISC
(Complex Instruction Set Computing) are two different approaches to CPU design. RISC CPUs
have a small set of simple and fast instructions that can be executed quickly, while CISC CPUs
have a larger set of complex instructions that can perform multiple operations in a single
instruction. RISC designs have traditionally dominated embedded systems and mobile devices, while CISC designs such as x86 have dominated desktop computers and servers, although the distinction has blurred in modern processors.
7. Memory Organization
Memory Hierarchy: Memory hierarchy refers to the arrangement of different types of memory
devices in a computer system, organized according to their speed, capacity, and cost. The memory
hierarchy typically includes registers, cache memory, main memory, and secondary storage.
Main memory: Main memory is the primary storage location in a computer system where data
and program instructions are stored for quick access by the CPU. Main memory is typically made
up of volatile memory, such as dynamic random-access memory (DRAM), which requires constant
refreshing to maintain its contents.
Cache memory: Cache memory is a type of high-speed memory that is used to store frequently
accessed data and instructions. Cache memory is located between the CPU and main memory, and
it operates on the principle of locality of reference, which states that programs tend to access a
relatively small portion of their memory at any given time.
Mapping functions: Mapping functions determine where a block of main memory may be placed in cache memory. There are three main types: direct mapping, associative mapping, and set-associative mapping. Direct mapping assigns each block of main memory to exactly one cache line, while associative mapping allows any block of main memory to be stored in any cache line. Set-associative mapping is a compromise between the two, in which each block of main memory maps to a small set of cache lines.
External memory: External memory refers to storage devices outside the processor and main memory that provide long-term storage of data and program files. External memory includes magnetic disks, RAID arrays, optical disks, and magnetic tapes.
Magnetic disks: Magnetic disks are the most common type of secondary storage device used in
computer systems. Magnetic disks store data on a spinning disk coated with a magnetic material,
and they provide fast access to large amounts of data.
RAID technology: RAID (redundant array of independent disks) is a technology that uses multiple
disks to provide improved data reliability, performance, and storage capacity. RAID can be
configured in different levels, such as RAID 0, RAID 1, RAID 5, and RAID 6, depending on the
specific requirements of the system.
Optical disks: Optical disks use a laser to read and write data on a plastic or glass disc coated with
a reflective material. Optical disks include CD-ROM, DVD-ROM, and Blu-ray discs, and they are
used for the distribution of software, music, and video content.
Magnetic tapes: Magnetic tapes are a type of sequential access storage device that uses a magnetic
tape to store data. Magnetic tapes are typically used for backup and archival purposes, as they
provide low-cost, high-capacity storage.
8. Input-Output Organization
Peripheral devices: Peripheral devices are external devices that are connected to a computer
system and used to input or output data. Examples of peripheral devices include keyboards, mice,
printers, scanners, and displays.
Input-output interface: The input-output interface is the part of the computer system that
connects the CPU to peripheral devices. The interface includes hardware components, such as
input-output ports, and software components, such as device drivers.
Asynchronous data transfer: Asynchronous data transfer is a method of data transfer where the
data is transmitted without a common clock signal. Each data bit is accompanied by a start and
stop bit, which synchronizes the communication between the sender and receiver.
Mode of transfer: The mode of transfer is the method by which data is moved between the CPU and peripheral devices. The main modes are programmed I/O, interrupt-driven I/O, and direct memory access (described below). Programmed I/O requires the CPU to actively transfer each item of data to or from a peripheral device, while interrupt-driven I/O lets the peripheral device interrupt the CPU when it is ready, freeing the CPU to do other work between transfers.
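A toy C sketch of programmed I/O: the CPU busy-waits on a status flag and then moves the byte itself. The "device registers" here are ordinary variables standing in for memory-mapped hardware, since real register addresses are platform-specific.

```c
#include <stdio.h>

/* Programmed I/O in miniature: poll a status flag, then transfer
   one byte. Hypothetical registers modeled as volatile variables. */
static volatile int device_ready = 0;            /* hypothetical status register */
static volatile unsigned char device_data = 0;   /* hypothetical data register   */

static void fake_device(void) {   /* stands in for real hardware */
    device_data = 'A';
    device_ready = 1;
}

int main(void) {
    fake_device();
    while (!device_ready)         /* busy-wait: the CPU does nothing else */
        ;
    unsigned char byte = device_data;   /* the CPU itself moves the data */
    printf("received: %c\n", byte);
    return 0;
}
```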
Priority interrupts: Priority interrupts are a mechanism used to manage multiple interrupt
requests from peripheral devices. Each interrupt request is assigned a priority level, and the CPU
services the interrupt requests in order of priority.
Direct memory access (DMA): Direct memory access (DMA) is a technique used to improve the
efficiency of data transfer between peripheral devices and main memory. DMA allows the
peripheral device to transfer data directly to or from main memory without the intervention of the
CPU.
Input-Output Controller (IOC): The Input-Output Controller (IOC) is a hardware component
that manages data transfer between peripheral devices and the CPU. The IOC is responsible for
controlling the flow of data, buffering the data, and handling interrupt requests from peripheral
devices.
Serial communication: Serial communication is a method of data transfer where the data is
transmitted one bit at a time over a single communication channel. Serial communication is
commonly used for long-distance communication and for connecting multiple devices to a single
communication channel. Examples of serial communication protocols include RS-232, USB, and
Ethernet.
9. Pipeline and Vector Processing
Pipeline: A pipeline is a technique used in computer architecture to increase the instruction throughput of the CPU. The processing of an instruction is divided into a sequence of stages, and the stages are connected so that the output of one stage becomes the input of the next. While one instruction occupies one stage, other instructions occupy the remaining stages, so several instructions are processed in parallel.
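Under the idealized assumption of one cycle per stage and no stalls, a k-stage pipeline completes n instructions in k + (n - 1) cycles instead of the k * n cycles a non-pipelined design would need. The small C program below works that arithmetic through for k = 5 and n = 100.

```c
#include <stdio.h>

/* Ideal pipeline timing: k stages, n instructions, one cycle per
   stage, no stalls. */
int main(void) {
    unsigned k = 5, n = 100;
    unsigned pipelined   = k + (n - 1);   /* first result after k cycles,
                                             then one per cycle */
    unsigned unpipelined = k * n;

    printf("unpipelined: %u cycles\n", unpipelined);   /* 500 */
    printf("pipelined  : %u cycles\n", pipelined);     /* 104 */
    printf("speedup    : %.2f\n",
           (double)unpipelined / pipelined);           /* ~4.81 */
    return 0;
}
```

For large n the speedup approaches k, which is why deeper pipelines raise throughput as long as the stages stay busy.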
Parallel Processing: Parallel processing is a technique used to divide a large computational task
into smaller sub-tasks that can be executed in parallel. Parallel processing can be achieved using
multiple processors or cores within a single CPU, or by connecting multiple computers in a
network.
Arithmetic Pipeline: An arithmetic pipeline is a pipeline that processes arithmetic operations,
such as addition, subtraction, multiplication, and division. In an arithmetic pipeline, each stage of
the pipeline processes a different part of the arithmetic operation.
Instruction Pipeline: An instruction pipeline is a pipeline that processes machine instructions. In
an instruction pipeline, each stage of the pipeline processes a different part of the instruction, such
as decoding the instruction, fetching data from memory, and executing the instruction.
Vector Processing: Vector processing is a technique used to perform operations on vectors, or
arrays of data, in parallel. Vector processing can be achieved using specialized hardware, such as
vector processors or graphics processing units (GPUs). Vector processing is commonly used in
applications such as image processing, scientific computing, and video games.
Array Processing: Array processing is a technique used to perform operations on arrays of data
in parallel. Array processing can be achieved using vector processors or by dividing the array into
smaller sub-arrays that can be processed in parallel. Array processing is commonly used in
applications such as signal processing, digital image processing, and neural networks.
10. Multiprocessors
Multiprocessors: A multiprocessor is a computer system that uses two or more processors to share
the workload of a single computer program. Multiprocessors can be classified into two categories:
shared memory systems and distributed memory systems.
Shared Memory Systems: In a shared memory system, all processors share a common memory.
Each processor can access any location in memory and can communicate with other processors
using shared variables. Shared memory systems are easier to program but can suffer from
contention for shared resources, such as the memory bus.
Distributed Memory Systems: In a distributed memory system, each processor has its own local
memory, and communication between processors is achieved by passing messages. Distributed
memory systems are more difficult to program but can scale to larger systems and do not suffer
from contention for shared resources.
Interconnection Structures for Multiprocessor: Interconnection structures for multiprocessors
are used to connect processors and memory in a multiprocessor system. The most common
interconnection structures are buses, crossbar switches, and mesh networks.
Buses: Buses are a simple and common interconnection structure for multiprocessors. In a bus-
based system, all processors and memory modules are connected to a shared bus. Data is
transferred between processors and memory modules by passing through the bus.
Crossbar Switches: Crossbar switches are more complex interconnection structures than buses.
In a crossbar-based system, each processor and memory module is connected to a switch, which
routes data between them. Crossbar switches can provide higher bandwidth than buses but can be
more expensive to implement.
Mesh Networks: Mesh networks are a type of interconnection structure that connect processors
and memory modules in a grid-like pattern. Each processor and memory module is connected to
its nearest neighbors in the grid. Mesh networks can provide scalable and fault-tolerant
interconnection for multiprocessor systems.
Inter Processor Communication and Synchronization: Inter-processor communication and
synchronization are critical for the proper functioning of a multiprocessor system. Communication
between processors can be achieved using message passing or shared memory. Synchronization
can be achieved using locks, semaphores, and other mechanisms to ensure that multiple processors
do not access the same resource at the same time. Synchronization is necessary to prevent race
conditions and ensure consistency of shared data.
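To make the synchronization point concrete, here is a minimal POSIX-threads sketch in C in which a mutex protects a shared counter from a race condition; it assumes a POSIX system and compilation with -pthread.

```c
#include <stdio.h>
#include <pthread.h>

/* Two threads incrementing a shared counter. The mutex makes each
   read-modify-write atomic; without it, updates could be lost. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the lock */
    return 0;
}
```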
