CompArchitecture Suggestion Semester

The document covers various aspects of computer architecture, focusing on data representation, computer arithmetic, and register transfer operations. It explains number systems (decimal, binary, octal, hexadecimal), methods for representing negative numbers (1's and 2's complement), and arithmetic operations using fixed and floating point representations. Additionally, it discusses algorithms for addition, subtraction, multiplication, and division in binary, as well as the implementation of register transfers and micro-operations in computer systems.


COMPUTER ARCHITECTURE

M1: Data Representation

1. Explain the differences between decimal, binary, octal, and hexadecimal number systems.
Decimal: Base-10 system, uses digits 0-9. Most common system for everyday counting.
Binary: Base-2 system, uses digits 0 and 1. Essential for digital electronics and computing.
Octal: Base-8 system, uses digits 0-7. Simplifies representation of binary numbers (each octal
digit represents three binary digits).
Hexadecimal: Base-16 system, uses digits 0-9 and letters A-F (where A=10, B=11, ..., F=15).
Compact representation of binary numbers (each hex digit represents four binary digits).
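As a quick check on these relationships, Python's built-in formatting can convert a value between the four bases (the function name here is our own, purely illustrative):

```python
def show_bases(n):
    """Return the binary, octal, and hexadecimal strings for integer n."""
    return format(n, "b"), format(n, "o"), format(n, "X")

# Each octal digit groups 3 binary digits, each hex digit groups 4:
# 229 = 11100101 (binary) = 345 (octal) = E5 (hex)
print(show_bases(229))
```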

2. How are alphanumeric characters represented in binary form?


Alphanumeric characters are represented using character encoding schemes like ASCII
(American Standard Code for Information Interchange) or Unicode. Each character is assigned a
unique binary code. For example, in ASCII, the character 'A' is represented by 01000001 and 'a'
by 01100001.

3. Describe the 1’s complement and 2’s complement methods for representing negative numbers.
1’s Complement: Inverts all bits of the binary representation of a number (0 becomes 1, and 1
becomes 0). For example, the 1’s complement of 5 (00000101) is 11111010.
2’s Complement: Inverts all bits and adds 1 to the least significant bit (LSB). For example, the
2’s complement of 5 (00000101) is 11111011. This method simplifies arithmetic operations as it
allows using the same addition circuitry for both addition and subtraction.
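Both complement rules can be sketched directly with bit masking (8-bit width assumed; function names are ours):

```python
def ones_complement(n, bits=8):
    # Invert every bit within the given width.
    return ~n & ((1 << bits) - 1)

def twos_complement(n, bits=8):
    # Invert all bits, then add 1 (equivalently, 2**bits - n).
    return (ones_complement(n, bits) + 1) & ((1 << bits) - 1)

# 5 = 00000101 -> 1's complement 11111010, 2's complement 11111011
```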

4. What are 9’s complement and 10’s complement representations? Provide examples.
9’s Complement: Subtract each digit from 9. For example, the 9’s complement of 1234 is
8765.
10’s Complement: 9’s complement of the number plus 1. For example, the 10’s complement of
1234 is 8765 + 1 = 8766.

5. Explain the fixed point representation of integers and its applications.


Fixed point representation involves representing numbers with a fixed number of digits before
and after the decimal point. It’s used in systems where precision is critical and hardware
resources are limited, such as embedded systems, signal processing, and real-time computing.

6. How is arithmetic addition performed with fixed point integers? Illustrate with an example.
Addition is performed as with normal integers, ensuring the fixed point (decimal point) aligns
correctly. For example, adding 12.34 (fixed point) and 56.78:
```
12.34
+56.78
------
69.12
```

7. Describe the process of arithmetic subtraction using fixed point representation.


Subtraction is similar to addition, aligning the fixed points and performing the operation digit
by digit. For example, subtracting 34.56 from 78.90:
```
78.90
-34.56
------
44.34
```

8. What is overflow in the context of fixed point arithmetic? How is it detected?


Overflow occurs when a calculation produces a result that exceeds the range representable by
the fixed point format. It is detected when the carry into the most significant bit does not match
the carry out of the most significant bit, indicating an overflow condition.
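The carry-based detection rule can be sketched as follows (a and b are `bits`-wide stored bit patterns; the function name is ours):

```python
def add_with_overflow(a, b, bits=8):
    """Add two `bits`-wide 2's-complement bit patterns; flag overflow
    when the carry into the MSB differs from the carry out of it."""
    mask = (1 << bits) - 1
    low = (1 << (bits - 1)) - 1               # every bit below the MSB
    carry_in = ((a & low) + (b & low)) >> (bits - 1)
    raw = (a & mask) + (b & mask)
    carry_out = raw >> bits
    return raw & mask, carry_in != carry_out

# 100 + 100 overflows the 8-bit signed range (max +127)
```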

9. Explain decimal fixed point representation and its uses.


Similar to binary fixed point, but used in decimal form. It’s useful in financial and commercial
applications where decimal precision is crucial, such as currency calculations.

10. Describe floating point representation and its advantages.


Floating point representation allows a wide range of values by representing numbers in the
form sign × mantissa × 2^exponent. It provides greater
precision for very large or very small numbers and is used in scientific and engineering
calculations.

11. How is the IEEE 754 standard used for floating point representation?
IEEE 754 standard defines the format for floating point numbers, including single precision
(32-bit) and double precision (64-bit). It specifies the layout of the sign bit, exponent, and
significand (mantissa).

12. What are the components of an IEEE 754 floating point number?
Sign Bit: 1 bit indicating the sign (0 for positive, 1 for negative).
Exponent: Adjusted (biased) exponent to support both positive and negative exponents.
Mantissa: Fractional part of the number.

13. Explain the concept of bias in IEEE 754 floating point representation.
The exponent is stored with a bias to allow representation of both positive and negative
exponents. For single precision, the bias is 127. So, an exponent of 0 is stored as 127, and -1 is
stored as 126.
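The stored bias can be observed directly by unpacking the bits of a single-precision value with Python's standard struct module (the helper name is ours):

```python
import struct

def fields_single(x):
    """Split a float's IEEE 754 single-precision encoding into
    (sign, biased exponent, fraction) fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# 1.0 has true exponent 0, stored as 0 + 127 = 127:
print(fields_single(1.0))   # (0, 127, 0)
```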

14. How is rounding handled in IEEE 754 floating point arithmetic?


IEEE 754 standard specifies several rounding modes, including round to nearest (default),
round towards zero, round towards positive infinity, and round towards negative infinity. These
modes handle precision loss during calculations.

15. Compare and contrast fixed point and floating point representations.
Fixed Point: Simpler, faster, and uses less hardware, but limited range and precision. Suitable
for embedded systems and applications requiring predictable precision.
Floating Point: More complex and requires more hardware, but provides a wide range of
values and precision. Ideal for scientific, engineering, and general-purpose computing
applications.

M2: Computer Arithmetic

16. Describe the addition algorithm for sign magnitude numbers.


Steps:
1. Compare the signs of the two numbers.
2. If the signs are the same, add the magnitudes and keep the common sign.
3. If the signs are different, subtract the smaller magnitude from the larger magnitude and
keep the sign of the larger magnitude.
4. Adjust for any carry if necessary.

Example: Adding +9 (01001) and +5 (00101):


Both have the same sign, so add magnitudes: 01001 + 00101 = 01110 (14).

17. How is subtraction performed using sign magnitude numbers?


Steps:
1. Compare the signs of the two numbers.
2. If the signs are different, add the magnitudes and keep the sign of the first number.
3. If the signs are the same, subtract the smaller magnitude from the larger magnitude and
keep the sign of the larger magnitude.
4. Adjust for any borrow if necessary.
Example: Subtracting +5 (00101) from +9 (01001):
Both have the same sign, so subtract magnitudes: 01001 − 00101 = 00100 (4).

18. Explain the addition algorithm for signed 2’s complement numbers.
Steps:
1. Add the two binary numbers, including their sign bits.
2. Ignore any carry out from the most significant bit (MSB).
3. If the result is negative (sign bit is 1), it is already in 2’s complement form.

Example: Adding -3 (11101) and 5 (00101):


11101 + 00101 = 100010; ignore the carry out of the MSB, result is 00010 = 2.

19. Describe the subtraction algorithm for signed 2’s complement numbers.
Steps:
1. Take the 2’s complement of the number to be subtracted (invert all bits and add 1).
2. Add this result to the first number.
3. Ignore any carry out from the MSB.

Example: Subtracting 5 (00101) from -3 (11101):


2’s complement of 5: invert 00101 to get 11010, add 1 to get 11011.
Add to -3: 11101 + 11011 = 111000; ignore the carry out, result is 11000 = -8.
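The discard-the-carry rule can be checked with a short sketch over 5-bit patterns (helper names are ours):

```python
BITS = 5

def to_signed(pattern, bits=BITS):
    # Interpret a bit pattern as a 2's-complement value.
    return pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern

def sub2c(a, b, bits=BITS):
    """Subtract by adding the 2's complement of b; any carry out of
    the MSB is simply discarded."""
    mask = (1 << bits) - 1
    return (a + ((~b + 1) & mask)) & mask

# -3 - 5: 11101 + 11011 = (1)11000; dropping the carry leaves 11000 = -8
```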

20. What is Booth’s algorithm? Provide an example of its application.


Booth’s Algorithm:
A multiplication algorithm that multiplies two signed binary numbers in 2’s complement
form.
It reduces the number of additions by skipping over blocks of 1s in the multiplier.

Example:
Multiplying 3 (0011) by -4 (1100):
```
Booth's steps (M = 0011, A = 0000, Q = 1100, Q-1 = 0):
1. Q0 Q-1 = 00: arithmetic shift right. A = 0000, Q = 0110.
2. Q0 Q-1 = 00: arithmetic shift right. A = 0000, Q = 0011.
3. Q0 Q-1 = 10: A = A - M = 0000 - 0011 = 1101; shift. A = 1110, Q = 1001.
4. Q0 Q-1 = 11: arithmetic shift right. A = 1111, Q = 0100.
Result: AQ = 11110100 (binary for -12).
```
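The whole algorithm can be sketched in Python (4-bit operands by default; the register names mirror A, Q, and Q-1, but the code itself is our own illustration):

```python
def booth_multiply(m, q, bits=4):
    """Booth's algorithm for `bits`-wide 2's-complement operands."""
    mask = (1 << bits) - 1
    M = m & mask
    A, Q, q_1 = 0, q & mask, 0
    for _ in range(bits):
        pair = ((Q & 1) << 1) | q_1
        if pair == 0b01:                 # end of a block of 1s: A = A + M
            A = (A + M) & mask
        elif pair == 0b10:               # start of a block of 1s: A = A - M
            A = (A - M) & mask
        # Arithmetic shift right of the combined A, Q, Q-1 register pair.
        q_1 = Q & 1
        Q = (Q >> 1) | ((A & 1) << (bits - 1))
        A = (A >> 1) | (A & (1 << (bits - 1)))
    product = (A << bits) | Q
    if product & (1 << (2 * bits - 1)):  # interpret the result as signed
        product -= 1 << (2 * bits)
    return product
```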
21. Explain the multiplication algorithm for binary numbers.
Steps:
1. Align the numbers such that the multiplier is on the right.
2. For each bit in the multiplier, if the bit is 1, add the multiplicand shifted appropriately to
the left.
3. If the bit is 0, skip to the next bit.
4. Sum all partial results to get the final product.

Example: Multiplying 6 (0110) by 3 (0011):


```
   0110
 x 0011
 ------
   0110   (0110 x 1)
  0110    (0110 x 1, shifted one position left)
 ------
  10010   (final product, 18 in decimal)
```

22. How is division performed using binary numbers? Illustrate with an example.
Steps:
1. Align the divisor and dividend.
2. Subtract the divisor from the most significant part of the dividend.
3. If the result is positive, write 1 in the quotient and bring down the next bit.
4. If the result is negative, write 0 in the quotient, restore the previous result, and bring down
the next bit.
5. Repeat until all bits are processed.

Example: Dividing 13 (1101) by 3 (0011):


```
0011 | 1101
Step 1: 1    < 11 -> quotient bit 0
Step 2: 11   = 11 -> subtract: 11 - 11 = 0, quotient bit 1
Step 3: 00   < 11 -> quotient bit 0 (restore)
Step 4: 001  < 11 -> quotient bit 0 (restore)
Quotient: 0100 (4), Remainder: 0001 (1)
```
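The restore-on-negative procedure can be written out as a short sketch (unsigned operands assumed; names ours):

```python
def restoring_divide(dividend, divisor, bits=4):
    """Restoring division: shift in one dividend bit at a time,
    try subtracting the divisor, and restore if the result is negative."""
    remainder, quotient = 0, 0
    for i in range(bits - 1, -1, -1):
        # Bring down the next bit of the dividend.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor
        if remainder < 0:
            remainder += divisor              # restore
            quotient = quotient << 1          # quotient bit 0
        else:
            quotient = (quotient << 1) | 1    # quotient bit 1
    return quotient, remainder
```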

23. Compare the efficiency of Booth’s algorithm with the standard multiplication algorithm.
Booth’s algorithm is more efficient for multipliers with large blocks of 1s, as it reduces the
number of required addition operations by skipping over these blocks. It performs better with
fewer operations compared to the standard algorithm, especially for numbers with many
consecutive 1s. The standard multiplication algorithm is simpler but can be less efficient due to
more frequent additions and shifts.

24. What are the advantages of using 2’s complement for arithmetic operations?
Simplifies the hardware design for addition and subtraction, as the same circuit can be used for
both operations.
Eliminates the need for separate sign handling, as negative numbers are represented uniquely.
Allows for easy detection of overflow and underflow conditions.

25. How does overflow occur in signed 2’s complement arithmetic?


Overflow occurs when the result of an arithmetic operation exceeds the representable range of
the given number of bits.
It can be detected when the carry into the MSB does not match the carry out of the MSB.
For example, adding two positive numbers should not produce a negative result, and adding
two negative numbers should not produce a positive result. If either occurs, overflow has
happened.

M3: Register Transfer and Micro-operations

26. Define register transfer language and its significance.


Register transfer language (RTL) is a symbolic notation used to describe the micro-operations
and data flow among registers within a digital system. RTL provides a precise way to define how
data is transferred between registers and how operations are performed on that data, facilitating
the design and analysis of digital circuits.

27. How are register transfers implemented in a computer system?


Register transfers are implemented using a combination of control signals and multiplexers.
Control signals determine which registers are involved in the transfer and the operation to be
performed. Multiplexers select the appropriate data paths for moving data between registers.

28. Describe the bus system for registers and its importance.
A bus system for registers consists of a common data path shared by multiple registers to
transfer data. It allows for efficient data movement and communication between different parts
of a computer system. By using a bus, the number of required interconnections is minimized,
reducing complexity and cost.

29. Explain memory read and memory write operations.


Memory Read Operation: Data is transferred from a memory location to a register. The process
involves placing the address of the memory location on the address bus, enabling the read control
signal, and transferring the data from the memory to the data bus and then to the destination
register.
Memory Write Operation: Data is transferred from a register to a memory location. The
process involves placing the address of the memory location on the address bus, placing the data
on the data bus, enabling the write control signal, and transferring the data from the data bus to
the memory.

30. What are micro-operations? Provide examples.


Micro-operations are basic operations performed on the data stored in registers. They are the
fundamental building blocks of complex instructions executed by the CPU.
Examples:
Arithmetic Micro-operations: Addition (R3 ← R1 + R2)
Logic Micro-operations: AND (R3 ← R1 AND R2)
Shift Micro-operations: Left shift (R1 ← R1 << 1)
Data Transfer Micro-operations: Move (R2 ← R1)

31. Describe the process of register transfer micro-operations.


Register transfer micro-operations involve moving data from one register to another. The
process includes:
1. Activating the control signal for the source register to place its data on the bus.
2. Activating the control signal for the destination register to load the data from the bus.
For example, transferring data from register R1 to register R2 can be expressed as R2 ← R1.

32. Explain arithmetic micro-operations and their implementation.


Arithmetic micro-operations perform arithmetic calculations on the data stored in registers.
These include addition, subtraction, increment, decrement, and more.
Implementation: Arithmetic micro-operations are implemented using arithmetic logic units
(ALUs). For example, the addition operation (R3 ← R1 + R2) involves placing the data from R1
and R2 into the ALU, performing the addition, and storing the result in R3.

33. What are logic micro-operations? Provide examples.


Logic micro-operations perform bitwise logical operations on data stored in registers. These
operations include AND, OR, XOR, and NOT.
Examples:
AND: R3 ← R1 AND R2
OR: R3 ← R1 OR R2
XOR: R3 ← R1 XOR R2
NOT: R2 ← NOT R1

34. Describe shift micro-operations and their uses.


Shift micro-operations move the bits of a register left or right. They are used for various
purposes, such as bit manipulation, multiplication, and division.
Types:
Logical Shift: Shifts bits left or right, filling the vacated bit positions with zeros.
Arithmetic Shift: Shifts bits left or right, preserving the sign bit for signed numbers.
Circular Shift (Rotate): Shifts bits left or right, wrapping the bits around to the opposite end.
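The three shift types differ only in how the vacated and shifted-out bits are treated, as this sketch shows (8-bit registers assumed; names ours):

```python
def logical_shift_left(x, bits=8):
    # Vacated LSB is filled with 0; the bit leaving the MSB is lost.
    return (x << 1) & ((1 << bits) - 1)

def arithmetic_shift_right(x, bits=8):
    # The sign (MSB) is preserved while the other bits move right.
    msb = x & (1 << (bits - 1))
    return (x >> 1) | msb

def rotate_left(x, bits=8):
    # The bit leaving the MSB wraps around into the LSB.
    return ((x << 1) | (x >> (bits - 1))) & ((1 << bits) - 1)
```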

35. How does a binary adder work? Explain with a diagram.


A binary adder performs the addition of two binary numbers. The simplest form is the half
adder, which adds two single-bit binary numbers and produces a sum and a carry. The full adder
extends this to add three bits (including a carry bit).
Half Adder Diagram:
```
A ───┐
B ───┼─── XOR ─── Sum
     └─── AND ─── Carry
```
Full Adder Diagram:
```
A ─────┬── XOR ──┬── XOR ─── Sum
B ─────┘         │
C_in ────────────┘

A, B    ── AND ──┐
A, C_in ── AND ──┼── OR ─── Carry
B, C_in ── AND ──┘
```
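The gate equations behind these diagrams (Sum = A XOR B XOR Cin, Carry = AB + ACin + BCin) can be modeled bit by bit, and chaining full adders gives a ripple-carry adder (a sketch; names ours):

```python
def full_adder(a, b, c_in):
    """One-bit full adder: Sum = A XOR B XOR Cin,
    Carry = AB + ACin + BCin."""
    s = a ^ b ^ c_in
    c_out = (a & b) | (a & c_in) | (b & c_in)
    return s, c_out

def ripple_add(x, y, bits=4):
    # Chain `bits` full adders, feeding each carry into the next stage.
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry
```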

36. Describe the function of a binary adder-subtractor.


A binary adder-subtractor can perform both addition and subtraction using the same hardware.
It uses a mode control signal to switch between addition and subtraction.
Operation:
For addition, the control signal is set to 0, and the inputs are added directly.
For subtraction, the control signal is set to 1, the second operand is inverted (to get its 1’s
complement), and 1 is added to it (to get the 2’s complement), effectively performing
subtraction.

37. What is a binary incrementer? Explain its purpose.


A binary incrementer adds one to a binary number. It is a simple circuit that can be built using
a series of half adders.
Purpose: It is used in counters, address generators, and other circuits where sequential addition
is needed.

38. Describe the arithmetic circuit for performing arithmetic micro-operations.


An arithmetic circuit typically includes an ALU (Arithmetic Logic Unit) capable of performing
various arithmetic operations like addition, subtraction, increment, and decrement.
Components:
Adders: For performing addition.
Subtractor: For performing subtraction (often using an adder with 2’s complement logic).
Incrementer/Decrementer: For performing increment and decrement operations.
Multiplexer: To select the appropriate operation based on control signals.

39. Explain the function of a one-stage logic circuit.


A one-stage logic circuit performs a single logical operation on its inputs and produces an
output. It can be implemented using basic logic gates like AND, OR, NOT, etc.
Example: A simple AND gate that takes two inputs and produces an output that is the logical
AND of the inputs.

40. What are selective set, selective complement, and selective clear operations?
Selective Set: Sets specific bits of a register to 1 while leaving other bits unchanged.
Implemented using the OR operation.
Example: R1 ← R1 OR 00001000 (sets the fourth bit of R1).
Selective Complement: Complements specific bits of a register (changes 1 to 0 and 0 to 1)
while leaving other bits unchanged. Implemented using the XOR operation.
Example: R1 ← R1 XOR 00001000 (complements the fourth bit of R1).
Selective Clear: Clears specific bits of a register to 0 while leaving other bits unchanged.
Implemented using the AND operation with the complement of the mask.
Example: R1 ← R1 AND 11110111 (clears the fourth bit of R1).
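All three operations apply the same mask through different gates, as this sketch shows (8-bit registers; the mask selects bit 3, the fourth bit):

```python
MASK = 0b00001000   # selects the fourth bit

def selective_set(r):
    return r | MASK                  # OR forces the selected bit to 1

def selective_complement(r):
    return r ^ MASK                  # XOR flips the selected bit

def selective_clear(r):
    return r & ~MASK & 0xFF          # AND with the inverted mask clears it
```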

M4: Basic Computer Organization and Design

41. What are instruction codes and their importance in computer architecture?
Instruction Codes: Binary codes that represent specific operations to be performed by the
computer’s CPU. Each instruction code typically consists of an opcode (operation code) that
specifies the operation and operands that specify the data or the addresses of the data.
Importance: Instruction codes are fundamental to computer architecture as they define the set
of operations a computer can perform. They enable the CPU to interpret and execute commands,
forming the basis of programming and computation.

42. Explain the concepts of direct address, indirect address, and effective address.
Direct Address: The address field of the instruction contains the actual memory address of the
operand.
Indirect Address: The address field of the instruction contains a memory address that points to
another memory address where the operand is stored.
Effective Address: The actual memory address from which the operand is fetched. In direct
addressing, the effective address is the same as the address field. In indirect addressing, the
effective address is the address pointed to by the address field.

43. List and describe the basic computer registers.


Accumulator (AC): Used for arithmetic and logic operations.
Program Counter (PC): Holds the address of the next instruction to be executed.
Instruction Register (IR): Holds the current instruction being executed.
Memory Address Register (MAR): Holds the address of the memory location to be accessed.
Memory Buffer Register (MBR): Holds the data read from or written to memory.
Temporary Register (TR): Used for intermediate storage during operations.
Input Register (INPR): Holds input data.
Output Register (OUTR): Holds output data.

44. What are memory reference instructions? Provide examples.


Memory reference instructions are instructions that involve accessing memory locations to
fetch operands or store results. They usually involve operations like loading data from memory
to a register or storing data from a register to memory.
Examples:
LDA (Load Accumulator): AC ← M[Address]
STA (Store Accumulator): M[Address] ← AC
ADD: AC ← AC + M[Address]

45. Describe register reference instructions and their uses.


Register reference instructions perform operations on registers without involving memory.
These instructions typically manipulate data within the CPU.
Uses:
Clearing registers: CLA (Clear Accumulator): AC ← 0
Complementing registers: CMA (Complement Accumulator): AC ← AC'
Incrementing registers: INC (Increment Accumulator): AC ← AC + 1

46. Explain input-output instructions in a basic computer system.


Input-output instructions manage data transfer between the CPU and peripheral devices. They
enable the CPU to read data from input devices and write data to output devices.
Examples:
IN: Reads data from an input device into the accumulator.
OUT: Writes data from the accumulator to an output device.
SKI: Skip the next instruction if input flag is set.
SKO: Skip the next instruction if output flag is set.

47. Provide a block diagram and brief explanation of the control unit of a basic computer.
Block Diagram:
```
        +-------------------+
        |                   |
        |    Control Unit   |
        |                   |
        +--+----+-----+--+--+
           |    |     |  |
          PC   IR  Flags Control Signals
```
Explanation: The control unit fetches instructions from memory, decodes them to determine
the required operations, and generates control signals to execute the instructions. It coordinates
the activities of the CPU and directs the flow of data between the CPU, memory, and peripherals.

48. What is an instruction cycle? Describe its stages.


Instruction Cycle: The cycle during which a computer retrieves, decodes, and executes an
instruction.
Stages:
Fetch: Retrieve the instruction from memory.
Decode: Interpret the instruction to determine the operation and operands.
Execute: Perform the operation specified by the instruction.
Store (if needed): Write the result to the appropriate register or memory location.

49. How are different types of addresses used in instruction codes?


Immediate Addressing: The operand is part of the instruction itself.
Direct Addressing: The address field contains the memory address of the operand.
Indirect Addressing: The address field contains the address of a memory location that holds the
effective address of the operand.
Register Addressing: The operand is located in a register specified by the instruction.
Indexed Addressing: The effective address is obtained by adding a constant value to the
content of an index register.

50. Describe the execution of a memory reference instruction.


Example: Executing the LDA (Load Accumulator) instruction.
Fetch: The instruction is fetched from memory.
Decode: The control unit decodes the instruction and identifies it as LDA.
Calculate Effective Address: If the instruction uses direct addressing, the address field is the
effective address. If indirect addressing, the address field points to the location of the effective
address.
Fetch Operand: The operand is fetched from the effective address in memory.
Execute: The operand is loaded into the accumulator (AC ← M[Effective Address]).
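The fetch-decode-execute sequence for these memory reference instructions can be simulated with a toy interpreter (opcodes, memory layout, and register names here are illustrative only, not a real instruction set):

```python
def run(memory, steps):
    """Execute `steps` instructions of a toy LDA/ADD/STA machine."""
    AC, PC = 0, 0
    for _ in range(steps):
        opcode, address = memory[PC]          # fetch
        PC += 1                               # point to the next instruction
        if opcode == "LDA":                   # decode + execute
            AC = memory[address]              # AC <- M[address]
        elif opcode == "ADD":
            AC = AC + memory[address]         # AC <- AC + M[address]
        elif opcode == "STA":
            memory[address] = AC              # M[address] <- AC
    return AC

# Load 7, add 5, store the result at address 12:
program = {0: ("LDA", 10), 1: ("ADD", 11), 2: ("STA", 12),
           10: 7, 11: 5, 12: 0}
```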

M5: Microprogrammed Control

51. What is control memory and its role in microprogrammed control?


Control Memory: A special type of memory that stores the microinstructions for the control
unit of a computer. Each microinstruction specifies one or more micro-operations and control
signals.
Role: It directs the sequence of micro-operations to execute a machine instruction, allowing for
flexible and easier modifications to the control unit logic.
52. Explain the process of address sequencing in microprogrammed control units.
Address sequencing involves determining the address of the next microinstruction to be
executed. The process can be:
Sequential Addressing: Incrementing the current address to get the next microinstruction.
Branching: Loading a new address based on conditions or branch instructions.
Conditional Branching: Depending on the result of a condition, a different microinstruction
address is chosen.

53. Provide an example of a microprogram and explain its operation.


Example: A microprogram for an ADD instruction:
```
Microinstruction 1: MAR ← PC
Microinstruction 2: MBR ← Memory[MAR], PC ← PC + 1
Microinstruction 3: IR ← MBR
Microinstruction 4: MAR ← IR[Address]
Microinstruction 5: MBR ← Memory[MAR]
Microinstruction 6: AC ← AC + MBR
```
Operation: This microprogram fetches the ADD instruction, decodes it, fetches the operand
from memory, and performs the addition operation with the accumulator.

54. How does microprogramming improve control unit design?


Flexibility: Easier to implement and modify complex instruction sets.
Simplicity: Simplifies the design of the control unit by breaking down operations into
microinstructions.
Upgradability: Allows updates and enhancements to the instruction set without changing the
hardware.

55. Compare microprogrammed control with hardwired control.


Microprogrammed Control: Uses a sequence of microinstructions stored in control memory to
generate control signals. Easier to design and modify.
Hardwired Control: Uses fixed logic circuits to generate control signals. Faster execution but
difficult to modify and design for complex instruction sets.

56. What are the advantages of using microprogrammed control in CPUs?


Ease of Design and Implementation: Simplifies the design of the control unit.
Flexibility: Easy to implement complex instructions and modify them.
Modularity: Facilitates a modular approach to control unit design.
Maintainability: Easier to diagnose and fix control logic errors.
57. Describe the structure of a microinstruction.
Structure: Typically includes fields for specifying micro-operations, control signals, and the
address of the next microinstruction.
Example:
```
| Operation Field | Control Signals | Next Address |
```

58. How is branching handled in a microprogram?


Branching: Conditional or unconditional jumps to different microinstructions based on the
value of condition flags or specific conditions.
Conditional Branching: Uses condition flags to decide the next address.
Unconditional Branching: Directly loads the next address specified in the microinstruction.

59. Explain the concept of micro-operation sequencing.


Micro-operation Sequencing: The process of executing a sequence of micro-operations
specified by microinstructions to perform a machine instruction. It ensures that the operations are
performed in the correct order and at the right time.

60. What are the challenges associated with microprogrammed control?


Complexity: Designing efficient microprograms can be complex.
Speed: Generally slower than hardwired control due to the extra step of fetching
microinstructions.
Memory Requirements: Requires additional memory for storing microinstructions.

M6: Central Processing Unit

61. Describe the general register organization in a CPU.


General Register Organization: A CPU organization where multiple registers are used to hold
data and intermediate results. This setup allows for more flexible and efficient instruction
execution by minimizing memory access.

62. Explain the stack organization and its advantages.


Stack Organization: Uses a stack data structure for storing data, with operations based on LIFO
(Last In, First Out).
Advantages:
Efficient use of space for temporary data.
Simplifies subroutine calls and returns.
Reduces the need for specifying operand addresses.
63. What is a register stack? How does it differ from a memory stack?
Register Stack: A stack implemented using a set of CPU registers.
Difference: Faster access compared to memory stack, as it avoids memory latency. Limited
size due to the finite number of registers.

64. Describe stack operations such as push and pop.


Push: Adds an element to the top of the stack.
Example: Push X → SP ← SP − 1; M[SP] ← X
Pop: Removes an element from the top of the stack.
Example: Pop X → X ← M[SP]; SP ← SP + 1

65. How are arithmetic expressions evaluated using a stack?


Evaluation: Uses postfix (reverse Polish) notation.
Example: For the expression (3 + 4) × 5:
Convert to postfix: 3 4 + 5 ×
Push 3, Push 4, Pop 4 and 3, Push 7 (3 + 4), Push 5, Pop 5 and 7, Push 35 (7 × 5).
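The push/pop trace generalizes to a small evaluator (a sketch; integer division is chosen for '/' here):

```python
def eval_postfix(tokens):
    """Evaluate a postfix expression with an explicit stack."""
    stack = []
    for tok in tokens:
        if tok in "+-*/":
            b, a = stack.pop(), stack.pop()   # top of stack is the right operand
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a // b}[tok])
        else:
            stack.append(int(tok))
    return stack.pop()

# (3 + 4) * 5 in postfix is "3 4 + 5 *":
print(eval_postfix("3 4 + 5 *".split()))   # 35
```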

66. Explain the different types of CPU organization: single accumulator, general register, and
stack organization.
Single Accumulator: Uses one accumulator for all operations.
Example: ADD A (Accumulator = Accumulator + A)
General Register: Uses multiple registers for operations.
Example: ADD R1, R2 (R1 = R1 + R2)
Stack Organization: Uses a stack for operations.
Example: Push A, Pop B, Add (Top = Top + Next)

67. Provide examples of instructions for each type of CPU organization.


Single Accumulator: ADD, SUB, MUL
General Register: MOV, ADD, SUB
Stack Organization: PUSH, POP, ADD

68. What are three-address, two-address, one-address, and zero-address instructions?


Three-Address: Specifies three operands.
Example: ADD R1, R2, R3 (R1 = R2 + R3)
Two-Address: Specifies two operands.
Example: ADD R1, R2 (R1 = R1 + R2)
One-Address: Specifies one operand, uses accumulator.
Example: ADD A (AC = AC + A)
Zero-Address: Uses implicit stack operations.
Example: ADD (Top = Top + Next)

69. Define and provide examples of data transfer instructions.


Data Transfer Instructions: Instructions that move data between registers, memory, and I/O.
Examples: MOV R1, R2; LOAD R1, 1000; STORE R1, 1000

70. Explain data manipulation instructions with examples.


Data Manipulation Instructions: Instructions that perform arithmetic and logic operations on
data.
Examples: ADD R1, R2; SUB R1, R2; AND R1, R2; OR R1, R2

71. What are program control instructions? Provide examples.


Program Control Instructions: Instructions that control the flow of execution.
Examples: JMP 1000 (Jump to address 1000); CALL 2000 (Call subroutine at address 2000);
RET (Return from subroutine)

72. Describe the different types of interrupts: external, internal, and software interrupts.
External Interrupts: Generated by external devices (e.g., I/O devices).
Internal Interrupts: Generated by the CPU (e.g., divide-by-zero error).
Software Interrupts: Generated by executing specific instructions (e.g., system calls).

73. Compare RISC and CISC architectures.


RISC (Reduced Instruction Set Computer): Emphasizes a small, highly optimized set of
instructions.
Characteristics: Simplified instructions, uniform instruction format, load/store architecture.
CISC (Complex Instruction Set Computer): Emphasizes a larger set of more complex
instructions.
Characteristics: Multiple addressing modes, variable-length instructions, more complex
instruction decoding.

74. What are the key features of RISC architecture?


Key Features:
Simple and few instructions.
Fixed instruction format.
Load/store architecture.
High performance through pipelining.
Emphasis on software optimization.

75. Describe the main characteristics of CISC architecture.


Main Characteristics:
Large number of instructions.
Complex addressing modes.
Variable-length instructions.
Single instructions capable of performing multi-step operations.
Emphasis on hardware complexity to optimize performance.

M7: Pipeline and Vector Processing

76. What is parallel processing and its significance in computer architecture?


Parallel Processing: The simultaneous use of multiple computing resources to solve a
computational problem. It involves breaking down a problem into smaller tasks that can be
processed concurrently.
Significance: Increases computational speed and efficiency, allows for handling larger and
more complex problems, and enhances overall system performance.

77. Explain Flynn’s classification of parallel processing systems.


Flynn's Classification categorizes computer architectures based on the number of concurrent
instruction (control) streams and data streams they support:
SISD (Single Instruction stream, Single Data stream): Traditional single-core processors.
SIMD (Single Instruction stream, Multiple Data streams): Vector processors, GPUs.
MISD (Multiple Instruction streams, Single Data stream): Rare, used in specialized systems.
MIMD (Multiple Instruction streams, Multiple Data streams): Multi-core processors,
distributed systems.

78. Describe the concept of pipelining and its benefits.


Pipelining: A technique where multiple instruction phases (fetch, decode, execute, etc.) are
overlapped. Each phase is handled by a different stage in the pipeline.
Benefits: Increases instruction throughput, improves CPU utilization, and enhances overall
performance by allowing multiple instructions to be processed simultaneously.

79. Provide an example of a pipeline and explain its operation.


Example: A simple 4-stage pipeline (Fetch, Decode, Execute, Write-back).
Operation:
Cycle 1: Fetch instruction 1.
Cycle 2: Decode instruction 1, Fetch instruction 2.
Cycle 3: Execute instruction 1, Decode instruction 2, Fetch instruction 3.
Cycle 4: Write-back instruction 1, Execute instruction 2, Decode instruction 3, Fetch
instruction 4.

80. What is a space-time diagram? How is it used in pipeline analysis?


Space-Time Diagram: A graphical representation showing the execution of instructions across
different stages of the pipeline over time.
Usage: Helps visualize instruction flow, identify potential bottlenecks, and analyze pipeline
performance (e.g., instruction throughput, latency).
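The 4-stage example and its space-time diagram can be generated programmatically. This is an illustrative sketch (the stage labels F/D/E/W and instruction count are assumptions, matching the example above), showing which stage each instruction occupies in each clock cycle:

```python
# Sketch: print a space-time diagram for a 4-stage pipeline
# (Fetch, Decode, Execute, Write-back) running 4 instructions.
STAGES = ["F", "D", "E", "W"]

def space_time(n_instr, stages=STAGES):
    """Return one row per instruction: its stage label in each cycle."""
    n_cycles = len(stages) + n_instr - 1
    rows = []
    for i in range(n_instr):
        row = [""] * n_cycles
        for s, label in enumerate(stages):
            row[i + s] = label  # instruction i enters stage s at cycle i+s
        rows.append(row)
    return rows

for i, row in enumerate(space_time(4), start=1):
    print(f"I{i}: " + " ".join(c or "." for c in row))
```

Each column of the printed diagram is one clock cycle; reading down a column shows the stages working on different instructions simultaneously, and the diagonal pattern makes pipeline fill and drain visible.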

81. Define speedup in the context of pipelining.


Speedup: A measure of the performance improvement achieved by pipelining, defined as the
ratio of the time taken to execute a task without pipelining to the time taken with pipelining.
Formula: Speedup = (Non-pipelined execution time) / (Pipelined execution time)
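The ratio above can be made concrete. A minimal sketch, assuming every stage takes one clock cycle and a non-pipelined instruction therefore takes k cycles (the usual textbook idealization):

```python
# Speedup of a k-stage pipeline over non-pipelined execution,
# assuming one clock per stage and no stalls.
def speedup(k, n):
    non_pipelined = n * k      # n instructions, k cycles each
    pipelined = k + (n - 1)    # first result after k cycles, then 1 per cycle
    return non_pipelined / pipelined

print(speedup(4, 100))  # approaches k = 4 as n grows
```

For large n the speedup approaches the number of stages k, which is why deeper pipelines promise higher throughput (hazards permitting).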

82. Describe the basic idea of an arithmetic pipeline.


Arithmetic Pipeline: A pipeline used to perform arithmetic operations (e.g., addition,
multiplication) in stages. Each stage completes part of the operation, allowing for concurrent
processing of multiple arithmetic tasks.
Example: A floating-point adder pipeline with stages for alignment, addition, normalization,
and rounding.

83. Explain the process of floating point addition/subtraction using a pipeline.


Stages:
Alignment: Align the exponents of the operands.
Addition/Subtraction: Perform the arithmetic operation on the mantissas.
Normalization: Normalize the result to maintain proper floating-point representation.
Rounding: Round the result to the nearest representable value.
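The four stages can be sketched in code. This is an illustrative decimal model, not IEEE 754 hardware: numbers are (mantissa, exponent) pairs with value mantissa × 10^exponent, and each commented step corresponds to one pipeline stage above:

```python
# Illustrative sketch (decimal, not IEEE 754): add two floating-point
# numbers represented as (mantissa, exponent) pairs.
def fp_add(a, b):
    (ma, ea), (mb, eb) = a, b
    # Stage 1 - Alignment: shift the smaller-exponent mantissa right.
    if ea < eb:
        ma, ea = ma / 10 ** (eb - ea), eb
    elif eb < ea:
        mb, eb = mb / 10 ** (ea - eb), ea
    # Stage 2 - Addition: add the aligned mantissas.
    m, e = ma + mb, ea
    # Stage 3 - Normalization: keep the mantissa in [1, 10).
    while abs(m) >= 10:
        m, e = m / 10, e + 1
    while m != 0 and abs(m) < 1:
        m, e = m * 10, e - 1
    # Stage 4 - Rounding: round the mantissa to 4 fractional digits.
    return round(m, 4), e

print(fp_add((9.5, 2), (7.5, 1)))  # 950 + 75 = 1025 -> (1.025, 3)
```

In a pipelined adder each stage is separate hardware, so while one pair of operands is being normalized the next pair is already being added.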

84. What are the challenges associated with pipelining?


Challenges:
Hazards: Data hazards (dependencies between instructions), control hazards (branch
instructions), and structural hazards (resource conflicts).
Stalling: Delays caused by waiting for data or resources.
Complexity: Increased design complexity and debugging difficulty.

85. How is instruction-level parallelism achieved in pipelining?


Instruction-Level Parallelism (ILP): Achieved by overlapping the execution of multiple
instructions in the pipeline. Techniques include:
Out-of-Order Execution: Instructions are executed as soon as their operands are available.
Branch Prediction: Predicting the outcome of branches to minimize control hazards.
Superscalar Execution: Multiple instructions are issued and executed per clock cycle.

M8: Input-Output Organization


86. What are peripheral devices? Provide examples.
Peripheral Devices: External devices connected to a computer to provide input, output, or
storage functions.
Examples: Keyboards, mice, monitors, printers, hard drives, USB drives.

87. Describe the input-output interface and its importance.


Input-Output Interface: The hardware and software components that enable communication
between the CPU and peripheral devices.
Importance: Ensures smooth data transfer, manages device control, and provides protocols for
handling I/O operations, ensuring efficient and reliable system performance.

88. Compare isolated I/O and memory-mapped I/O.


Isolated I/O: Separate address spaces for memory and I/O devices. Requires specific
instructions for I/O operations.
Memory-Mapped I/O: I/O devices share the same address space as memory. Regular memory
instructions are used for I/O operations.

89. Explain asynchronous data transfer methods: strobe and handshaking.


Strobe Method: Uses a control signal (strobe) to indicate when data is ready for transfer. The
sender activates the strobe signal, and the receiver captures the data.
Handshaking Method: Uses two control signals for coordinated data transfer. The sender sends
a ready signal, and the receiver sends an acknowledgment signal once it has received the data.

90. What is programmed I/O? How does it work?


Programmed I/O: The CPU directly controls data transfer to and from I/O devices. The CPU
continuously checks the status of the I/O device (polling) and performs the data transfer.
Operation:
CPU sends a command to the I/O device.
CPU waits for the I/O operation to complete by polling the status.
CPU transfers data to/from the I/O device once ready.
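The polling loop above can be sketched with a mock device; the device class and its fields are hypothetical, introduced only to make the busy-wait visible:

```python
# Sketch of programmed I/O with a mock device (all names are assumptions).
class MockDevice:
    def __init__(self, data):
        self._data, self.ready, self._polls = data, False, 0

    def poll_status(self):
        self._polls += 1
        self.ready = self._polls >= 3   # pretend readiness takes 3 polls
        return self.ready

    def read(self):
        return self._data

def programmed_io_read(device):
    # The CPU busy-waits on the status register...
    while not device.poll_status():
        pass                            # CPU does no useful work here
    # ...and then performs the transfer itself.
    return device.read()

print(programmed_io_read(MockDevice(0x2A)))  # 42
```

The wasted iterations of the `while` loop are exactly the CPU idle time that interrupt-initiated I/O (next question) eliminates.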

91. Describe interrupt-initiated I/O and its advantages.


Interrupt-Initiated I/O: The I/O device interrupts the CPU when it is ready for data transfer,
eliminating the need for continuous polling.
Advantages: Reduces CPU idle time, improves efficiency, and allows the CPU to perform
other tasks while waiting for I/O operations.

92. Explain the basic concept of Direct Memory Access (DMA).


Direct Memory Access (DMA): Allows peripheral devices to directly transfer data to and from
memory without CPU intervention.
Concept: The DMA controller manages data transfers, freeing the CPU from the burden of data
transfer operations and improving overall system efficiency.

93. What is a DMA controller (DMAC)? Describe its function.


DMA Controller (DMAC): A hardware component that manages DMA data transfers between
peripheral devices and memory.
Function: Initiates and controls data transfers, generates memory addresses, and manages the
data transfer process, allowing the CPU to focus on other tasks.

94. How does an input-output processor (IOP) differ from a CPU?


Input-Output Processor (IOP): A specialized processor dedicated to managing I/O operations.
Differences:
Function: IOP handles I/O tasks, offloading these tasks from the CPU.
Architecture: Optimized for I/O processing rather than general-purpose computation.
Control: Operates concurrently with the CPU, handling multiple I/O devices and tasks.

95. Describe the role of interrupts in input-output operations.


Role of Interrupts:
Signals the CPU to handle an I/O event.
Allows efficient CPU utilization by avoiding constant polling.
Facilitates asynchronous I/O operations, enabling the CPU to respond to I/O device readiness.

M9: Memory Organization

96. What is the memory hierarchy in a computer system?


Memory Hierarchy: The organization of different types of memory in a system, arranged based
on speed, cost, and size.
Levels:
Registers: Fastest, smallest, and most expensive.
Cache Memory: Faster than main memory, used to store frequently accessed data.
Main Memory (RAM): Primary storage for actively used programs and data.
Secondary Storage: Larger, slower storage (e.g., hard drives, SSDs).
Tertiary Storage: Backup storage (e.g., tapes, external drives).

97. Define main memory and its types.


Main Memory: The primary storage area for data and programs that are actively used by the
CPU.
Types:
RAM (Random Access Memory): Volatile memory, used for temporary storage.
ROM (Read-Only Memory): Non-volatile memory, used to store firmware and system
software.

98. What are the different types of RAM? Explain their characteristics.
SRAM (Static RAM):
Uses bistable latching circuitry.
Faster and more reliable than DRAM.
More expensive, consumes more power.
DRAM (Dynamic RAM):
Stores data as charge in capacitors.
Slower than SRAM, but denser and cheaper.
Requires periodic refreshing to maintain data.

99. Compare and contrast SRAM and DRAM.


SRAM:
Faster access time.
Does not require refreshing.
Higher cost and power consumption.
Used in cache memory.
DRAM:
Slower access time.
Requires periodic refreshing.
Lower cost and higher density.
Used in main memory.

100. What is cache memory? Explain its purpose.


Cache Memory: A small, high-speed memory located close to the CPU.
Purpose: Stores frequently accessed data and instructions to reduce the time needed to access
data from the main memory, improving overall system performance.

101. Describe the different cache memory mapping techniques: direct, associative, and set
associative.
Direct Mapping: Each block of main memory maps to only one cache line.
Advantage: Simple and fast.
Disadvantage: Potential for high conflict misses.
Associative Mapping: Any block of main memory can be loaded into any cache line.
Advantage: Reduces conflict misses.
Disadvantage: More complex and slower to search.
Set Associative Mapping: Combines direct and associative mapping by dividing cache into
sets.
Advantage: Balances complexity and conflict misses.
Disadvantage: More complex than direct mapping.
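Direct mapping's address split can be shown concretely. A sketch assuming a cache of 64 lines with 16-byte blocks (these sizes are illustrative, not from the source):

```python
# Sketch: split a physical address into tag / line / offset fields
# for a direct-mapped cache (64 lines of 16 bytes, assumed sizes).
LINES, BLOCK = 64, 16   # 6-bit line index, 4-bit byte offset

def direct_map(addr):
    offset = addr % BLOCK
    line = (addr // BLOCK) % LINES
    tag = addr // (BLOCK * LINES)
    return tag, line, offset

# Two addresses one cache-size apart map to the same line with
# different tags, so they evict each other: a conflict miss.
print(direct_map(0x1234))
print(direct_map(0x1234 + LINES * BLOCK))  # same line, different tag
```

The conflict shown in the last two lines is the "high conflict misses" disadvantage noted above; set-associative mapping reduces it by letting each line index hold several tags.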

102. What is Content Addressable Memory (CAM)? Describe its hardware organization.
Content Addressable Memory (CAM): A type of memory where data is accessed based on
content rather than address.
Hardware Organization: Consists of an array of memory cells with logic circuitry for parallel
searching of data. Each cell stores a bit and can compare the stored bit with the search bit
simultaneously.

103. Explain the concept of virtual memory and its benefits.


Virtual Memory: An abstraction that gives the illusion of a large, continuous memory space to
programs, regardless of the physical memory available.
Benefits:
Allows execution of larger programs than the physical memory.
Provides memory protection and isolation between processes.
Facilitates efficient use of available physical memory through paging and swapping.

104. How is virtual memory implemented using pages?


Implementation: Divides both physical memory and virtual memory into fixed-size blocks
called pages.
Page Table: Maps virtual addresses to physical addresses.
Page Fault: Occurs when a program accesses a page not currently in physical memory,
triggering a page load from secondary storage.
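The translation step can be sketched with a one-level page table, assuming 4 KiB pages; the table contents are hypothetical, and a missing entry models a page fault:

```python
# Sketch: virtual-to-physical translation with a one-level page table
# (4 KiB pages; the resident-page mapping below is hypothetical).
PAGE_SIZE = 4096
PAGE_TABLE = {0: 5, 1: 2, 3: 7}   # virtual page number -> frame number

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in PAGE_TABLE:
        # In a real system this trap makes the OS load the page.
        raise LookupError(f"page fault on page {page}")
    return PAGE_TABLE[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # page 1 -> frame 2
```

Note that only the page number is translated; the offset passes through unchanged, which is why page sizes are powers of two.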

105. What is a page fault? How is it handled?


Page Fault: An interrupt that occurs when a program accesses a page that is not currently in
physical memory.
Handling:
The operating system identifies the missing page.
Allocates a free frame in physical memory.
Loads the required page from secondary storage into the allocated frame.
Updates the page table to reflect the new location.
Resumes program execution.

106. Describe the process of segment-based virtual memory mapping.


Segment-Based Virtual Memory: Divides the program's memory into variable-sized segments,
each representing a logical unit (e.g., code, data, stack).
Mapping:
Each segment has a segment number and an offset.
The segment table maps segment numbers to physical addresses.
The offset specifies the location within the segment.

107. What is a Translation Lookaside Buffer (TLB)? Explain its function.


Translation Lookaside Buffer (TLB): A cache used to store recent translations of virtual
addresses to physical addresses.
Function: Speeds up virtual address translation by reducing the number of accesses to the page
table, thus improving memory access times.

108. Describe the different types of auxiliary memory.


Types:
Magnetic Disks: Used for secondary storage (e.g., hard drives).
Optical Disks: Use laser technology to read/write data (e.g., CDs, DVDs).
Flash Memory: Non-volatile memory used in USB drives, SSDs.
Magnetic Tape: Used for backup and archival storage.

110. Define and explain seek time, rotational delay, access time, transfer time, and latency in the
context of disk storage.
Seek Time: The time it takes for the read/write head to move to the track where the data is
located.
Rotational Delay: The time it takes for the desired sector of the disk to rotate under the
read/write head.
Access Time: The total time to access data, including seek time and rotational delay.
Transfer Time: The time it takes to actually transfer the data once the read/write head is
positioned correctly.
Latency: The delay before data transfer begins, often synonymous with rotational delay in the
context of disks.
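These quantities combine arithmetically. A worked sketch, assuming a 7200 RPM disk and the illustrative seek and transfer figures below (average rotational delay is half a revolution):

```python
# Sketch: average disk access time for an assumed 7200 RPM drive.
rpm = 7200
seek_ms = 9.0                          # assumed average seek time
rotation_ms = 60_000 / rpm             # one full revolution: ~8.333 ms
rotational_delay_ms = rotation_ms / 2  # average: half a revolution
transfer_ms = 0.05                     # assumed per-sector transfer time

access_ms = seek_ms + rotational_delay_ms      # seek + rotational delay
total_ms = access_ms + transfer_ms             # plus the transfer itself
print(f"access {access_ms:.3f} ms, total {total_ms:.3f} ms")
```

The dominance of seek time and rotational delay over transfer time in this sketch is why disk schedulers focus on minimizing head movement rather than raw transfer rate.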
