
INSTRUCTION CYCLE

An instruction cycle is the complete process of fetching, decoding and executing the
instruction.

1) PC gives the address to fetch an instruction from the memory.


2) Once fetched, the instruction opcode is decoded.
3) This identifies whether there are any operands to be fetched from the memory.
4) The operand address is calculated.
5) Operands are fetched from the memory.
6) Now the data operation is performed on the operands, and a result is generated.
7) If the result has to be stored in a register, the instruction ends here.
8) If the destination is memory, then first the destination address has to be calculated.
9) The result is then stored in the memory.
10) Now the current instruction has been executed.
11) In parallel, the PC is incremented to give the address of the next instruction.
12) The above instruction cycle then repeats for further instructions.
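
The whole cycle can also be viewed as a loop. Below is a minimal Python sketch of a fetch-decode-execute loop for a toy machine; the three-field instruction tuples and the tiny LOAD/ADD/STORE/HALT instruction set are assumptions made only for this illustration.

# Toy fetch-decode-execute loop (illustrative only; the 3-field
# instruction format and opcodes are assumptions for this sketch).
memory = {
    0: ("LOAD", 10, None),   # AC <- M[10]
    1: ("ADD", 11, None),    # AC <- AC + M[11]
    2: ("STORE", 12, None),  # M[12] <- AC
    3: ("HALT", None, None),
    10: 5, 11: 7, 12: 0,
}

pc, ac, running = 0, 0, True
while running:
    instr = memory[pc]          # 1) fetch using the PC as the address
    pc += 1                     # PC incremented to point at the next instruction
    opcode, addr, _ = instr     # 2) decode the opcode (and operand address)
    if opcode == "LOAD":        # 3-6) fetch the operand and perform the operation
        ac = memory[addr]
    elif opcode == "ADD":
        ac += memory[addr]
    elif opcode == "STORE":     # 8-9) store the result back to memory
        memory[addr] = ac
    elif opcode == "HALT":
        running = False

print(ac, memory[12])           # both print 12 (5 + 7)

Running the sketch prints 12 twice: the accumulator and memory location 12 both hold 5 + 7 after the loop halts.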
PROGRAM CONTROL INSTRUCTION
Instructions of the computer are always stored in consecutive memory locations.
These instructions are fetched from successive memory locations for processing and
executing.
When an instruction is fetched from memory, the program counter is incremented by 1 so that it points to the address of the next consecutive instruction in memory. Once a data transfer or data manipulation instruction has been executed, control returns to the fetch cycle, with the program counter holding the address of the next instruction to be fetched.
Data transfer and manipulation instructions specify the data processing operations to be performed, whereas program control instructions specify conditions that can alter the content of the program counter.
A change in the content of the program counter breaks the normal sequential flow of instruction execution. In this way, program control instructions control the flow of program execution and are capable of branching to different program segments.
Some of the program control instructions are listed in the table.
Program Control Instructions

Name                         Mnemonic

Branch                       BR
Jump                         JMP
Skip                         SKP
Call                         CALL
Return                       RET
Compare (by subtraction)     CMP
Test (by ANDing)             TST

The branch is a one-address instruction. It is represented as BR ADR, where ADR is a mnemonic for an address. The branch instruction transfers the value of ADR into the program counter. The terms branch and jump are often used interchangeably; however, they sometimes denote different addressing modes.
Conditional branch instructions such as ‘branch if positive’ or ‘branch if zero’ specify the condition under which the flow of execution is transferred. When the condition is met, the branch address is loaded into the program counter.
The figure depicts the conditional branch instructions.

The compare instruction performs an arithmetic subtraction. Here, the result of the
operation is not saved; instead, the status bit conditions are set. The test instruction
performs the logical AND operation on two operands and updates the status bits.
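
As a rough illustration of how these two instructions affect only the status bits, the sketch below (in Python) mimics CMP and TST on 8-bit operands; the 8-bit width and the particular flag names (Z, S, C) are assumptions for the example, not taken from the text above.

# Illustrative flag update for CMP (subtract) and TST (AND) on 8-bit operands.
def compare(a, b):
    diff = (a - b) & 0xFF            # result is discarded, only flags are kept
    return {
        "Z": diff == 0,              # zero flag
        "S": bool(diff & 0x80),      # sign flag (bit 7)
        "C": a < b,                  # borrow/carry flag
    }

def test(a, b):
    result = a & b                   # logical AND, result not saved
    return {"Z": result == 0, "S": bool(result & 0x80)}

print(compare(0x20, 0x20))   # {'Z': True, 'S': False, 'C': False}
print(test(0x0F, 0xF0))      # {'Z': True, 'S': False}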
MICRO-OPERATIONS & CONTROL SIGNALS

• A Program is a set of Instructions.


• An Instruction requires a set of small operations called Micro-Operations.
• A Micro-Operation is a finite activity performed by the processor in one clock cycle. One clock
cycle is also called one T-state (Transition state).
• One Micro-Operation requires One T-state.
• Several Independent Micro-Operations can be performed in the same T-state.
• The control unit generates control signals to perform these Micro-Operations.
• To understand Control Units, we must first clearly understand Micro-Operations.
MICRO-OPERATIONS FOR INSTRUCTION FETCHING

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction from MBR
    PC ← PC + 1               PC gets incremented

As we can see in the above table, two Micro-Operations can take place in the same T-state i.e.

IR ← MBR and PC ← PC + 1

This is because they are completely independent of each other.


In fact, PC becoming PC + 1 can also be performed in the 2nd T-state while the instruction is being
fetched from the memory through the data bus into MBR.
This is shown below.

MICRO-OPERATIONS FOR INSTRUCTION FETCHING (ALTERNATE METHOD)

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
    PC ← PC + 1               PC gets incremented
T3: IR ← MBR                  IR gets instruction from MBR

Please Note

PC ← PC + 1 cannot take place in the 1st T-state.


That’s because, in the 1st T-state, PC is providing the address on the address bus, through MAR. If at the
same time PC gets incremented, then the incremented address will be put on the address bus.

Now that we know how an instruction is fetched, we can proceed further and learn Micro-Operations for
various instructions, of different Addressing Modes.
MICRO-OPERATIONS FOR IMMEDIATE ADDRESSING MODE

E.g.: MOV R1, 25H; R1 register gets the immediate value 25H

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “MOV R1, 25H” from MBR
    PC ← PC + 1               PC gets incremented
T4: R1 ← 25H (IR)             R1 register gets the value 25H from IR

MICRO-OPERATIONS FOR REGISTER ADDRESSING MODE

E.g.: MOV R1, R2; R1 register gets the data from Register R2

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “MOV R1, R2” from MBR
    PC ← PC + 1               PC gets incremented
T4: R1 ← R2                   R1 register gets the value from the R2 register

MICRO-OPERATIONS FOR DIRECT ADDRESSING MODE

E.g.: MOV R1, [2000H]; R1 register gets the data from memory location 2000H

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “MOV R1, [2000H]” from MBR
    PC ← PC + 1               PC gets incremented
T4: MAR ← IR (2000H)          MAR gets the address 2000H from IR
T5: MBR ← Memory ([2000H])    MBR gets contents of location 2000H from memory
T6: R1 ← MBR ([2000H])        Register R1 gets contents of memory location 2000H from MBR
MICRO-OPERATIONS FOR INDIRECT ADDRESSING MODE

E.g.: MOV R1, [R2]; R1 register gets the data from memory location pointed by R2

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “MOV R1, [R2]” from MBR
    PC ← PC + 1               PC gets incremented
T4: MAR ← R2                  MAR gets the address from the R2 register
T5: MBR ← Memory ([R2])       MBR gets contents of the location pointed to by R2 from memory
T6: R1 ← MBR ([R2])           Register R1 gets contents of the memory location pointed to by R2 from MBR
Note: an exam once asked for ADD R1, [R2]. Everything else remains the same; the only change is T6: R1 ← R1 + MBR.

MICRO-OPERATIONS FOR INDIRECT ADDRESSING MODE

E.g.: MOV [R2], R1; R1 register stores data into memory location pointed by R2

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “MOV [R2], R1” from MBR
    PC ← PC + 1               PC gets incremented
T4: MAR ← R2                  MAR gets the address from the R2 register
T5: MBR ← R1                  R1 puts data into MBR to store it in the memory location pointed to by R2

MICRO-OPERATIONS FOR IMPLIED ADDRESSING MODE

E.g.: STC; Set the Carry Flag; (CF ç 1).

STATE MICRO-OPERATION EXPLANATION


T1: MAR ← PC                  PC puts address of the next instruction into MAR
T2: MBR ← Memory (Instr)      MBR gets instruction from memory through the data bus
T3: IR ← MBR                  IR gets instruction “STC” from MBR
    PC ← PC + 1               PC gets incremented
T4: CF ← 1                    Carry Flag in the Flag Register becomes 1
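
To make the T-state tables above concrete, here is a small Python sketch that replays the micro-operations of the direct-addressing example MOV R1, [2000H]; the register names mirror the tables, while the memory contents and instruction encoding are invented purely for the demonstration.

# Replaying the micro-operations for MOV R1, [2000H] (direct addressing).
# Memory contents and the instruction encoding are assumed for illustration.
memory = {0x0100: ("MOV R1,[2000H]", 0x2000), 0x2000: 0x55}
regs = {"PC": 0x0100, "MAR": 0, "MBR": 0, "IR": None, "R1": 0}

regs["MAR"] = regs["PC"]                    # T1: MAR <- PC
regs["MBR"] = memory[regs["MAR"]]           # T2: MBR <- Memory (instruction)
regs["IR"] = regs["MBR"]                    # T3: IR <- MBR
regs["PC"] += 1                             #     PC <- PC + 1 (same T-state)
regs["MAR"] = regs["IR"][1]                 # T4: MAR <- IR (operand address 2000H)
regs["MBR"] = memory[regs["MAR"]]           # T5: MBR <- Memory ([2000H])
regs["R1"] = regs["MBR"]                    # T6: R1 <- MBR

print(hex(regs["R1"]))                      # 0x55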
RISC
RISC stands for Reduced Instruction Set Computer. A RISC processor, as the name indicates, supports a small number of instructions and can therefore operate at high speed. It uses a fixed instruction format, typically with fewer than 100 instructions, register-based operations and a few simple addressing modes. LOAD and STORE are the only instructions that access memory, which simplifies compiler development.

Advantages of RISC Processor


1. The RISC processor's performance is better due to its simple and limited instruction set.

2. It requires fewer transistors, which makes it cheaper to design.

3. The simplicity of RISC instructions frees up space on the microprocessor for other uses.

4. A RISC processor is simpler than a CISC processor because of its simple, quick design, and it can complete an instruction in one clock cycle.

Disadvantages of RISC Processor


1. The RISC processor's performance may vary with the code being executed, because subsequent instructions may depend on a previous instruction for their execution in a cycle.

2. Programmers and compilers often need complex operations, which must be built up from several simple instructions.

3. RISC processors require very fast memory to supply instructions quickly, which in turn demands a large cache.

RISC instruction sets are widely used in portable devices such as the Apple iPod and mobile phones/smartphones, owing to their efficiency and reliability.
CISC
CISC stands for Complex Instruction Set Computer. A CISC chip can be programmed easily and makes efficient use of memory. The main motive of CISC is to make compiler development easy and simple: because a single CISC instruction can carry out a complex operation, the compiler does not have to generate long sequences of simple instructions.

Difference between RISC and CISC Processor


S.No.  RISC                                                 CISC

1.     RISC is a reduced instruction set.                   CISC is a complex instruction set.

2.     The number of instructions is less as                The number of instructions is more as
       compared to CISC.                                    compared to RISC.

3.     The addressing modes are fewer.                      The addressing modes are more.

4.     It works with a fixed instruction format.            It works with a variable instruction format.

5.     RISC consumes low power.                             CISC consumes high power.

6.     RISC processors are highly pipelined.                CISC processors are less pipelined.

7.     It optimizes performance by focusing on              It optimizes performance by focusing on
       software.                                            hardware.

8.     Requires more RAM.                                   Requires less RAM.

PIPELINING
Overlapping different stages of an instruction is called pipelining.

An instruction requires several steps which mainly involve fetching, decoding and execution.
If these steps are performed one after the other, they will take a long time.
As processors became faster, several of these steps started to get overlapped, resulting in faster
processing. This is done by a mechanism called pipelining.

2 STAGE PIPELINING - 8086


Here the instruction process is divided into two stages: fetching and execution.
Fetching of the next instruction takes place while the current instruction is being executed. Hence two
instructions are being processed at any point of time.

NON-PIPELINED PROCESSOR EG: 8085

Time →
| F1 | E1 | F2 | E2 | F3 | E3 | F4 | E4 | F5 | E5 |
Total time taken = 10 slots for 5 instructions

PIPELINED PROCESSOR EG: 8086

Time →
| F1 | E1 | E2 | E3 | E4 | E5 |
     | F2 | F3 | F4 | F5 |      (fetching overlaps execution)
Total time taken = 6 slots for 5 instructions
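
A quick way to see the gain shown in the two diagrams is to count time slots. The short Python sketch below compares the total time for n instructions on a non-pipelined processor with a two-stage (fetch/execute) pipeline, assuming each stage takes one time unit.

# Total time for n instructions, assuming 1 time unit per stage (illustrative).
def non_pipelined(n, stages=2):
    return n * stages                 # every instruction runs all stages serially

def pipelined(n, stages=2):
    return stages + (n - 1)           # first instruction fills the pipe, then one per unit

for n in (5, 100):
    print(n, non_pipelined(n), pipelined(n))
# 5   -> 10 vs 6 time units (matches the diagrams above)
# 100 -> 200 vs 101 time units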


3 STAGE PIPELINING – 80386/ ARM 7

Here the instruction process is divided into three stages, fetching, decoding and execution, which are
overlapped. Hence three instructions are being processed at any point of time.

4 STAGE PIPELINING

Fetch, Decode, Execute, Store

5 STAGE PIPELINING - PENTIUM

Fetch, Decode, Address Generation, Execute, Store


6 STAGE PIPELINING – PENTIUM PRO

Modern processors have a very deep pipeline.


Ex: Pentium 4 (NetBurst architecture) has a 20-stage pipeline.

ADVANTAGE OF PIPELINING
The obvious advantage of pipelining is that it increases performance. As the examples above show, the
deeper the pipeline, the greater the level of parallelism, and hence the faster the processor becomes.
DRAWBACKS/ HAZARDS OF PIPELINING
There are various hazards of pipelining, which cause a dip in the performance of the processor. These
hazards become even more prominent as the number of pipeline stages increases. They may occur
due to the following reasons.

1) DATA HAZARD/ DATA DEPENDENCY HAZARD

Data Hazard is caused when the result (destination) of one instruction becomes the operand
(source) of the next instruction.

Consider two instructions I1 and I2 (I1 being the first).

Assume I1: INC [4000H]


Assume I2: MOV BL , [4000H]

Clearly in I2, BL should get the incremented value of location [4000H].


But this can only happen once I1 has completely finished execution and also written back the result
at [4000H].
In a multistage pipeline, I2 may reach execution stage before I1 has finished storing the result at
location [4000H], and hence get a wrong value of data.
This is called data dependency hazard.
It is solved by inserting NOP (No operation) instructions between such data dependent instructions.
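
The NOP-insertion idea can be sketched as follows: if an instruction reads a location that the immediately preceding instruction writes, stall slots are inserted between them. The instruction tuples and the choice of two NOPs are assumptions for illustration only.

# Insert NOPs between data-dependent instructions (illustrative).
def insert_nops(program, nops_needed=2):
    out = []
    for i, (text, reads, writes) in enumerate(program):
        if i > 0:
            prev_writes = program[i - 1][2]
            if prev_writes & reads:          # read-after-write dependency on the previous instruction
                out.extend(["NOP"] * nops_needed)
        out.append(text)
    return out

program = [
    ("INC [4000H]",    {"[4000H]"}, {"[4000H]"}),   # I1 writes [4000H]
    ("MOV BL,[4000H]", {"[4000H]"}, {"BL"}),        # I2 reads [4000H]
]
print(insert_nops(program))
# ['INC [4000H]', 'NOP', 'NOP', 'MOV BL,[4000H]']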

2) CONTROL HAZARD/ CODE HAZARD

Pipelining assumes that the program will always flow in a sequential manner.
Hence, it performs various stages of the forthcoming instructions before-hand, while the current
instruction is still being executed.
While programs are sequential most of the time, this is not always true.
Sometimes, branches do occur in programs.
In such an event, all the forthcoming instructions that have been fetched/ decoded etc have to be
flushed/ discarded, and the process has to start all over again, from the branch address. This causes
pipeline bubbles, which simply means time of the processor is wasted.

Consider the following set of instructions:

Start:
JMP Down
INC BL
MOV CL, DL
ADD AL, BL


Down: DEC CH
JMP Down is a branch instruction.
After this instruction, program should jump to the location “Down” and continue with DEC CH
instruction.

But, in a multistage pipeline processor, the sequentially next instructions after JMP Down have
already been fetched and decoded. These instructions will now have to be discarded and fetching will
begin all over again from DEC CH. This will keep several units of the architecture idle for some time.
This is called a pipeline bubble.

The problem of branching is solved in later processors by a method called the “Branch Prediction
Algorithm”. It was introduced in the Pentium processor. It relies on the previous history of the
instruction, as most programs are repetitive in nature. It then predicts whether the branch
will be taken or not, and accordingly puts the correct instructions into the pipeline.
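
The history-based idea can be sketched with a simple saturating 2-bit counter per branch; this is one common textbook scheme and not necessarily the exact algorithm the Pentium used.

# 2-bit saturating branch predictor (one illustrative history-based scheme).
class TwoBitPredictor:
    def __init__(self):
        self.state = {}                      # branch address -> counter 0..3

    def predict(self, addr):
        return self.state.get(addr, 1) >= 2  # counter 2 or 3 => predict taken

    def update(self, addr, taken):
        c = self.state.get(addr, 1)
        self.state[addr] = min(c + 1, 3) if taken else max(c - 1, 0)

p = TwoBitPredictor()
outcomes = [True, True, False, True, True]   # a mostly-taken loop branch
hits = 0
for taken in outcomes:
    hits += (p.predict(0x100) == taken)
    p.update(0x100, taken)
print(hits, "of", len(outcomes), "predictions correct")   # 3 of 5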

3) STRUCTURAL HAZARD

Structural hazards are caused by physical constraints in the architecture like the buses. Even
in the most basic form of pipelining, we want to execute one instruction and fetch the next one. Now
as long as execution only involves registers, pipelining is possible. But if execution requires
reading or writing data from memory, it will make use of the buses, which means
fetching cannot take place at the same time. So the fetching unit will have to wait, and hence a
pipeline bubble is caused. This problem is solved in complex Harvard architecture processors, which
use separate memories and separate buses for programs and data. This means fetching and
execution can actually happen at the same time without any interference with each other.
E.g.: PIC 18 Microcontroller.
HARDWIRED CONTROL UNIT

• Here control signals are produced by hardware.


• There are three types of Hardwired Control Units

STATE TABLE METHOD

1) It is the most basic type of hardwired control unit.


2) Here the behavior of the control unit is represented in the form of a table called the state table.
3) The rows represent the T-states and the columns indicate the instructions.
4) Each intersection indicates the control signal to be produced, in the corresponding T-state of every
instruction.
5) A circuit is then constructed based on every column of this table, for each instruction.

              INSTRUCTIONS
T-STATES   I1      I2      ...    IN
T1         Z1,1    Z1,2    ...    Z1,N
T2         Z2,1    Z2,2    ...    Z2,N
...        ...     ...     ...    ...
TM         ZM,1    ZM,2    ...    ZM,N

Z1,1 : Control signal to be produced in T-state T1 of instruction I1
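
The state table maps naturally onto a nested lookup. The following Python sketch, with invented instruction names and control-signal labels, shows how the control signal for a given (T-state, instruction) pair would be read out.

# State-table lookup: rows are T-states, columns are instructions (illustrative).
state_table = {
    "T1": {"I1": "MAR<-PC",  "I2": "MAR<-PC"},
    "T2": {"I1": "MBR<-Mem", "I2": "MBR<-Mem"},
    "T3": {"I1": "IR<-MBR",  "I2": "IR<-MBR"},
    "T4": {"I1": "R1<-IR",   "I2": "R1<-R2"},
}

def control_signal(t_state, instruction):
    return state_table[t_state][instruction]   # Z(t, i) in the table above

print(control_signal("T4", "I2"))               # R1<-R2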

ADVANTAGE:
It is the simplest method and is ideally suited for very small instruction sets.

DRAWBACK:
As the number of instructions increases, the circuit becomes bigger and hence more complicated.
Because a tabular approach is used instead of a logical approach (flowchart), many circuit elements are
duplicated across instructions.
DELAY ELEMENT METHOD
1) Here the behavior of the control unit is represented in the form of a flowchart.
2) Each step in the flowchart represents a control signal to be produced.
3) Once all steps of a particular instruction, are performed, the complete instruction gets executed.
4) Control signals perform Micro-Operations, which require one T-state each.
5) Hence between every two steps of the flowchart, there must be a delay element.
6) The delay must be exactly of one T-state. This delay is achieved by D Flip-Flops.
7) These D Flip-Flops are inserted between every two consecutive control signals.

8) Of all D Flip-Flops only one will be active at a time. So the method is also called “One Hot Method”.
9) Where two or more paths merge (a multiple entry point), an OR gate is used to combine them.

10) A decision box is replaced by a pair of complementing AND gates.

ADVANTAGE:
As the method has a logical approach, it can reduce the circuit complexity.
This is done by re-utilizing common elements between various instructions.

DRAWBACK:
As the number of instructions increases, the number of D flip-flops increases, so the cost increases.
Moreover, only one of those D flip-flops is actually active at a time.
SEQUENCE COUNTER METHOD

1) This is the most popular form of hardwired control unit.


2) It follows the same logical approach of a flowchart, like the Delay element method, but does not
use all those unnecessary D Flip-Flops.
3) First a flowchart is made representing the behavior of a control unit.
4) It is then converted into a circuit using the same principle of AND & OR gates.
5) We need a delay of 1 T-state (one clock cycle) between every two consecutive control signals.
6) That is achieved by the above circuit.
7) If there are “k” distinct steps producing control signals, we employ a mod-k counter and a k-output
decoder.
8) The counter will start counting at the beginning of the instruction.
9) The “clock” input via an AND gate ensures each count will be generated after 1 T-state.
10) The count is given to the decoder which triggers the generation of “k” control signals, each after
a delay of 1 T-state.
11) When the instruction ends, the counter is reset so that next time, it begins from the first count.
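
The counter-plus-decoder arrangement can be mimicked in a few lines of Python: a mod-k counter advances once per clock (one T-state), and the decoded count selects which control step fires. The list of steps here is invented for the example.

# Sequence counter method: mod-k counter + k-output decoder (illustrative).
steps = ["MAR<-PC", "MBR<-Mem", "IR<-MBR, PC<-PC+1", "Execute"]  # k = 4 steps
k = len(steps)
count = 0                                  # counter is reset at the start of each instruction

for clock in range(k):                     # one decoder output fires per T-state
    print(f"T{count + 1}: {steps[count]}")
    count = (count + 1) % k                # counter wraps, ready for the next instruction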

ADVANTAGE:
Avoids the use of too many D Flip-Flops.

GENERAL DRAWBACKS OF A HARDWIRED CONTROL UNIT


1) Since they are based on hardware, as the instruction set increases, the circuit becomes more and
more complex. For modern processors having hundreds of instructions, it is virtually impossible to
create Hardwired Control Units.
2) Such large circuits are very difficult to debug.
3) As the processor gets upgraded, the entire Control Unit has to be redesigned, due to the rigid
nature of hardware design.
MICRO-INSTRUCTION FORMAT
The main part of the micro-instruction is its control field.
It determines the control signals to be produced.
It can be of two different formats: Horizontal or Vertical.

1) HORIZONTAL MICRO-INSTRUCTION
Here every bit of the micro-instruction corresponds to a control signal.
Whichever bit is “1”, that particular control signal will be produced by the micro-instruction.

2) VERTICAL MICRO-INSTRUCTION
Here bits of the micro-instruction have to be decoded.
The decoded output decides the control signal to be produced.
     HORIZONTAL MICRO-INSTRUCTION                         VERTICAL MICRO-INSTRUCTION

1    Every bit of the micro-instruction                   Bits of the micro-instruction have to be
     corresponds to a control signal.                     decoded to produce control signals.

2    Does not require a decoder.                          Needs a decoder.

3    N bits in the micro-instruction will                 N bits in the micro-instruction will
     produce N control signals in total.                  produce 2^N control signals in total.

4    Multiple control signals can be produced             Only one control signal can be produced
     by one micro-instruction.                            by one micro-instruction.

5    As the control signals increase, the                 To produce more control signals, more
     micro-instruction grows wider. Hence the             micro-instructions are needed. Hence the
     Control Memory grows horizontally.                   Control Memory grows vertically.

6    Executes faster as no decoding is needed.            Executes slower as decoding is needed.

7    Micro-instructions are very wide.                    Micro-instructions are much narrower.
     Hence the Control Memory is large.                   Hence the Control Memory is small.

8    Circuit is simpler as a decoder is not               Circuit is more complex as a decoder is
     needed.                                              needed.

As seen from the above comparison, both methods have their pros and cons.
So a combination of both is used together called Nano-Programming.
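
Before moving on, a tiny Python sketch of the difference: a horizontal word acts as a bit mask that can assert several signals at once, while a vertical word is decoded to exactly one signal. The eight signal names are assumptions for illustration.

# Horizontal vs vertical micro-instruction decoding (illustrative signal names).
signals = ["PCout", "MARin", "Read", "MBRout", "IRin", "PCinc", "ALUadd", "R1in"]

def horizontal(word):                     # each bit drives one control signal
    return [s for i, s in enumerate(signals) if word & (1 << i)]

def vertical(word):                       # the word is decoded to a single signal
    return [signals[word]]

print(horizontal(0b00000011))             # ['PCout', 'MARin'] - two signals at once
print(vertical(2))                        # ['Read'] - exactly one signal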
NANO-PROGRAMMING (Very Important)
1) Horizontal µ-instructions can produce multiple control signals simultaneously, but are very wide.
2) This makes the Control Memory very large in size.
3) Vertical micro-instructions are narrow, but on decoding can produce only one control signal.
4) This makes the Control Memory small but the execution is slow.
5) Hence a combination of both techniques is needed called Nano-Programming.
6) Here we have a two level control memory.
7) The instruction is fetched from the main memory into the IR.
8) Using its opcode, the address of its first micro-instruction is loaded into the µPC.
9) Using this address we fetch the micro-instruction from µ-Control Memory (µCM) into µIR.
10) This is in vertical form and has to be decoded.
11) The decoded output loads a new address in a Nano program counter (nPC).
12) Using this address we fetch the Nano-instruction from Nano-Control Memory (nCM) into nIR.
13) This is in horizontal form and can directly generate control signals.
14) Such a combination gives advantage of both techniques.
15) The size of the Control Memory is small as µ-instructions are Vertical.
16) Multiple control signals can be produced simultaneously as Nano-instructions are Horizontal.
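
The two-level lookup can be sketched as follows: a narrow vertical µ-instruction indexes the nano-control memory, whose wide horizontal word then asserts several control signals at once. All memory contents and signal names below are invented for illustration.

# Two-level (nano-programmed) control store, illustrative contents only.
signals = ["PCout", "MARin", "Read", "MBRout", "IRin", "PCinc"]

nano_cm = [                               # horizontal nano-words: one bit per signal
    0b000011,                             # PCout, MARin
    0b000100,                             # Read
    0b111000,                             # MBRout, IRin, PCinc
]
micro_cm = [0, 1, 2]                      # vertical micro-words: just an index into nano_cm

for upc, nano_addr in enumerate(micro_cm):          # the uPC walks the micro-program
    word = nano_cm[nano_addr]                       # nIR gets the horizontal nano-word
    active = [s for i, s in enumerate(signals) if word & (1 << i)]
    print(f"micro-step {upc}: {active}")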
     HARDWIRED CONTROL UNIT                               MICROPROGRAMMED CONTROL UNIT

1    Control signals are generated using                  Control signals are generated using
     hardware.                                            software (microprogram).

2    Since hardware is used, the circuit is               Since software is used, the circuit is
     rigid.                                               flexible.

3    Modification to the Control Unit requires            Modification to the Control Unit simply
     re-design of the entire hardware.                    requires re-programming of µ-instructions.

4    Ideally suited for processors with small             Ideally suited for processors with large and
     and simple instruction sets.                         complex instruction sets.

5    Debugging a large Hardwired Control                  As micro-programs are software, debugging
     Unit is very difficult.                              is much easier.

6    Emulation is not possible.                           Emulation is possible.

7    Executes faster as control signals are               Executes slower as time is wasted in
     directly generated by hardware.                      fetching and decoding µ-instructions.

8    Does not need a Control Memory.                      Needs a Control Memory.

9    Cost is lower as Control Memory is not               Cost is higher as Control Memory is needed
     needed.                                              inside the processor.

10   Preferred in RISC processors.                        Preferred in CISC processors.

As seen from the above comparison, both methods have their pros and cons.
Hence modern processors use a combination of both.
Simple and frequently used instructions are decoded by a Hardwired Control Unit, as it is faster.
Complex instructions are decoded by a Microprogrammed Control Unit, as it is easier to design.
TYPICAL MICROPROGRAMMED CONTROL UNIT (Super Important!)

1) Microprogrammed Control Unit produces control signals by software, using micro-instructions.


2) A program is a set of instructions.
3) An instruction requires a set of Micro-Operations.
4) Micro-Operations are performed by control signals.
5) Here, these control signals are generated using micro-instructions.
6) This means every instruction requires a set of micro-instructions
7) This is called its micro-program.
8) Microprograms for all instructions are stored in a small memory called “Control Memory”.
9) The Control memory is present inside the processor.
10) Consider an Instruction that is fetched from the main memory into the Instruction Register (IR).
11) The processor uses its unique “opcode” to identify the address of the first micro-instruction.
12) That address is loaded into the CMAR (Control Memory Address Register), also called the µPC.
13) This address is decoded to select the corresponding µ-instruction from the Control Memory.
14) There is a big improvement over Wilkes’ design, to reduce the size of micro-instructions.
15) Most micro-instructions will only have a Control field.
16) The Control field Indicates the control signals to be generated.
17) Most micro-instructions will not have an address field.
18) Instead, µPC will simply get incremented after every micro-instruction.
19) This is as long as the µ-program is executed sequentially.
20) Only if there is a branch µ-instruction will there be an address field.
21) If the branch is unconditional, the branch address will be directly loaded into CMAR.
22) For Conditional branches, the Branch condition will check the appropriate flag.
23) This is done using a MUX which has all flag inputs.
24) If the condition is true, then the MUX will inform CMAR to load the branch address.
25) If the condition is false CMAR will simply get incremented.
26) The control memory is usually implemented using flash ROM, as it is writable yet non-volatile.
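
Putting points 10 to 25 together, here is a compact Python sketch of a microprogram engine: CMAR addresses the control memory, sequential µ-instructions simply increment CMAR, and a conditional-branch µ-instruction consults a flag (the MUX) to decide whether to load the branch address. The micro-instruction encoding and flag names are assumptions for the example, not a real processor's format.

# Minimal microprogram engine (illustrative encoding, not a real processor's).
control_memory = {
    0: ("SIG", "MAR<-PC"),
    1: ("SIG", "MBR<-Mem"),
    2: ("SIG", "IR<-MBR, PC<-PC+1"),
    3: ("BRC", "Z", 6),            # conditional branch: if flag Z is set, go to address 6
    4: ("SIG", "ALU add"),
    5: ("END",),
    6: ("SIG", "skip add"),
    7: ("END",),
}
flags = {"Z": True}

cmar = 0                           # loaded from the opcode of the fetched instruction
while True:
    u_instr = control_memory[cmar]
    if u_instr[0] == "SIG":        # generate control signals, then fall through
        print(f"CMAR={cmar}: {u_instr[1]}")
        cmar += 1                  # sequential case: CMAR simply increments
    elif u_instr[0] == "BRC":      # the MUX checks the named flag
        cmar = u_instr[2] if flags[u_instr[1]] else cmar + 1
    else:                          # END of this micro-program
        break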

ADVANTAGES

1) The biggest advantage is flexibility.


2) Any change in the control unit can be made by simply changing the micro-instructions.
3) This makes modification and upgrading of the Control Unit very easy.
4) Moreover, software can be debugged much more easily than a large Hardwired Control Unit.
5) Since most micro-instructions are executed sequentially, they don't need an address field.
6) This significantly reduces the size of micro-instructions, and hence the Control Memory.

DRAWBACKS
1) Control memory has to be present inside the processor, increasing its size.
2) This also increases the cost of the processor.
APPLICATIONS OF MICROPROGRAMMED CONTROL UNIT / MICROPROGRAMMING
Microprogramming has various advantages like flexibility, simplicity, cost effectiveness etc.
As a result, it plays a major role in the following applications.

1) DEVELOPMENT OF CONTROL UNITS


Modern processors have very large and complex instruction sets.
Microprogramming is ideally suited for making Control Units of such processors as they are far less
complex and can be easily modified.

2) USER TAILORING OF THE CONTROL UNIT


As the Control Unit is developed using software, it can be easily reprogrammed.
This can be used for custom made modifications of the Control Unit.
To allow this, the Control Memory must be writeable like RAM or Flash ROMs.

3) EMULATION
Emulation is when one processor (A) is made to emulate or behave like another processor (B).
To do this, “A” must be able to execute the instructions of “B”.
If we re-program the Control Memory of “A”, same as that of “B”, then “A” will be able to emulate the
behavior of “B”, for every instruction. This is possible only in Microprogrammed Control Units.
This is generally used when a main processor has to emulate the behavior of a math co-processor.

4) IMPROVING THE OPERATING SYSTEM


Microprogramming can be used to implement complex and secure functions of the OS.
This not only makes the OS more powerful and efficient, but more importantly secure, as it
provides the OS a higher degree of protection from malicious virus attacks.

5) HIGH LEVEL LANGUAGE SUPPORT


Modern high level languages have far more advanced and complex data types.
Microprogramming can provide support for such data types directly from the processor level.
As a result the language becomes easy to compile and also faster to execute.

6) MICRO-DIAGNOSTICS
As Microprogrammed Control Units are software based, debugging an error is far easier than
doing the same for a complex hardwired control unit. This allows monitoring,
detection, isolation and repair of any kind of system error in the Control Unit.
At times, it can also act as a runtime substitute if the corresponding hardwired component fails.

7) DEVELOPMENT OF SPECIALIZED PROCESSORS


All processors are not general purpose. Many applications require specialized processors like DSP
(Digital Signal Processors) for communication, GPU (Graphic Processor Unit) for image processing.
They have complex instruction sets and also need to be constantly upgraded, making
Microprogrammed Control Unit the ideal choice.
MICRO-INSTRUCTION SEQUENCING
It is the method of determining the flow of the Microprogram.
There are two main techniques.

1) DUAL ADDRESS FIELD


In this approach, micro-instructions are NOT executed in a sequential manner.
The instruction register (IR) gives the address of the first micro-instruction.
Thereafter, each micro-instruction gives the address of the next micro-instruction.
If it is a conditional micro-instruction, it will contain two address fields.
One for the condition to be true and the other for false.
Hence it is called dual address field.
The multiplexer will decide the address to be loaded into the control memory address
register (CMAR) based on the status flags.
2) SINGLE ADDRESS FIELD
In this approach, micro-instructions are executed in a sequential manner.
The instruction register (IR) provides the address of the first micro-instruction, which is loaded into the CMAR.
Thereafter the address is simply incremented.
Hence every micro-instruction need not carry address of the next one.
This is true so long as the µ-program is executed in a sequential manner.
For an unconditional branch, the micro-instruction contains the branch address.
This address will be loaded into CMAR.
For a conditional branch, the micro-instruction contains the branch address for true condition.
If the condition is false, the current address in CMAR will be simply incremented.
This means even in the worst case, the micro-instruction will carry only one address.
Hence it is called single address field.
The multiplexer will decide the address to be loaded into the control memory address
register (CMAR) based on the status flags.
SOLVED EXAMPLE: EVALUATE X = (A + B) * (C + D) * (E / F)

Reverse Polish (postfix) form: AB+ CD+ * EF/ *

Zero-Address (Stack) Instructions
PUSH A        TOS ← A
PUSH B        TOS ← B
ADD           TOS ← (A + B)
PUSH C        TOS ← C
PUSH D        TOS ← D
ADD           TOS ← (C + D)
MUL           TOS ← (A + B) * (C + D)
PUSH E        TOS ← E
PUSH F        TOS ← F
DIV           TOS ← (E / F)
MUL           TOS ← (A + B) * (C + D) * (E / F)
POP X         M[X] ← TOS

One-Address (Accumulator) Instructions
LOAD A        AC ← M[A]
ADD B         AC ← AC + M[B]
STORE T       M[T] ← AC
LOAD C        AC ← M[C]
ADD D         AC ← AC + M[D]
MUL T         AC ← AC * M[T]
STORE T       M[T] ← AC
LOAD E        AC ← M[E]
DIV F         AC ← AC / M[F]
MUL T         AC ← AC * M[T]
STORE X       M[X] ← AC

Two-Address Instructions
MOV R1, A     R1 ← M[A]
ADD R1, B     R1 ← R1 + M[B]
MOV R2, C     R2 ← M[C]
ADD R2, D     R2 ← R2 + M[D]
MUL R1, R2    R1 ← R1 * R2
MOV R3, E     R3 ← M[E]
DIV R3, F     R3 ← R3 / M[F]
MUL R1, R3    R1 ← R1 * R3
MOV X, R1     M[X] ← R1

Three-Address Instructions
ADD R1, A, B     R1 ← M[A] + M[B]
ADD R2, C, D     R2 ← M[C] + M[D]
MUL R1, R1, R2   R1 ← R1 * R2
DIV R3, E, F     R3 ← M[E] / M[F]
MUL X, R1, R3    M[X] ← R1 * R3
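
To check the zero-address program, the stack trace can be replayed in a few lines of Python; the operand values are arbitrary and serve only to verify the instruction sequence.

# Replay of the zero-address (stack) program for X = (A+B)*(C+D)*(E/F).
# The operand values are arbitrary, chosen only to verify the sequence.
vals = {"A": 2, "B": 3, "C": 4, "D": 1, "E": 10, "F": 5}
stack = []

program = ["PUSH A", "PUSH B", "ADD", "PUSH C", "PUSH D", "ADD",
           "MUL", "PUSH E", "PUSH F", "DIV", "MUL", "POP X"]

for instr in program:
    op, *arg = instr.split()
    if op == "PUSH":
        stack.append(vals[arg[0]])
    elif op == "POP":
        vals[arg[0]] = stack.pop()
    else:                                   # ADD, MUL, DIV pop two operands, push one result
        b, a = stack.pop(), stack.pop()
        stack.append({"ADD": a + b, "MUL": a * b, "DIV": a / b}[op])

print(vals["X"])   # (2+3)*(4+1)*(10/5) = 50.0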

[The original scan continues with handwritten worked examples that are largely illegible: an instruction format showing opcode, addressing-mode and address fields; the instruction cycle state diagram (instruction fetch and decode, operand address calculation, operand fetch, data operation, result store and interrupt handling); and a problem on dividing a 32-bit instruction word into opcode, register and address fields for a machine with 256K words of memory and 64 registers.]
SOLVED EXAMPLE: PIPELINE SPEEDUP
A non-pipelined system takes 50 ns to process a task. The same task can be processed in a six-segment pipeline with a clock cycle of 10 ns. Determine the speedup for 100 tasks, and the maximum speedup that can be achieved.

tn = 50 ns, tp = 10 ns, k = 6 segments, n = 100 tasks

Speedup for 100 tasks:
S = (n x tn) / ((k + n - 1) x tp) = (100 x 50) / ((6 + 100 - 1) x 10) = 5000 / 1050 ≈ 4.76

Maximum speedup:
Smax = tn / tp = 50 / 10 = 5
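
The same arithmetic in executable form, using the standard k-segment speedup formula S = (n x tn) / ((k + n - 1) x tp):

# Pipeline speedup for the worked example above.
def speedup(k, n, tn, tp):
    return (n * tn) / ((k + n - 1) * tp)

tn, tp, k, n = 50, 10, 6, 100
print(round(speedup(k, n, tn, tp), 2))   # 4.76
print(tn / tp)                           # maximum speedup = 5.0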

You might also like