
MUHAMMAD ALI ASAD

FA19C2LA014
SUBMITTED TO: DR RIZWAN ANJUM SB
DIGITAL INTEGRATED CIRCUITS FOR
COMMUNICATION


1. What is the difference between a PAL and a GAL? Discuss the types of
programmable logic, SPLDs and CPLDs, and their basic structure.
Programmable Array Logic (PAL)
PAL, or Programmable Array Logic, is one type of programmable logic device (PLD), and its operation is similar to that of the PLA. A PAL has a programmable AND array at the input and a fixed OR array at the output, so a design is built from programmable AND gates feeding fixed OR gates. With this structure, functions are implemented in sum-of-products (SOP) form; the number of AND gates associated with each OR gate sets the maximum number of product terms available to a given output function.

Because each AND gate is permanently wired to a particular OR gate, a generated product term cannot be shared between output functions. The main motivation behind PLD development is to fit complex Boolean logic onto a single chip, eliminating faulty discrete wiring, simplifying the logic design, and reducing power consumption.

Example of PAL

Implement the following Boolean expression with the help of programmable array logic (PAL)

X =AB + AC’

Y= AB’ + BC’

Both Boolean functions are in sum-of-products (SOP) form. The product terms appearing in the expressions for X and Y are AB, AC', AB' and BC', so implementing the two equations requires four programmable AND gates and two fixed OR gates. The equivalent PAL logic diagram is shown below.
The programmable AND gates have access to both the true and complemented forms of each input. In the logic diagram, the inputs available to every AND gate are A, A', B, B', C and C', and each AND gate is programmed to generate a single product term.

All of the product terms are available at the inputs of each OR gate. A programmable connection in the diagram is denoted by the symbol 'X'.

The OR gate inputs are fixed, so the required product terms are wired to the appropriate OR gate inputs, and each gate then generates its particular Boolean equation. The symbol '.' represents a permanent connection.
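As a quick cross-check of the example above, the two SOP functions can be evaluated exhaustively in software. This is only an illustrative sketch in plain Python (no PAL tooling is assumed); it forms the four product terms AB, AC', AB' and BC' that the programmable AND plane would generate and sums them with the fixed OR gates.

```python
from itertools import product

def pal_example():
    """Evaluate X = AB + AC' and Y = AB' + BC' for all input combinations,
    mimicking the two-level AND-OR structure of a PAL."""
    print(" A B C | X Y")
    for a, b, c in product([0, 1], repeat=3):
        # Product terms formed in the programmable AND plane
        p1 = a & b          # AB
        p2 = a & (1 - c)    # AC'
        p3 = a & (1 - b)    # AB'
        p4 = b & (1 - c)    # BC'
        # Fixed OR plane: each output sums only its own product terms
        x = p1 | p2
        y = p3 | p4
        print(f" {a} {b} {c} | {x} {y}")

pal_example()
```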

Generic Array Logic (GAL):


Generic array logic was introduced by Lattice Semiconductor in 1983. GAL devices applied CMOS electrically erasable PROM (EEPROM, or E2PROM) technology to the PAL concept. The GAL architecture has a reprogrammable AND array, a fixed OR array and reprogrammable output logic. A GAL is similar to a PAL but adds output logic macrocells (OLMCs), which provide more flexibility: each macrocell can be configured either for a combinational output or for a registered output. Because a GAL can be erased and reprogrammed, one GAL can usually replace a whole set of different PALs.
The reprogrammable array is essentially a grid of conductors forming rows and columns with an
electrically erasable CMOS (E2CMOS) cell at each cross-point, rather than a fuse as in a PAL.
Each column is connected to the input of an AND gate, and each row is connected to an input
variable or its complement. Any combination of input variables or complements can be applied
to an AND gate to form any desired product term by programming each E2CMOS cell. Each
macro cell contains an edge-triggered D-type flip-flop and a pair of configurable multiplexers.
The control fuses for the GAL macro cells allow each macro cell to be configured in one of
three basic configurations. These configurations correspond to the various types of I/O
configurations found in the PAL devices that the GAL is designed to replace.

Difference between PAL and GAL:

A generic array logic (GAL) device has the same basic architecture as a PAL. The difference is that the programmable AND array of a GAL can be erased and reprogrammed, and the output logic of a GAL is also reprogrammable. Because of this, logic implemented in a GAL can be corrected simply by reprogramming the device.
Discuss the types of programmable logic, SPLD and CPLDs and their basic
structure?

Programmable Logic:

Programmable logic is a logic element whose function is not restricted to one particular function. It may be programmed at different points in its life cycle: at the earliest stage by the semiconductor vendor (standard cell, gate array), by the designer prior to assembly, or by the user, in circuit.

Programmable Logic Devices (PLDs)

PLDs are ICs with a large number of gates and flip-flops that can be configured with software to perform a specific logic function or to implement the logic for a complex circuit. Unlike a logic gate, which has a fixed function, a PLD has an undefined function at the time of manufacture; before the PLD can be used in a circuit it must be programmed, that is, reconfigured.

Types of Programmable Logic:

Programmable logic devices are available in many different types. The current range of devices spans from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core plus peripherals. In addition to this incredible difference in size, there are many variations in architecture.

Programmable logic devices can be divided into three distinct architectural groups: SPLDs, CPLDs and FPGAs. The first two are discussed below.

 Simple Programmable Logic Devices – SPLDs

 Complex Programmable Logic Devices – CPLDs
Simple Programmable Logic Devices (SPLDs):

SPLDs are the simplest, smallest and least-expensive type of programmable logic device. These
devices typically have logic gates laid out in arrays where the interconnection between these
arrays is configurable by the user.

The term SPLD covers several types of device:

 Programmable Logic Array (PLA) – This device has both programmable AND, and OR
planes.
 Field Programmable Logic Array (FPLA) – Same as PLA but can be erased and
reprogrammed.
 Programmable Array Logic (PAL) – This device has a programmable AND plane and a
fixed OR plane.
 GAL – This device has the same logical properties as the PAL but can be erased and
reprogrammed

The figure shows a general structure of an SPLD. The connection link across two wires can
either be predefined or programmable depending on the type of SPLD.
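To make the idea of a programmable connection concrete, the sketch below models each plane of a small SPLD as a matrix of fuse bits. This is a toy model built on assumptions, not any vendor's architecture: a 1 means the corresponding input literal or product term is connected, mirroring the 'X' symbols described in the PAL example earlier, and the fuse maps encode the same X and Y functions.

```python
INPUTS = ["A", "B", "C"]

def literals(values):
    """Expand each input value into its true and complemented form: A, A', B, B', C, C'."""
    lits = []
    for v in values:
        lits.extend([v, 1 - v])
    return lits

# Fuse maps (1 = connected). AND-plane rows are product terms, columns are literals.
AND_PLANE = [
    [1, 0, 1, 0, 0, 0],  # AB
    [1, 0, 0, 0, 0, 1],  # AC'
    [1, 0, 0, 1, 0, 0],  # AB'
    [0, 0, 1, 0, 0, 1],  # BC'
]
# OR-plane rows are outputs (X, Y), columns are product terms.
OR_PLANE = [
    [1, 1, 0, 0],  # X = AB + AC'
    [0, 0, 1, 1],  # Y = AB' + BC'
]

def evaluate(values):
    lits = literals(values)
    # AND plane: a product term is 1 only if every connected literal is 1
    terms = [all(l for l, fuse in zip(lits, row) if fuse) for row in AND_PLANE]
    # OR plane: an output is 1 if any connected product term is 1
    return [int(any(t for t, fuse in zip(terms, row) if fuse)) for row in OR_PLANE]

print(evaluate([1, 1, 0]))  # A=1, B=1, C=0 -> [1, 1]
```

In a PAL only the AND plane would be programmable; in a PLA both fuse maps are, which is exactly the distinction drawn in the list above.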
Complex Programmable Logic Device:

This group covers the middle ground in terms of complexity and density between SPLDs and
FPGAs. CPLDs can handle significantly larger designs than SPLDs, but provide less logic than
field programmable gate arrays (FPGAs). CPLDs contain several logic blocks, each of which
includes eight to 500 macrocells. For most practical purposes, CPLDs can be thought of as
multiple SPLDs (plus some programmable interconnect) in a single chip. Each of the 16 logic
array blocks shown is the equivalent of one SPLD. However, in an actual CPLD, there may be
more (or fewer) than 16 logic array blocks. Also, each of these logic array blocks is itself composed of macrocells and interconnect wiring, just like an ordinary SPLD.

The larger size of a CPLD allows you to implement either more logic equations or a more
complicated design. Most complex programmable logic devices contain macro cells with a sum-
of-product combinatorial logic function and an optional flip-flop. Complex programmable logic
devices feature predictable timing characteristics that make them ideal for critical, high-
performance control applications. Typically, CPLDs have a shorter and more predictable delay
than FPGAs and other programmable logic devices. Because they are inexpensive and require
relatively small amounts of power, CPLDs are often used in cost-sensitive, battery-operated
portable applications. CPLDs are also used in simple applications such as address decoding.

Because CPLDs can hold larger designs than SPLDs, their potential uses are more varied. They
are still sometimes used for simple applications like address decoding, but more often contain
high-performance control-logic or complex finite state machines. At the high-end (in terms of
numbers of gates), there is also a lot of overlap in potential applications with FPGAs.
Traditionally, CPLDs have been chosen over FPGAs whenever high-performance logic is
required. Because of its less flexible internal architecture, the delay through a CPLD (measured
in nanoseconds) is more predictable and usually shorter.

Advantages of Programmable Logic Devices:

Programmable logic devices offer a number of important advantages over fixed logic devices,
including:

 Design Flexibility: PLDs offer customers much more flexibility during the design cycle
because design iterations are simply a matter of changing the programming file, and the
results of design changes can be seen immediately in working parts.
 Improved Reliability: Lower power plus fewer interconnections and packages translate
into greatly improved system reliability.
 Lower Power: CMOS and fewer packages combine to reduce power consumption.
 Reduced Complexity: PLDs consume less power, require less board space, and need simpler testing procedures.
 PLDs are field-programmable, i.e. they can be programmed outside of the manufacturing environment.
 PLDs are erasable and reprogrammable, i.e. a device can be updated or corrected and then reused for a different design – the ultimate in reusability.
2. Discuss the terms RIMM, DIMM, and SIMM in detail. What is a major advantage of flash memory over SRAM or DRAM? List the three modes of operation of flash memory with their characteristics.
RAM (Random Access Memory) is a type of computer memory that can be accessed randomly and that holds data temporarily while the CPU processes it.

Several different types of SIMMs, DIMMs, and RIMMs have been commonly used in desktop
systems. The various types are often described by their pin count, memory row width, or
memory type.

SIMM Details:

SIMMs (Single In-line Memory Modules) are small circuit boards with edge connectors on which the RAM chips are mounted. SIMMs were widely used from the late 1980s to the late 1990s but have since become obsolete. Slots are provided on the motherboard for inserting these SIMMs; if the SIMM connectors are gold-plated, the slot connectors should also be gold and not another metal. The metal contacts on the two sides of the bottom edge are connected through the card, so effectively only one set of contacts is in use. 72-pin SIMMs use a set of four or five pins, known as presence detect pins, to indicate the type of SIMM to the motherboard. Each presence detect pin is either tied to ground on the SIMM (through a 0-ohm resistor or jumper) or left open: a grounded pin reads as a low logic level and an open pin as a high logic level, producing signals that the memory interface logic can decode. If the motherboard uses the presence detect signals, the power-on self-test (POST) procedure can determine the size and speed of the installed SIMMs and adjust the control and addressing signals automatically, enabling autodetection of memory size and speed. Presence detect performs the same function for 72-pin SIMMs that the serial presence detect (SPD) chip does for DIMMs.
Because these pins can have custom variations, you often must specify IBM, Compaq, HP, or
generic SIMMs when you order memory for systems using 72-pin SIMMs. Although very few
(if any) of these systems are still in service, keep this information in mind if you are moving 72-
pin modules from one system to another or are installing salvaged memory into a system. Also,
be sure you match the metal used on the module connectors and sockets. SIMM pins can be tin
or gold plated, and the plating on the module pins must match that on the socket pins;
otherwise, corrosion will result.

Types of SIMM

There are two variants of the SIMM, one with 30 pins and the other with 72 pins.

A 30-pin SIMM has a data width of 8 bits and holds 1 MB or 4 MB of RAM, so it transfers 8 bits to or from the memory bus at a time. Later 30-pin SIMMs add a parity bit for error detection, which makes the data width 9 bits. To ensure proper installation, the SIMM has a notch at the bottom left.

A 72-pin SIMM has a data width of 32 bits, or 36 bits including parity bits; each byte is allotted one parity bit (4 parity bits for 32 data bits). Its capacity can be 4, 8, 16, 32 or 64 MB. It is notched at the side and at the center of the module.
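The parity scheme just mentioned (one parity bit per byte, so 36 bits for a 32-bit word) can be illustrated with a short sketch. Even parity is assumed here purely for illustration, since the text does not specify the polarity used.

```python
def byte_parity_bits(word32: int) -> list[int]:
    """Return one even-parity bit per byte of a 32-bit word
    (4 parity bits for 32 data bits, as on a 36-bit 72-pin SIMM)."""
    bits = []
    for i in range(4):
        byte = (word32 >> (8 * i)) & 0xFF
        ones = bin(byte).count("1")
        # Even parity: the parity bit is 1 when the byte has an odd number of 1s,
        # so that data + parity always contain an even number of 1s.
        bits.append(ones % 2)
    return bits

print(byte_parity_bits(0xDEADBEEF))  # one parity bit per byte, lowest byte first
```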
DIMM:

SDR (single data rate) DIMMs use a completely different type of presence detect from SIMMs, called serial presence detect (SPD). It consists of a small EEPROM or flash memory chip on the DIMM that contains specially formatted data describing the DIMM's features. This serial data can be read via the serial data pins on the DIMM, enabling the motherboard to auto-configure for the exact type of DIMM installed. A DIMM (Dual In-line Memory Module) also has metal contacts similar to a SIMM, but the contacts on the two sides of the module are electrically independent of each other. Motherboards have used 168-, 184- and 240-pin DIMMs. An SDR DIMM operates at 3.3 volts and can store from 32 MB up to 1 GB of memory.

DIMMs come in several varieties, including unbuffered and buffered, as well as 3.3V and 5V. Buffered DIMMs have additional buffer chips on them to interface to the motherboard; unfortunately, these buffer chips slow the DIMM down and are not effective at higher speeds, so most PC systems (those that do not use registered DIMMs) use unbuffered DIMMs. The voltage question is simple: DIMM designs for PCs are almost universally 3.3V. If you installed a 5V DIMM in a 3.3V socket it would be damaged, but fortunately keying in the socket and on the DIMM prevents that.
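The SPD data mentioned above is just a small block of bytes that the firmware reads and decodes at boot. The sketch below is a hypothetical decoder: the byte offsets, codes and encodings are invented for illustration only and do not follow the real JEDEC SPD layout.

```python
# Hypothetical SPD decoder; offsets and codes are illustrative only,
# not the actual JEDEC SPD byte assignments.
MEMORY_TYPES = {0x01: "SDR SDRAM", 0x02: "DDR", 0x03: "DDR2", 0x04: "DDR3"}

def decode_spd(spd: bytes) -> dict:
    """Pretend-decode a few fields from an SPD EEPROM dump."""
    return {
        "memory_type": MEMORY_TYPES.get(spd[0], "unknown"),
        "module_size_mb": 32 << spd[1],   # illustrative size encoding
        "speed_mhz": spd[2] * 33,         # illustrative speed encoding
    }

print(decode_spd(bytes([0x03, 0x04, 0x12])))
# -> {'memory_type': 'DDR2', 'module_size_mb': 512, 'speed_mhz': 594}
```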
DDR DIMM:

The 184-pin DDR DIMMs use a single key notch to indicate voltage. DDR DIMMs also use two notches on each side to enable compatibility with both low- and high-profile latched sockets. Note that the key position is offset with respect to the center of the DIMM to prevent inserting it backward in the socket. The key notch is positioned to the left of, centered on, or to the right of the area between pins 52 and 53; this indicates the I/O voltage for the DDR DIMM and prevents installing the wrong type into a socket, which might damage the DIMM.

DDR2 DIMM:

The 240-pin DDR2 DIMMs use two notches on each side to enable compatibility with both
low- and high-profile latched sockets. The connector key is offset with respect to the center of
the DIMM to prevent inserting it backward in the socket. The key notch is positioned in the
center of the area between pins 64 and 65 on the front (184/185 on the back), and there is no
voltage keying because all DDR2 DIMMs run on 1.8V.

DDR3 DIMM

The 240-pin DDR3 DIMMs use two notches on each side to enable compatibility with both
low- and high-profile latched sockets. The connector key is offset with respect to the center of
the DIMM to prevent inserting it backward in the socket. The key notch is positioned in the
center of the area between pins 48 and 49 on the front (168/169 on the back), and there is no
voltage keying because all DDR3 DIMMs run on 1.5V.

Types of DIMM

A 168-pin DIMM differs in structure from a SIMM because it has small notches within the row of pins along the bottom edge of the module.

184- and 240-pin DIMMs have only one such notch, placed at a different position, to prevent improper placement of the DIMM in the socket.
RIMM

The 16/18-bit RIMMs are keyed with two notches in the center. This prevents backward insertion and prevents a RIMM of the wrong type (voltage) from being used in a system. Currently, all RIMMs run on 2.5V, but proposed 64-bit versions will run on only 1.8V. To allow for changes in the RIMMs, three keying options are possible in the design. RIMMs incorporate an SPD device, which is essentially a flash ROM on board. This ROM contains information about the RIMM's size and type, including detailed timing information for the memory controller. The memory controller automatically reads the data from the SPD ROM to configure the system to match the RIMMs installed.

RIMM memory is expensive and slower than DIMM memory. A RIMM that uses a 16-bit data bus has two notches and 184 pins. A RIMM that uses a 32-bit data bus has a single notch and 232 pins and supports dual channels. With RIMMs, all memory slots on the motherboard must be filled to maintain continuity through all slots; any slot not filled with RAM must hold a placeholder module called a C-RIMM (Continuity RIMM) to ensure continuity throughout all slots.
RIMMs also have different signal pins on each side. Three different physical types of RIMMs
are available: a 16/18-bit version with 184 pins, a 32/36-bit version with 232 pins, and a 64/72-
bit version with 326 pins. Each of these plugs into the same sized connector, but the notches in
the connectors and RIMMs are different to prevent a mismatch. A given board will accept only
one type. By far the most common type is the 16/18-bit version. The 32-bit version was
introduced in late 2002, and the 64-bit version was introduced in 2004. The standard 16/18-bit
RIMM has 184 pins, one notch on either side, and two notches centrally located in the contact
area. The 16-bit versions are used for non-ECC applications, whereas the 18-bit versions
incorporate the additional bits necessary for ECC.

The main physical difference between SIMMs and DIMMs is that DIMMs have different signal
pins on each side of the module, resulting in two rows of electrical contacts. That is why they
are called dual inline memory modules, and why with only 1" of additional length, they have
many more pins than a SIMM.

Comparison Chart

Basis for Comparison   SIMM                                    DIMM
Basic                  Pins on opposite sides are connected    Pins on each side are independent
                       to each other
Channel width          32 bit                                  64 bit
Power consumption      5 volts                                 3.3 volts
Storage provided       4 MB to 64 MB                           32 MB to 1 GB
Applications           Used in 486 and early Pentium PCs       Used in modern Pentium-class PCs

SRAM & DRAM:

SRAM:

Each SRAM cell stores a bit using a six-transistor latch circuit (DRAM uses a transistor and a capacitor). SRAM is volatile, but as long as the system is powered, SRAM retains its data without recharging the cells. It is fairly insensitive to electrical noise, that is, to unwanted electrical signals that interfere with the desired signal. Because it is faster and costs more than DRAM, it is normally used for CPU memory caches or in high-end, high-performance servers. SRAM access time is typically 20-40 ns (nanoseconds). It is used in computers, mobile phones, automotive electronics, electronic toys, etc.
DRAM:

Each DRAM cell stores a bit using a single transistor-capacitor pair. Because a single component pair forms a cell, and billions of them fit on a single chip, DRAM can reach very high densities. Like SRAM, DRAM is volatile, but unlike SRAM each cell must be periodically refreshed because the capacitors leak charge. It is sensitive to electrical noise. DRAM access times usually range between 60 ns and 100 ns, still fast but slower than SRAM. Typical throughput is 20-40 GB/s, and the continual cell refreshing results in higher latency and lower effective bandwidth than SRAM.

As computing moves faster, and the all flash data center takes hold, the need to create still faster
RAM will continue. This will likely impact both these RAM types. The rapid pace of SSD
innovation demands constant upgrades in RAM performance.

They differ in the technology used to hold the data. DRAM has to be refreshed thousands of times per second, whereas SRAM does not need to be refreshed. DRAM access time is about 60 nanoseconds, whereas SRAM access time is about 10 nanoseconds. SRAM is more expensive, which is why DRAM is the most common type of RAM. Both are volatile, meaning that they lose their contents when power is turned off.
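The claim that DRAM is refreshed thousands of times per second can be made concrete with a back-of-the-envelope calculation. The row count and the 64 ms retention window below are typical assumed values, not figures taken from the text.

```python
# Rough DRAM refresh arithmetic (assumed, typical values).
rows = 8192                 # rows per bank that must each be refreshed
retention_ms = 64           # each row must be refreshed within this window

refreshes_per_second = rows * (1000 / retention_ms)
interval_us = (retention_ms * 1000) / rows   # average gap between refresh commands

print(f"{refreshes_per_second:,.0f} row refreshes per second")   # 128,000
print(f"one refresh command every {interval_us:.1f} microseconds")  # ~7.8 us
```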
Advantage of Flash Memory over SRAM and DRAM:

 The biggest advantage of flash memory is that it is non-volatile. Both SRAM and DRAM lose their data when power is removed (and a DRAM read is destructive, so the data must be rewritten after every read). That is where flash is helpful.
 Programs, along with the variables and constants they create, are held in RAM while running, but flash retains its contents without power.
 Newer non-volatile memory technologies are now also in use that improve on flash in some respects.

Advantages of SRAM:

 SRAM performance is better than DRAM in terms of speed: it is faster in operation, so it takes less time to access data compared with DRAM.
 It is used to build speed-sensitive caches.
 It has medium power consumption.
 Because it does not need to refresh its memory contents, its access time is faster.
 Also, because it does not need to be refreshed, no refresh logic or circuitry is required, so the memory module itself is simpler.

Advantages of DRAM:

Following are the benefits or advantages of DRAM:

 DRAM contents can be erased and refreshed while a program is running.
 It is cheaper compared with SRAM.
 Its memory cell is smaller in size.
 It has higher storage capacity, so it is used to build the larger main-memory space of a system.
 Its memory cell structure is simpler than that of SRAM.
List the three modes of operation of flash memory with their characteristics.

Flash memory:

Flash memory is inside your smartphone, GPS, MP3 player, digital camera, PC and the USB drive on your key chain. Solid-state drives (SSDs) using flash memory are replacing hard drives in netbooks and PCs and even in some server installations. Needing no battery or other power to retain data, flash is convenient and relatively foolproof. Flash memory is a solid-state chip that maintains stored data without any external power source. It is commonly used in portable electronics and removable storage devices, and to replace computer hard drives.

Flash memory is a type of electrically erasable programmable read-only memory (EEPROM): memory chips that retain information without requiring power. (This is different from flash RAM, which does need power to retain data.) Regular EEPROM erases content byte by byte; most flash memory erases data in whole blocks, making it suitable for applications where large amounts of data require frequent updates. Inside the flash chip, data is stored in cells protected by floating gates. Tunneling electrons change a gate's electrical charge in "a flash" (hence the name), clearing the cell of its contents so it can be rewritten.

Flash memory devices use two different logical technologies NOR and NAND to map data.

NOR flash provides high-speed random access, reading and writing data at specific memory locations; it can retrieve as little as a single byte. NOR is used to store cell phones' operating systems, and in computers it holds the BIOS program that runs at start-up. NOR flash can serve as execute-in-place (XIP) memory, meaning that programs stored in NOR flash can be executed directly without first being copied into RAM. NOR has high transfer efficiency and is cost-effective at small capacities of 1-4 MB, but it has comparatively slow write and erase speeds.
NAND flash reads and writes sequentially at high speed, handling data in small blocks called pages. It is used in solid-state drives and USB flash drives, digital cameras, audio and video players, and TV set-top boxes. NAND flash reads faster than it writes, quickly transferring whole pages of data. Less expensive than NOR flash, NAND technology offers higher capacity for the same silicon area. It is accessed much like a block device, such as a hard disk, and is best suited to systems requiring high-capacity data storage: it offers higher densities, larger capacities and lower cost, with faster erases, sequential writes and sequential reads. NAND flash memory forms the core of removable USB storage devices (USB flash drives) as well as of most memory card formats and solid-state drives available today.
3. List the types of SRAM and DRAM. Also explain how SRAM and DRAM differ from each other.

RAM:

RAM, or random access memory, is a kind of computer memory in which any byte of memory
can be accessed without needing to access the previous bytes as well. RAM is a volatile medium
for storing digital data, meaning the device needs to be powered on for the RAM to work.

Definition of SRAM

SRAM (Static Random Access Memory) is built in CMOS technology and uses six transistors per cell. Its construction comprises two cross-coupled inverters that store the binary data, similar to a flip-flop, plus two additional transistors for access control. It is relatively faster than other RAM types such as DRAM and consumes less power. SRAM holds its data as long as power is supplied to it.

Working of SRAM for an individual cell:

To create a stable logic state, four transistors (T1, T2, T3, T4) are organized in a cross-coupled arrangement. For logic state 1, node C1 is high and C2 is low; in this state T1 and T4 are off and T2 and T3 are on. For logic state 0, node C1 is low and C2 is high; in this state T1 and T4 are on and T2 and T3 are off. Both states are stable as long as the dc supply voltage is applied.

The SRAM address line operates the access switches, controlling transistors T5 and T6 to permit reading and writing. For a read operation the signal is applied to the address line, T5 and T6 turn on, and the bit value is read from line B. For a write operation the signal is applied to bit line B and its complement to B'.
Definition of DRAM

DRAM (Dynamic Random Access Memory) is also a type of RAM; it is constructed from a capacitor and a transistor per cell. The capacitor stores the data: a charged capacitor represents bit value 1 and a discharged capacitor represents bit value 0. Capacitors tend to discharge, which results in leakage of the stored charge.

The term dynamic indicates that the charge is continuously leaking even while power is continuously supplied, which is the reason it consumes more power: to retain data for a long time, the cells must be repeatedly refreshed, requiring additional refresh circuitry. Because of the leaking charge, DRAM loses its data even with power switched on if it is not refreshed. DRAM is available in higher capacities and is less expensive, and it requires only a single transistor per memory cell.

Working of typical DRAM cell:

When reading or writing the bit value in a cell, the address line is activated. The transistor in the circuit behaves as a switch that is closed (allowing current to flow) when a voltage is applied to the address line and open (no current flows) when no voltage is applied. For a write operation, a voltage signal is applied to the bit line, where a high voltage represents 1 and a low voltage represents 0; a signal is then applied to the address line, which allows the charge to be transferred to the capacitor.

When the address line is selected for a read operation, the transistor turns on and the charge stored on the capacitor is fed out onto the bit line and to a sense amplifier.

Key Differences between SRAM and DRAM

 SRAM is an on-chip memory whose access time is small while DRAM is an off-chip
memory which has a large access time. Therefore SRAM is faster than DRAM.
 DRAM is available in larger storage capacity while SRAM is of smaller size.
 SRAM is expensive whereas DRAM is cheap.
 The cache memory is an application of SRAM. In contrast, DRAM is used in main
memory.
 DRAM is highly dense, whereas SRAM is less dense.
 The construction of SRAM is complex due to the usage of a large number of transistors.
On the contrary, DRAM is simple to design and implement.
 In SRAM a single block of memory requires six transistors whereas DRAM needs just
one transistor for a single block of memory.
 DRAM is named dynamic because it uses a capacitor whose dielectric, which separates the conductive plates, is not a perfect insulator; the stored charge therefore leaks away and the cell must be refreshed periodically.
6. Discuss basic principles and issues of dynamic CMOS design from a speed and power dissipation perspective.

Introduction:

The term CMOS stands for "Complementary Metal Oxide Semiconductor". CMOS technology is one of the most popular technologies in the computer chip design industry and is broadly used today to form integrated circuits in numerous and varied applications. Today's computer memories, CPUs and cell phones make use of this technology because of several key advantages. This technology makes use of both P-channel and N-channel semiconductor devices.

One of the most popular MOSFET technologies available today is the Complementary MOS or
CMOS technology. This is the dominant semiconductor technology for microprocessors,
microcontroller chips, memories like RAM, ROM, EEPROM and application specific
integrated circuits (ASICs).

CMOS (Complementary Metal Oxide Semiconductor)

The main advantage of CMOS over NMOS and bipolar technology is its much smaller power dissipation. Unlike NMOS or bipolar circuits, a complementary MOS circuit has almost no static power dissipation; power is dissipated only when the circuit actually switches. This allows more CMOS gates to be integrated on an IC than in NMOS or bipolar technology, resulting in much better performance. A CMOS gate is built from P-channel MOS (PMOS) and N-channel MOS (NMOS) transistors.

NMOS
NMOS is built on a p-type substrate with n-type source and drain diffused on it. In NMOS, the
majority carriers are electrons. When a high voltage is applied to the gate, the NMOS will
conduct. Similarly, when a low voltage is applied to the gate, NMOS will not conduct. NMOS
are considered to be faster than PMOS, since the carriers in NMOS, which are electrons, travel
twice as fast as the holes.

PMOS

A P-channel MOSFET consists of a P-type source and drain diffused into an N-type substrate, and its majority carriers are holes. When a high voltage is applied to the gate, the PMOS will not conduct; when a low voltage is applied to the gate, it will conduct. PMOS devices are more immune to noise than NMOS devices.

CMOS Working Principle

In CMOS technology, both N-type and P-type transistors are used to design logic functions. The same signal that turns ON a transistor of one type is used to turn OFF a transistor of the other type. This characteristic allows logic devices to be designed using only simple switches, without the need for a pull-up resistor. In CMOS logic gates, a collection of N-type MOSFETs is arranged in a pull-down network between the output and the low-voltage supply rail. Instead of the load resistor of NMOS logic gates, CMOS logic gates have a collection of P-type MOSFETs in a pull-up network between the output and the higher-voltage rail.

Thus, if both a p-type and n-type transistor have their gates connected to the same input, the p-
type MOSFET will be ON when the n-type MOSFET is OFF, and vice-versa. The networks are
arranged such that one is ON and the other OFF for any input pattern as shown in the figure
below.

CMOS offers relatively high speed, low power dissipation, high noise margins in both states,
and will operate over a wide range of source and input voltages (provided the source voltage is
fixed).

CMOS Inverter

It consists of a PMOS and an NMOS FET. The input A serves as the gate voltage for both transistors. The NMOS transistor's source is connected to Vss (ground) and the PMOS transistor's source to Vdd; the terminal Y is the output. When a high voltage (~Vdd) is applied at the input terminal A, the PMOS becomes an open circuit and the NMOS switches ON, so the output is pulled down to Vss.

When a low-level voltage (<Vdd, ~0 V) is applied to the input, the NMOS switches OFF and the PMOS switches ON, so the output becomes Vdd; the circuit is pulled up to Vdd.

CMOS NAND Gate

It consists of two series NMOS transistors between Y and ground and two parallel PMOS transistors between Y and VDD. If either input A or B is logic 0, at least one of the NMOS transistors will be OFF, breaking the path from Y to ground, but at least one of the PMOS transistors will be ON, creating a path from Y to VDD.

Hence, the output Y will be high. If both inputs are high, both of the nMOS transistors will be
ON and both of the pMOS transistors will be OFF. Hence, the output will be logic low.
CMOS NOR Gate

The NMOS transistors are in parallel to pull the output low when either input is high. The
PMOS transistors are in series to pull the output high when both inputs are low. The output is
never left floating.
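The complementary pull-up/pull-down arrangement described for the inverter, NAND and NOR gates can be checked with a simple switch-level sketch (plain Python, ideal switches assumed): for every input pattern exactly one of the two networks conducts, so the output is always driven and never floats.

```python
from itertools import product

def cmos_nand(a, b):
    pull_down = a and b              # two series NMOS conduct only if both inputs are high
    pull_up = (not a) or (not b)     # two parallel PMOS conduct if either input is low
    assert pull_up != pull_down      # exactly one network is ON -> output never floats
    return 1 if pull_up else 0

def cmos_nor(a, b):
    pull_down = a or b               # parallel NMOS pull-down
    pull_up = (not a) and (not b)    # series PMOS pull-up
    assert pull_up != pull_down
    return 1 if pull_up else 0

def cmos_inverter(a):
    pull_down = bool(a)              # NMOS on for a high input
    pull_up = not a                  # PMOS on for a low input
    assert pull_up != pull_down
    return 1 if pull_up else 0

print("A B | NAND NOR NOT(A)")
for a, b in product([0, 1], repeat=2):
    print(a, b, "|", cmos_nand(a, b), cmos_nor(a, b), cmos_inverter(a))
```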

CMOS Applications

Complementary MOS processes were widely implemented and have fundamentally replaced
NMOS and bipolar processes for nearly all digital logic applications. The CMOS technology
has been used for the following digital IC designs.

 Computer memories, CPUs


 Microprocessor designs
 Flash memory chip designing
 Used to design application-specific integrated circuits (ASICs)

CMOS inverter power dissipation:

When the input is steady at either a high or a low voltage (static condition) then one transistor is
always off between Vdd and Vss. Hence the current flowing is extremely small - equal to the
leakage current of the off transistor which is typically 100 nA. As a result of this the static
power dissipation is extremely low and it is this reason that has made CMOS such a popular
choice of technology.
For input voltages between VT and Vdd − VT, the individual MOS transistors are switched on by an amount dictated by Equations 9.1 and 9.2, and thus current flows from Vdd to Vss. When the input voltage is Vdd/2, both transistors are turned on by the same amount, the current rises to a maximum, and power is dissipated. Many integrated circuits contain several thousand gates, so this power dissipation can be large; it is for this reason that the input voltage to a CMOS circuit must not be held at Vdd/2. The power dissipated while the inputs are switching is called dynamic power dissipation, but as long as the input signals have fast rise and fall times this through-current component is small. The main cause of dynamic power dissipation in a CMOS circuit is the charging and discharging of the capacitance at each gate output. The dynamic power dissipation of a CMOS gate therefore depends on how often the load capacitance is charged and discharged, so it increases with switching frequency. To first order, the dynamic power dissipation of a CMOS gate is equal to Pd = CL × Vdd² × f, where CL is the load capacitance at the output, Vdd is the supply voltage, and f is the switching frequency.
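Plugging representative numbers into that expression shows why switching frequency and supply voltage dominate dynamic power. The activity factor, capacitance, supply voltage, clock rate and node count below are all assumed values chosen only for illustration; the activity factor alpha is an addition beyond the text's first-order formula, accounting for how often a node actually toggles.

```python
# Dynamic power of one CMOS node: P = alpha * C_L * Vdd^2 * f
# (all numbers below are assumed, for illustration only)
alpha = 0.1          # switching activity: fraction of cycles the node toggles
C_load = 10e-15      # load capacitance in farads (10 fF)
Vdd = 1.0            # supply voltage in volts
f_clk = 1e9          # clock frequency in hertz (1 GHz)

p_node = alpha * C_load * Vdd**2 * f_clk
print(f"per-node dynamic power: {p_node * 1e6:.2f} uW")    # 1.00 uW

# A chip with many such nodes multiplies this up quickly.
nodes = 10_000_000
print(f"chip-level estimate: {p_node * nodes:.2f} W")       # 10.00 W
```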

CMOS Power Trends

Scaling down transistor dimensions has produced faster chips with higher integration density. However, high-density chips suffer from high power dissipation, so there is a need to reduce it, since device power consumption has been rising with each successive technology node. There are two kinds of power dissipation in complementary metal oxide semiconductor (CMOS) circuit design: static power dissipation and dynamic power dissipation. Typically, the dynamic power consumption is higher than the static power dissipation. Nevertheless, the situation changes as the metal oxide semiconductor field-effect transistor (MOSFET) is scaled down: the static power increases with every new technology node. The leakage power dissipation has therefore grown with technology scaling and exceeds the dynamic power dissipation below the 65 nm technology node.
To maintain power efficiency and high device speed, the power supply as well as the threshold voltage must be scaled down in proportion to the channel length. However, a decrease in threshold voltage increases the OFF current (IOFF), which raises the static power dissipation of the device to an unacceptable level. Scaling of the MOSFET threshold voltage is therefore limited by the fundamental minimum subthreshold swing (SS) of 60 mV/decade, which means the threshold voltage cannot be scaled down in proportion to the supply voltage, and the overdrive voltage is reduced. Consequently, scaling the supply voltage out of proportion to the threshold voltage reduces the drive current of the device, resulting in slower operation. These, then, are the two major consequences of the non-scalability of the MOSFET threshold voltage: increased static (leakage) power if the threshold is lowered, or reduced drive current and speed if it is not.
4. Discuss in detail the development, new generations and recent trends in the circuit design of wireless communication transceivers integrated in CMOS IC technology. Support your answer with an example of an RF CMOS circuit design technique, and describe how the strengths and weaknesses of these circuits influence the choice of radio architecture.

WIRELESS COMMUNICATION TRANSCEIVERS


A transceiver is a combination transmitter/receiver in a single package. The term applies
to wireless communications devices such as cellular telephones, cordless telephone sets,
handheld two-way radios, and mobile two-way radios. 

In a radio transceiver, the receiver is silenced while transmitting. An electronic switch allows
the transmitter and receiver to be connected to the same antenna, and prevents the
transmitter output from damaging the receiver. With a transceiver of this kind, it is impossible
to receive signals while transmitting. This mode is called half duplex. Transmission and
reception often, but not always, are done on the same frequency.

Some transceivers are designed to allow reception of signals during transmission periods. This
mode is known as full duplex, and requires that the transmitter and receiver operate on
substantially different frequencies so the transmitted signal does not interfere with reception.
Cellular and cordless telephone sets use this mode. Satellite communications networks often
employ full-duplex transceivers at the surface-based subscriber points. The transmitted signal
(transceiver-to-satellite) is called the uplink, and the received signal (satellite-to-transceiver) is
called the downlink.

Design and Implementation of a New Wireless Microwave Transceiver Based


on Radio Frequency Technology
RF technology is widely used in many fields. This study is based on knowledge of RF circuits and aims to design a wireless communication transceiver comprising a transmitter circuit, a receiver circuit and a communication antenna. In the design process, the transmitter and receiver circuits must not only meet the requirements of each module individually; the cascaded system must also work properly. The final design is an RF terminal operating at 2.4 GHz; the system should have good anti-jamming capability and as long a transmission distance as possible.
SYSTEM DESIGN
The system consists of a transmitting module and a receiving module, and the whole system works at 2.4 GHz. The transmitter generates the local oscillator signal through a computer-controlled chip, and this local oscillator signal is fed to the mixer to obtain the high-frequency signal.[5] The input signal, amplified by the low noise amplifier, is sent to a filter to reduce noise and is then mixed with the local oscillator to raise the signal's center frequency.

On the receive side, the signal picked up by the antenna (selected by the transmit/receive switch) undergoes primary amplification and secondary filtering to obtain the signal within the specified frequency band. It is then mixed with the local oscillator to produce the IF signal, the high-frequency components are removed by the IF filter, and the signal is finally output after amplification.
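The receive chain just described (amplify, filter, mix down with the local oscillator, IF filter) can be sketched numerically. The NumPy model below uses scaled-down frequencies and invented gains purely to show the order of operations; it is not the actual 2.4 GHz design from this study.

```python
import numpy as np

fs = 100e6                      # sample rate (assumed)
t = np.arange(0, 1e-4, 1 / fs)  # 100 microseconds of signal

f_rf, f_lo = 24e6, 21e6         # scaled-down "RF" and LO frequencies (illustrative)
rx = 1e-3 * np.cos(2 * np.pi * f_rf * t)          # weak received carrier

lna_out = 100 * rx                                 # low-noise amplifier: +40 dB voltage gain
mixed = lna_out * np.cos(2 * np.pi * f_lo * t)     # mixer: products at f_rf +/- f_lo

# Crude IF "filter": keep only spectral content near the 3 MHz difference frequency
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[np.abs(freqs - (f_rf - f_lo)) > 1e6] = 0
if_signal = np.fft.irfft(spectrum, n=len(mixed))

print(f"IF output amplitude ~ {np.max(np.abs(if_signal)):.3f}")  # about 0.05 (half the mixer input)
```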

BLOCK DIAGRAM OF TRANSMITTER SYSTEM

BLOCK DIAGRAM OF RECEIVER SYSTEM


EXAMPLES OF RF CIRCUIT DESIGN

BASIC RF CIRCUIT

RF TRANSMITTER CIRCUIT
Strength of CMOS Technology
Very low static power consumption

Reduce the complexity of the circuit.

High density of logic functions on a chip.

Low static power consumption

High noise immunity

The power per gate is about 1 mW at 1 MHz; this power consumption is lower than that of TTL and ECL. The noise immunity is better than both TTL and ECL, with a noise margin of about 40% of the supply voltage.
Fan-out (about 50 or more) is better than both TTL and ECL.
CMOS works satisfactorily over a wide temperature range, from -55 to 125 degrees C.
It is compatible with the 5 V supply used in TTL circuits.
The nominal supply voltage ranges from 3 V to 15 V, whereas TTL supports only 5 V.

WEAKNESSES OF CMOS Technology

It is not a bipolar technology.
Some circuits are not practical to implement in CMOS.
Some designs are harder to implement.
5. Discuss the characteristics of the CMOS inverter with respect to propagation delay, from a first-order analysis and from a design perspective.

CMOS INVERTER
The inverter is truly the nucleus of all digital designs. Once its operation and properties are
clearly understood, designing more intricate structures such as NAND gates, adders,
multipliers, and microprocessors is greatly simplified. The electrical behavior of these complex
circuits can be almost completely derived by extrapolating the results obtained for inverters.
The analysis of inverters can be extended to explain the behavior of more complex gates such
as NAND, NOR, or XOR, which in turn form the building blocks for modules such as multipliers
and processors. In this chapter, we focus on one single incarnation of the inverter gate, being
the static CMOS inverter — or the CMOS inverter, in short. This is certainly the most popular at
present, and therefore deserves our special attention

STATIC CMOS INVERTER


The figure shows the circuit diagram of a static CMOS inverter. Its operation is readily understood with the aid of the simple switch model of the MOS transistor introduced in Chapter 3 (Figure 3.25): the transistor is nothing more than a switch with an infinite off-resistance (for |VGS| < |VT|) and a finite on-resistance (for |VGS| > |VT|).

CHARACTERISTICS OF THE CMOS INVERTER

Propagation Delay of the CMOS Inverter

The propagation delay of a logic gate, e.g. an inverter, is the time difference, measured at the 50% point of the input and output transitions, between the input changing and the output switching.

In the figure there are four timing parameters. Rise time (tr) is the time during a transition in which the output switches from 10% to 90% of its maximum value. Fall time (tf) is the time in which the output switches from 90% to 10% of its maximum value. Many designs instead prefer the 30% to 70% points for rise time and 70% to 30% for fall time; the convention can vary between designs.

The high-to-low propagation delay (tpHL) is the delay before the output switches from high to low after the input switches from low to high; tpLH is defined analogously for the low-to-high output transition. The delay is usually measured at the 50% point of the input and output swings, as shown in the figure.

Now, in order to find the propagation delay, we need a model that matches the delay of the inverter. As seen above, the switching behavior of the CMOS inverter can be modeled as a resistance Ron driving a capacitor CL, so a simple first-order analysis of an RC network lets us model the propagation delay.
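A minimal numerical version of that first-order RC model is sketched below. The on-resistance and load capacitance are assumed values; the 0.69RC factor comes from solving the RC step response for the 50% point, and the 2.2RC factor from the 10%-90% points.

```python
import math

# First-order RC model of the inverter output (assumed values)
R_on = 10e3      # equivalent on-resistance of the driving transistor, ohms
C_load = 50e-15  # load capacitance, farads (50 fF)

# Discharging through R_on, the output follows V(t) = Vdd * exp(-t / (R*C)).
# Solving for the 50% point gives t_pHL = ln(2) * R * C, about 0.69 * R * C.
t_pHL = math.log(2) * R_on * C_load
print(f"t_pHL ~ {t_pHL * 1e12:.1f} ps")   # ~ 346.6 ps

# Fall time between 90% and 10% of the swing: t_f = ln(9) * R * C, about 2.2 * R * C
t_fall = math.log(9) * R_on * C_load
print(f"t_f   ~ {t_fall * 1e12:.1f} ps")  # ~ 1098.6 ps
```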

7. Discuss briefly, with neat circuit diagrams, dynamic logic, latch-up and electrostatic discharge in CMOS.

DYNAMIC CMOS:
Dynamic gates use a clocked PMOS pull-up. The logic function is evaluated through two modes of operation: precharge and evaluate. In integrated circuit design, dynamic logic (sometimes called clocked logic) is a design methodology for combinational logic circuits, particularly those implemented in MOS technology. It is distinguished from so-called static logic by exploiting the temporary storage of information on stray and gate capacitances.

CIRCUIT DIAGRAM
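The precharge/evaluate behaviour can be sketched as a tiny clocked simulation. This is a behavioural toy of an assumed footed dynamic NAND-style gate, not a transistor-level model.

```python
def dynamic_gate(clk, a, b, state):
    """Toy footed dynamic gate computing NAND(a, b) on its dynamic node.

    clk = 0: precharge phase, the clocked PMOS charges the output node high.
    clk = 1: evaluate phase, the NMOS pull-down (a AND b in series) may discharge it.
    `state` carries the charge stored on the node capacitance between calls.
    """
    if clk == 0:
        return 1                      # precharge: node pulled to Vdd
    if a and b:
        return 0                      # evaluate: series pull-down discharges the node
    return state                      # no pull-down path: node keeps its stored charge

node = 1
for clk, a, b in [(0, 0, 0), (1, 1, 1), (0, 0, 0), (1, 1, 0)]:
    node = dynamic_gate(clk, a, b, node)
    print(f"clk={clk} a={a} b={b} -> node={node}")
```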
Latch up:
Latch-up refers to a short circuit formed between the power and ground rails in an IC, leading to high current and damage to the IC. In CMOS, latch-up is the creation of a low-impedance path between the power rail and the ground rail through the interaction of parasitic PNP and NPN transistors. The structure they form resembles a silicon controlled rectifier (SCR, usually known as a thyristor, a PNPN device used in power electronics). These parasitic transistors form a positive feedback loop that short-circuits the power and ground rails, which eventually causes excessive current and can even permanently damage the device.

LATCH UP FORMATION IN CMOS:

CMOS latch-up and electrostatic discharge (ESD) continue to be semiconductor quality and reliability concerns as semiconductor components are reduced to smaller dimensions. With the combination of scaling, design integration, circuit performance objectives, new applications and evolving system environments, CMOS latch-up and ESD robustness will remain technology concerns. With both revolutionary and evolutionary changes in CMOS and silicon-germanium semiconductor technologies, and changing product environments, new CMOS latch-up and ESD requirements also continue to arise in semiconductor design, device- and chip-level simulation, design verification, chip-to-system evaluation, and the need for new latch-up and ESD test specifications. Additionally, the goals of low cost, low power and radio-frequency (RF) GHz performance have led to both revolutionary and derivative technologies; these have opened new doors for discovery, development and research in the area of latch-up and ESD. Although latch-up and ESD are not a new reliability arena, new issues arise each year, making latch-up and ESD an area of continuous discovery, innovation and invention. Here, an introduction to latch-up in CMOS and BiCMOS silicon-germanium technologies is given.

CIRCUIT DIAGRAM:
ELECTROSTATIC DISCHARGE:
The electrostatic discharge (ESD) phenomenon occurs between two or more objects at different electrostatic potentials. ESD is known to be a serious problem for IC products fabricated in advanced deep-submicron and nanoscale semiconductor process technologies. In scaled-down CMOS processes, MOS devices with shallower junction depths, thinner gate oxides, lightly doped drain (LDD) structures and silicided diffusions achieve better circuit performance (higher operating speed and lower operating power), but they become weaker against ESD stress. Devices are usually damaged by ESD through the rapidly generated heat or the rapidly created strong electric field.

CIRCUIT DIAGRAM:
REFERENCES:
https://archive.org/details/ucberkeley_webcast_OmjkApDsoLc

https://ieeexplore.ieee.org/document/4016442

https://vlsiuniverse.blogspot.com/2013/03/latchup-condition-in-cmos-devices.html

https://www.researchgate.net/publication/220455926_A_review_of_CMOS_latchup_and_electrostatic_discharge_ESD_in_bipolar_complimentary_MOSFET_BiCMOS_Silicon_Germanium_technologies_Part_II_-_Latchup

https://brainly.in/question/13482977
https://techdifferences.com/difference-between-pla-and-pal.html

https://www.elprocus.com/what-are-pal-and-pla-design-and-differences/

https://www.electronics-tutorial.net/programmable-logic-devices/generic-array-logic/

http://www.fpgacentral.com/pld-types/gal-generic-array-logic#ixzz6IR6I0ihO

https://krazytech.com/technical-papers/programmable-logic-devices-pld

https://techdifferences.com/difference-between-simm-and-dimm.html

https://www.informit.com/articles/article.aspx?p=1416688&seqNum=4

https://www.enterprisestorageforum.com/storage-hardware/sram-vs-dram.html

https://www.microcontrollertips.com/dram-vs-sram/

https://techdifferences.com/difference-between-sram-and-dram.html

https://www.diffen.com/difference/Dynamic_random-access_memory_vs_Static_random-access_memory

https://courseware.ee.calpoly.edu/~dbraun/courses/ee307/F02/02_Shelley/Section2_BasilShelley.htm

https://www.student-circuit.com/learning/year2/digital-systems-design/total-power-dissipation-in-cmos-inverter/

https://www.sciencedirect.com/topics/engineering/dynamic-power-dissipation

https://www.elprocus.com/cmos-working-principle-and-applications/
