Notes Day8
Digital Design
38 Technologies for Logic Implementation
38.1 Switch logic
[Figure: a controlled switch connects a Left contact to a Right contact; its state is set by a Control terminal]
The switch connects the left contact to the right contact. The state of the switch (i.e.
open or closed) depends on what is happening at the control contact. If the control
contact is at one logic state, the switch is closed. If it is at the other logic state, the
switch is open. Now we can build an inverter:
[Figure: an inverter built from two controlled switches: when Input=1 the lower switch connects the Output to 0; when Input=0 the upper switch connects the Output to 1]
We have used two different versions of our controlled switch. One is closed when the
input is 1; the other is open when the input is 1. This arrangement steers a 0 to the
output when the input is 1, and steers a 1 to the output when the input is 0. It functions
as an inverter.
38.2 The transistor as a switch
Desirable features for a logic technology include high speed, low power dissipation, low cost, and the ability to pack large numbers of gates onto a single chip. The relative priority of these features depends on the intended application. For most applications, the best compromise is achieved by logic families based on MOS transistors. These score well on power dissipation, cost and packing density, but compromise somewhat on speed. The two most important types of MOSFET are shown below:
[Figure: circuit symbols for the n-channel and p-channel MOSFETs, each with gate (G), source (S) and drain (D) terminals]
The MOSFETs have three terminals, the gate (G), the source (S) and the drain (D).
For the n-channel transistor, the behaviour is as follows:
The region between the drain and the source behaves like a switch.
If the gate is 1, then the switch is closed, the source is connected to the drain, and
the transistor is said to be on.
If the gate is zero, then the switch is open, the source is disconnected from the
drain, and the transistor is said to be off.
The p-channel device responds to the gate in exactly the opposite manner. It is turned
on by a 0 at the gate, and off by a 1 at the gate.
38.3 The CMOS inverter
[Figure: the CMOS inverter: a p-channel pull-up transistor connects VDD to the Output, an n-channel pull-down transistor connects the Output to VSS, and the Input drives both gates]
VDD is the high voltage supply (which can be regarded as a source of logic 1’s). VSS is
the reference voltage, usually taken to be zero (and can be regarded as a source of
logic 0’s).
Whatever is connected between the output and VSS is called the pull-down network,
because it tends to pull the output down to 0. For this simple gate, the pull-down is a
single transistor. Similarly, whatever tends to connect the output to VDD is called the
pull-up network because it tends to pull the output up to 1.
The operation of the inverter is very simple. When Input=1, the pull-down is switched
on and the pull-up is off, so the output is pulled to 0. When Input=0, the pull-down is
switched off and the pull-up is on, so the output is pulled to 1.
38.4 The CMOS NAND and NOR gates
[Figure: the CMOS NAND gate: two p-channel pull-up transistors in parallel between VDD and Out, and two n-channel pull-down transistors in series between Out and VSS; A and B drive the gates]
A B A NAND B
0 0 1
0 1 1
1 0 1
1 1 0
The two pull-down transistors are in series. So there will only be a conducting path
from Out to VSS if both transistors are turned on. The pull-down transistors are both
on if A=1 and B=1.
The pull-up transistors are in parallel, so there will be a conducting path from Out to
VDD if either of the transistors is switched on. Since the p-channel pull-ups are turned
on by a 0 at their gates, this will happen if A=0 or B=0.
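To make this concrete, here is a minimal C model of the two switch networks (our sketch, not part of the original notes):

/* Model of the CMOS NAND gate described above. */
int cmos_nand(int a, int b) {
    int pulldown_on = a && b;    /* series nMOS path: conducts when A=1 AND B=1  */
    int pullup_on = !a || !b;    /* parallel pMOS path: conducts when A=0 OR B=0 */
    return pullup_on ? 1 : 0;    /* Out pulled up to 1, otherwise pulled to 0    */
}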
We can figure out how to build a NOR gate by using a similar procedure.
[Figure: the CMOS NOR gate: two p-channel pull-up transistors in series between VDD and Out, and two n-channel pull-down transistors in parallel between Out and VSS; Out = NOT(A OR B)]
A B A NOR B
0 0 1
0 1 0
1 0 0
1 1 0
38.5 Complementarity
If you look carefully at the CMOS gates above, you will see that for any combination
of inputs, either the pull-up or the pull-down is on. They are never on simultaneously
or off simultaneously. This is the principle of complementarity, which gives the
CMOS family its name. CMOS stands for Complementary MOS. Most logic families
other than CMOS will sometimes create a low resistance path between the power
rails, which causes high power dissipation, draining the battery and overheating the
chip. CMOS does not have this undesirable property, because complementarity means
that the pull-up and pull-down will not be on at the same time, no matter what the
input condition. For this reason, almost all high density digital integrated circuits are
built using the CMOS approach.
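As a quick check, this small C program (our sketch) enumerates every input combination of the NAND gate above and asserts that exactly one of the two networks conducts in each case:

#include <assert.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int pulldown_on = a && b;          /* series nMOS network   */
            int pullup_on = !a || !b;          /* parallel pMOS network */
            assert(pullup_on != pulldown_on);  /* exactly one is on     */
        }
    return 0;
}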
39 Integrated Circuit Manufacture
In this lecture we will look at how silicon is processed to create transistors, and at the
factors that affect the yield of working devices from the manufacturing process. This
will lead to some important conclusions that govern cost and shape the design process.
The key to the widespread deployment of cheap computing is the ability to place an
entire computer, or at least the major sub-systems of a computer, onto a single tiny
silicon chip. During the last three decades, enormous progress has been made in
increasing the complexity and speed of the systems that can be placed onto a chip.
An integrated circuit (IC) is a circuit fabricated on a small piece (usually called a chip
or a die) of very pure semiconductor (usually silicon). The conduction properties of
the silicon can be altered by introducing impurity atoms (dopants) in a process known
as doping. By altering the conduction properties in an appropriate pattern, various
devices can be made. Also, various layers can be deposited on top of the silicon.
Insulating layers are formed by deposits of glass (silicon dioxide), and conducting
layers are formed by metal (usually aluminium or copper).
[Figure: a packaged integrated circuit: the chip sits on a lead frame, bond wires connect it to the pins, and the assembly is encased in moulding compound]
39.2 Integrated circuit fabrication
39.2.3 Photolithography
The main stages involved in making an integrated circuit are introduced in this
section. The main idea is to have a picture of the size and shape of the feature that we
want to make. The feature could be a region of metal, insulator, or doping. This
picture is called a mask. The process used to transfer the pattern of the mask into a
feature on the silicon is a form of photographic process called photolithography. We
will illustrate the process by showing how we would make a doped region of a
particular shape.
The process starts by taking a very pure piece of silicon, and heating it up in an
oxygen atmosphere. The top surface of the silicon oxidises, forming a layer of silicon
dioxide (the stuff that common glass is made from). This oxide layer is used as a
barrier to provide protection for the silicon.
On top of this oxide we place a layer of photoresist. This is a photographic chemical
that changes its solubility properties on exposure to light.
[Figure: wafer cross-sections: a layer of SiO2 on the Si substrate, then the same wafer with photoresist deposited on top of the SiO2]
We then expose the top surface to ultra-violet light. We shield some of the top surface
by using a mask, which is opaque in places. Where the top surface was exposed to UV
light, the photoresist becomes insoluble. In the region that was shadowed by the mask,
the photoresist remains soluble. We then dissolve off the soluble photoresist using an
organic solvent, leaving an exposed silicon dioxide surface in exactly the pattern that
was present on the mask.
[Figure: UV radiation shines through the mask onto the photoresist; the developed image reproduces the mask pattern in the resist]
Next we use an acid that attacks silicon dioxide but not photoresist. We use this to
etch a hole in the oxide layer to expose the surface of the silicon. We then use a
different type of solvent to get rid of the remaining photoresist.
Finally we heat the chip up and expose its surface to a gas that contains atoms of the
dopant. The dopant atoms will soak slowly into the top of the silicon, giving rise to a
doped region that has exactly the shape of the opaque region on the mask.
[Figure: the wafer is exposed to a gaseous form of the dopant; a doped region forms in the silicon beneath the hole in the oxide]
This process is repeated many times to build up a series of layers of the desired
properties.
39.3 The need for clean rooms
The processing steps must be carried out in clean rooms. These are rooms that are
isolated from the outside world in order to prevent contamination. People who work
in these rooms must wear special clothing to prevent their dandruff, skin flakes,
exhaled moisture, and skin grease from contaminating the clean environment.
To illustrate this, imagine that we have removed the photoresist and are ready to
expose the surface to gaseous dopant. At this point a small speck of dust lands on the
surface. The dust will mask the surface from the dopants, and the doped region will be
the wrong shape.
[Figure: a speck of dust masks part of the surface from the dopant, so the doped region is interrupted under the dust]
In the wafer diagrams below, the crosses represent faults in the crystal. Now let's look
at three possible scenarios for how we can use this wafer:
1. Making a large number of small chips
2. Making a medium number of medium chips
3. Making a small number of large chips
[Figure: the same wafer divided up three ways: (1) many small chips, (2) a medium number of medium chips, (3) a few large chips; crosses mark the faulty sites]
Wherever there is a cross, that indicates a fault, and that chip will not work. Let's
calculate what percentage of our chips work (this number is called the yield of our
process).
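The notes leave the arithmetic to the reader, so the following C sketch fills it in under stated assumptions: the defect count per wafer and the chip counts are invented for illustration, and defects are assumed to land independently and uniformly at random.

#include <stdio.h>
#include <math.h>

int main(void) {
    int defects = 20;              /* defects per wafer (assumed)   */
    int chips[] = {400, 100, 25};  /* small, medium and large chips */
    for (int i = 0; i < 3; i++) {
        double n = chips[i];
        /* a chip survives one random defect with probability (n-1)/n */
        double yield = pow((n - 1.0) / n, defects);
        printf("%3.0f chips per wafer -> yield %.1f%%\n", n, 100.0 * yield);
    }
    return 0;
}

For these numbers the yields come out at roughly 95%, 82% and 44%: the same wafer and the same defects, but the larger the chip, the smaller the fraction that works.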
This leads to an important conclusion. If we want to place more and more powerful
computers onto a single chip, we need to increase the number of transistors on the
chip. But we cannot increase the size of the chip, so the only way to fit more
transistors in is to reduce the size of the individual transistors.
40 Trends in IC Power Dissipation
For the devices used in computers and smartphones, perhaps the most important
factor limiting these devices is their power dissipation. For desktop and laptop
computers, this is because overheating can cause the device to malfunction and there
are limits to how much cooling can be added to the computer within a reasonable
budget. For smartphones, tablets and laptops, this is because battery lifetime is highly
valued by potential customers, and a device with very poor lifetime won’t sell. In this
lecture we’ll look at the key factors that establish how much power is dissipated by an
integrated circuit. To do this, we will need to look at MOSFET operation.
40.2 MOSFET operation
In an n-channel MOSFET, current flow between source and drain is caused by a
continuous flow of mobile electrons. These are abundant in the n-type regions of the
source and the drain. However, the region under the gate is p-type. Mobile electrons
experience an energy barrier on trying to cross over from an n-type region to a p-type
region, and are unable to cross unless voltages are applied to the gate in a way that
reduces this barrier.
Suppose we put a logic 1 on the drain and a logic 0 on the gate, and let’s assume that
we are using a 3 V power supply:
[Figure: an n-channel MOSFET with logic 1 (3 V) on the drain and logic 0 (0 V) on the gate]
This gate voltage doesn’t provide energy to electrons to assist them to cross through
the p-region between the source and the drain. Electron flow is blocked and the device
is turned off.
Now suppose instead that we put a logic 1 (3 V) on the gate. The positive voltage on
the gate drives positive charges onto the gate metal. These
positive charges are attractive to electrons, so it becomes energetically favourable for
electrons to be in the region under the gate. The barrier to electrons moving from the
source toward the drain is therefore lowered. Electrons form a conducting channel
between the source and the drain, and the device is switched on.
The positive charge on the gate metal and the negative charge in the MOSFET
channel are separated by an insulator. This forms a parallel plate capacitor:
C = εWL / t
where ε is the permittivity of the gate insulator, W and L are the width and length of
the gate, and t is the thickness of the insulator. A capacitance C charged to a voltage V
stores an energy:
E = CV² / 2
Charge will flow from the VDD rail and the transistors' gates will charge up to a
voltage of VDD. Each transistor will store:
E = C VDD² / 2
Now if we apply a logic 0 to the gate, the charge on the gates will transfer to the VSS
rail. The energy that has been transferred from VDD to VSS is burned off as heat.
Suppose we charge and discharge many times per second at a frequency of f. The
power P dissipated by each transistor is:
P = C VDD² f / 2
The capacitance is proportional to the length L and the width W of the transistors. So
power P is proportional to:
P ∝ W L VDD² f
For a whole chip containing N transistors, the total power dissipation is therefore:
P ∝ N W L VDD² f
This is an important relation, and tells us what factors tend to increase the power
dissipation of a circuit:
N: Number of transistors on the IC
W: Width of the transistors
L: Length of the transistors
VDD: Supply voltage
fclock: Clock frequency
Simple devices, such as microcontrollers that must operate for many months from a
single battery charge, can keep their power dissipation low by using a very simple
design (low value of N) and a very modest clock frequency.
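As a back-of-envelope illustration, the following C fragment evaluates P = N C VDD² f / 2 for one set of assumed values (none of these numbers come from the notes):

#include <stdio.h>

int main(void) {
    double N = 1e9;      /* transistors switching (assumed)                */
    double C = 1e-16;    /* gate capacitance per transistor in F (assumed) */
    double V = 1.0;      /* supply voltage VDD in volts (assumed)          */
    double f = 3e9;      /* clock frequency in Hz (assumed)                */
    double P = 0.5 * N * C * V * V * f;
    printf("P = %.0f W\n", P);   /* prints P = 150 W */
    return 0;
}

With these assumptions the result is 150 W, which is the order of magnitude quoted in the next lecture for processors clocked above 3.5 GHz.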
41 Trends and Trade-Offs in Integrated Circuit Manufacture
We have seen in previous units that the number of transistors that we can fit onto a single
chip is limited by yield considerations. Miniaturising the transistors used for an
integrated circuit maximises the transistor count within a given chip size, and also
gives gains in speed and power dissipation. However, extreme miniaturisation makes
the manufacturing process very costly. In this lecture we will look at the various
factors involved in trading off these issues.
The following graphs show the progress that has been made in the miniaturisation of
microprocessor chips1. The first graph shows the number of transistors on a single
chip, and the second shows the size of the basic features (e.g. the transistors and
wires) on the chips.
The two graphs on the next page show the speed that has been achieved. The left hand
graph shows the clock frequency, and the right hand graph shows the length of the
clock cycle. In all of these categories, massive progress has been sustained in the past
thirty years. The current state of the art is that speeds of about 3.8 GHz can be
achieved using transistors about 5 nm in size and chips containing a few billion
transistors. Clock frequencies have stopped rising in recent years because the power
dissipation of a CMOS chip scales linearly with its clock frequency. Once we go
above a sustained 3.5 GHz clock speed, the power dissipated within the chip rises to
about 150W. Even with very large and powerful fans attached to the processor, it is
hard to stop the processor from overheating, causing malfunction.
1 These figures refer to the processors used in cheap PCs, e.g. Intel 80X86 / Pentium / Core.
[Graphs: microprocessor clock frequency (left) and clock cycle length (right) versus year, 1970 to 2020, on logarithmic axes]
To set up a manufacturing facility that can produce at this state of the art can cost a
few billion dollars. Once the factory has been set up, it can produce many millions of
chips per year, so the cost can be recouped quite quickly. Only the very biggest
companies, such as Intel and IBM, can afford to build their own manufacturing plants,
so most companies hire time on someone else’s manufacturing facility. Facilities that
are used mainly for manufacturing third party designs are called silicon foundries. The most
well-known silicon foundry group is TSMC, whose most advanced process currently
manufactured is 3 nm. Some very large companies choose not to construct their own
fabrication plants, relying instead on silicon foundries, so that they can focus their
efforts on the design of the integrated circuits. Examples of these “fabless” companies
include Apple (for the processors in iPhones and iPads) and AMD (the main rival to
Intel for processors that go into PCs).
Comparing the set-up costs against production volumes, it rarely makes sense to produce a special purpose
chip unless the intended production volume exceeds 100,000. This sort of production
volume is associated with markets such as PCs, mobile phones, WiFi markets, digital
TV and MP3 players. Programmable chips (e.g. microprocessors, PLDs and FPGAs)
can be produced in very large quantities, and then sold to a large number of different
customers, each of whom intends to programme their chips to perform a different
function. Designs that will be deployed in volumes of less than 100,000 will therefore
normally be implemented in a mass produced programmable chip.
42 Computer Operation
In this lecture we look at the basic ideas of computer operation and introduce the
ideas of the memory map and the fetch-execute cycle.
A program consists of instructions that operate on data. These items, either
instructions or data, expressed as binary numbers, are held in a memory.
[Figure: a memory holding Item 0 to Item 6, with Address and Data connections and R/W and Enable control lines]
We can access any location within the memory by supplying an address1. Say we
wanted to access item 5; we would supply the address 101 to the address input. If this
is a read operation, then the contents of item 5 would be sent to the data lines. If it is a
write operation, then the contents of the data lines would be stored in location 5, over-
writing its previous contents. The memory knows whether it should perform a read or
a write from the value of the R / W input. If this has a value of 1, a read is performed;
if it is 0, then a write is performed. R / W is a control line; it tells the memory what to
do. Most memories also have a second control line, Enable. If Enable=1, the memory
behaves as normal. If Enable=0, then the memory does nothing, and does not respond
to any new addresses. This is useful because sometimes activity on the address and
data busses will be destined not for the memory, but for some other device within the
computer.
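The behaviour of the control lines can be summarised in a few lines of C (a toy model; the function and array names are ours):

#define MEM_WORDS 8
static unsigned mem[MEM_WORDS];

/* rw = 1 requests a read, rw = 0 a write (the R/W line).
   When enable = 0 the memory ignores the address and data lines. */
void memory_cycle(int enable, int rw, unsigned addr, unsigned *data) {
    if (!enable) return;          /* memory does nothing                  */
    if (rw) *data = mem[addr];    /* read: location drives the data lines */
    else    mem[addr] = *data;    /* write: previous contents overwritten */
}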
Typically, the memory will contain many programs and many pieces of data.
1 This type of memory is called RAM (random access memory). Random access means that we can
access any location in memory with equal speed and ease, simply by providing a different address. This
is in contrast to sequential memory (e.g. a video tape) where an item at the beginning of the tape can be
accessed quickly, but an item at the end of the tape can only be accessed after we have fast forwarded
through the entire tape, which takes a long time.
[Figure: a memory map. The accounting program starts at address 000000, the word processor program at 057120, the game program at 091600, the payroll data at 103112 and the book text at 120562; the memory has Address, Data, R/W and Enable connections]
By simply supplying an address, and then telling the processor to start executing from
that address, our computer can become a word processor, a games machine, or an
accountancy machine.
In order to be able to step through the memory locations that correspond to the
program, the processor needs to keep track of how far it has got. This is done through
a register called the program counter. This contains the address of the instruction to
be executed. After the completion of an instruction, the program counter is
automatically incremented to contain the address of the next instruction.
42.4 What’s special about computers?
The fact that the hardware isn’t specialised to any particular function, but can in
principle be used to accomplish any task, may sound trivial but it is revolutionary in
its impact. The computer is in several important respects completely unlike any other
engineering creation.
Firstly, computer manufacturers don’t have to produce one product for the games
market, another for the accountancy market, another for the secretarial market, and so
on. Instead they can mass produce one standard product that is suitable for every
market. The huge production runs associated with manufacture of computer chips are
the reason why it is tolerable for companies such as Intel to lay out a billion dollars on
a manufacturing plant. Without the economies of scale that result from having
hardware that is suitable for every market, the advances in speed and capability of
computer hardware would have ground to a halt due to the enormous set-up costs of
an integrated circuit manufacturing plant.
[Figure: the processor connected to the memory by the address bus, data bus and control bus]
This type of computer where the processor is separate from the memory, but the
memory is unified and holds both instructions and data without distinction, is called a
von Neumann machine. The terms “Stored Program Computer” and “von Neumann
machine” are used almost interchangeably. The name comes from John von
Neumann, the Hungarian mathematician, who in 1945 became the first to publish
theoretical analyses of the capabilities of this type of machine. (Though critics point
out that the hard work of inventing such a machine had already been done by other
people a couple of years earlier.)
42.6 The instruction cycle
During the running of a program, the sequence of events is something like this1:
Firstly, the processor must get an instruction. To do this, the processor takes the
address stored in the program counter and places it on the address bus. It also
activates the appropriate lines on the control bus to activate the memory, and to
enable a read operation.
The memory responds by placing the contents of the memory location
corresponding to the address onto the data bus. This is the instruction.
The processor inspects (“decodes”) the instruction to find out what operands it
needs, and where they can be found.
In order to get the operand, the processor places an address on the address bus; it
also activates the appropriate lines on the control bus to activate the memory, and
to enable a read operation.
The memory responds by placing the contents of the memory location
corresponding to the address onto the data bus.
The above two steps may be repeated if more operands are needed.
The processor acts upon the instruction, producing a result.
The processor writes this result back into memory by placing the appropriate
address on the address bus; it also activates the appropriate lines on the control
bus to activate the memory, and to enable a write operation.
This instruction is now finished with, so the program counter is incremented in
order to point to the next instruction.
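The whole cycle can be sketched in C. The instruction format below (a 4-bit opcode, a 12-bit operand address and a single accumulator) is invented purely for illustration; real instruction sets differ:

#include <stdio.h>

unsigned mem[4096];   /* unified memory holding instructions and data */
unsigned acc;         /* a single accumulator register                */

int main(void) {
    /* a tiny program (assumed encoding: 0=HALT, 1=LOAD, 2=ADD, 3=STORE) */
    mem[0] = (1u << 12) | 100;   /* LOAD  acc from address 100    */
    mem[1] = (2u << 12) | 101;   /* ADD   contents of address 101 */
    mem[2] = (3u << 12) | 102;   /* STORE acc to address 102      */
    mem[3] = 0;                  /* HALT                          */
    mem[100] = 7; mem[101] = 35;

    unsigned pc = 0;             /* the program counter */
    for (;;) {
        unsigned instr = mem[pc];          /* fetch the instruction */
        unsigned opcode = instr >> 12;     /* decode                */
        unsigned addr = instr & 0xFFFu;    /* operand address       */
        if (opcode == 0) break;                  /* HALT              */
        if (opcode == 1) acc = mem[addr];        /* operand read      */
        if (opcode == 2) acc = acc + mem[addr];  /* execute           */
        if (opcode == 3) mem[addr] = acc;        /* result write-back */
        pc = pc + 1;             /* point to the next instruction */
    }
    printf("mem[102] = %u\n", mem[102]);   /* prints mem[102] = 42 */
    return 0;
}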
The processor normally runs at a very high clock speed; a clock frequency of 3 GHz
would be normal for a reasonable microprocessor nowadays. Inside the processor
there is one (or more) arithmetic logic unit(s) that run at the full processor clock
speed. There is also a very small amount of memory, called the registers, which can be
accessed at the full processor speed. This is used to keep local copies within the
processor of data that is held within the main memory.
The busses of the computer run at a much lower clock speed (typically a few hundred
MHz). The sequence of operations whereby the processor drives an address onto the
address lines, then acts on the data lines is called a bus cycle. A bus cycle will
typically take several cycles of the bus clock.
1 It varies slightly, depending on the type of computer, and the type of instruction being executed.
43 Computer Memory
One of the key issues in the design of computer systems is the way that data is
communicated between memory and processor. In this lecture we will look in detail at
how memories are constructed, how the processor communicates with memory, and
the impact that this has on computer design.
43.1 Memory organisation
[Figure: an eight-word memory. An address decoder takes address bits a2 a1 a0 and an Enable input and drives one of the word lines w0 to w7; the selected word connects to the data lines d7 to d0]
The simplest type of memory (static RAM) uses a latch in each memory cell, so the
memory is a bank of eight 8-bit latches, one per word. The word lines act as the enable
signals to the latches. Whichever latch is enabled will connect to the data lines d7…d0.
(This memory has used an 8-bit word; the generalisation to other word lengths is
obvious.)
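The address decoder in the figure is easy to model in C (a sketch; the function name is ours):

/* 3-to-8 address decoder: returns a byte with exactly one bit set,
   selecting one of the word lines w7..w0, or all zeros if Enable is low. */
unsigned char decode(unsigned a, int enable) {
    return enable ? (unsigned char)(1u << (a & 7u)) : 0u;
}

For example, decode(5, 1) returns 00100000 in binary, i.e. only word line w5 is driven.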
43.2 Types of Memory
Each intersection of the word line and the data line contains a small circuit. Different
types of circuit give rise to different types of memory. The key characteristics that
differentiate different types of memory are:
Static/dynamic: These are two different types of RAM. Static RAM (SRAM) stores
each data bit in a small latch. It retains its data value for as long as the power is
applied. Static memory tends to be very fast, but low density (because it takes 6
transistors to build the latch to store one bit). Dynamic RAM (DRAM) stores a data
bit as a charge on the parasitic capacitance of a transistor. It requires only 1 transistor
to store 1 bit, so we can fit a very large number of bits onto a single RAM chip.
However, it is very slow (since the capacitance is not supplying any active drive to the
signal input/output). The name “dynamic” refers to the fact that the charges stored on
the capacitances leak away (over a timescale of about 1 ms) thus losing their stored
data. In order to avoid this loss of data, DRAM chips contain internal circuitry that
automatically reads and restores (“refreshes”) the values of the memory bits at a high
enough rate to complete the refresh process before the charge has leaked away.
43.3 Cache memory
Main memory is therefore built from dense but slow DRAM, and the processor would
often be left waiting for it. This problem is solved by using a small amount of very
high speed static RAM to hold operands and instructions that the processor is likely to
need in the near future.
This is called the cache memory. The fact that it is small means that it is economic to
use the fastest type of RAM, i.e. static RAM. The system now has the following
appearance.
[Figure: the processor (CPU) and the cache memory connect through a memory controller to the main memory and its buffer, via the data, address and control busses]
When an address (and the corresponding read/write control signal) is sent by the
processor, this is not sent directly to the memory elements. Instead it goes to a
memory controller, which checks whether the required item is held in cache. If it is,
then the cache is instructed by the controller to perform the required read or write
signal directly onto the data bus. If the item is not in cache, then the controller
instructs the slower main memory to carry out the required read or write.
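In outline, the controller's check might look like the following C sketch. The notes do not specify the cache organisation, so a direct-mapped cache with 64-byte lines (matching the transfer size used in the example later) is assumed:

#include <stdbool.h>
#include <stdint.h>

#define LINES      1024   /* number of cache lines (assumed) */
#define LINE_BYTES 64     /* bytes per cache line (assumed)  */

struct line { bool valid; uint32_t tag; uint8_t data[LINE_BYTES]; };
static struct line cache[LINES];

/* Returns true if the addressed item is currently held in the cache. */
bool cache_hit(uint32_t addr) {
    uint32_t index = (addr / LINE_BYTES) % LINES;  /* which line to inspect */
    uint32_t tag = addr / (LINE_BYTES * LINES);    /* identifies the block  */
    return cache[index].valid && cache[index].tag == tag;
}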
As long as all the data that the program needs is stored in the cache, then execution
can proceed quickly. However, the computer system can’t know for sure at the outset
of execution what data will be needed as the program progresses. So occasionally the
program will reach a point where the required data has not been loaded into the cache.
This is called a cache miss, and the processor will have to stop and wait whilst the
required data items are loaded from the slow main memory into the high speed cache.
43.4 Multilevel caches
Sometimes it is too expensive to put a large cache onto the processor die, so there is
another cache (bigger but slower) on the external memory bus, which is called the
level 2 cache.
[Figure: the level 2 cache sits on the processor-memory bus, between the processor and main memory]
On modern systems, a level 2 cache (and perhaps a level 3 cache) is also integrated
into the processor chip.
43.5 Locality of reference
Computer programs that have long run times tend to spend most of their time in
loops operating on arrays or similar data structures. Here is a simple example:
/* Compute total cost of 100 32-bit items */
int item[100];   /* the item costs, assumed to be filled in elsewhere */
int total = 0;
for (int i = 0; i < 100; i++) {
    total = total + item[i];
}
When the program is compiled, the array will be laid out in memory as a sequence
of bytes. Each integer in the array is 32 bits in size, and thus requires 4 bytes to store.
Suppose the base address of the array is 1000. Then the elements of the array are
stored in memory like this:
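item[0] occupies addresses 1000–1003
item[1] occupies addresses 1004–1007
item[2] occupies addresses 1008–1011
…
item[99] occupies addresses 1396–1399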
When the program runs it will step through an array fetching one item at a time.
When we process item 0, it is not in cache and has to be fetched (slowly) from main
memory. The main memory system will transfer not only the data item that was
requested, but will also shift the surrounding 64 bytes into cache at the same time.
When we come to access items 1, 2 through to 15 in the following iterations of the
loop, they are all in cache by the time we need them. They are therefore accessed
quickly and we see a substantial gain in speed.
43.6 Modern RAM architecture: SDRAM
In a modern SDRAM chip the bit cells are arranged in a large array. First, 16 bits of
the address are applied to select one of the 65536 rows, and the whole row of
1024 cells is read into the output buffer. On the next cycle, 10 bits of the address are
applied to select one column's data bit, and this bit is then written or read. In this way,
only 16 address pins are needed on the chip. The chip will contain another 7 similar
arrays, each using the same addresses, to produce bits d7 to d0 of the data byte and
give a 64 Mbyte memory chip.
[Figure: an SDRAM array. A 16-bit row address selects one of rows 0 to 65535; the selected row of 1024 bit cells is read into the output buffer; a 10-bit column address then selects one data bit from columns 0 to 1023]
A very important feature of this arrangement is that although the memory array
responds very slowly, the output buffer is constructed from standard electronics and is
very fast. When we read one bit from the array (rather slowly), its 1023 neighbours
are simultaneously made available in the very high speed output buffer. The output
buffer can then burst these neighbouring data items out across a high speed bus into
the cache. So although the latency of RAM may be poor, the throughput can be high.
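In C, the two halves of the multiplexed address can be extracted as follows (a sketch based on the 16-bit row / 10-bit column arrangement in the figure; which bits form the row is our choice for illustration):

/* Split a 26-bit cell address into the row and column parts that are
   applied to the chip's 16 address pins on successive cycles. */
unsigned row_addr(unsigned addr) { return (addr >> 10) & 0xFFFFu; }  /* 16 bits */
unsigned col_addr(unsigned addr) { return addr & 0x3FFu; }           /* 10 bits */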
44 Managing Data Movement in a Computer
In this lecture we will look at how the flow of data around a PC is organised. This will
lead us to the idea of the PC chipset: a collection of bridges and associated devices
that manage the routing and flow control of data.
[Figure: a hub-based PC architecture: the CPU, memory, input devices, output devices and storage all connect to a central hub]
The hub also performs the functions of a bridge. A bridge is a device that connects
two busses together. The bridge “translates” from the requirements of one bus to the
requirements of another. So, for example, the bridge may provide buffering in order to
temporarily store data that builds up in the bridge due to mismatch between the speed
of the input and output bus. The bridge will probably also provide some form of flow
control, i.e. a way of telling a bus that is delivering too much data too quickly that it
must temporarily suspend the transfer until the receiving bus has managed to catch up.
Also, the different busses are likely to be using different methods (“protocols”) to
attract attention from other devices, and to set-up transfers. The bridge will provide
translation between the protocols of the different busses.
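A toy C model of the buffering and flow control described above (entirely our sketch; the sizes and names are invented):

#include <stdbool.h>

#define BUF_SIZE 64
static unsigned buf[BUF_SIZE];   /* the bridge's internal buffer */
static int head, tail, count;

/* Flow control: the bridge tells the fast bus to pause when nearly full. */
bool bridge_can_accept(void) { return count < BUF_SIZE - 4; }

/* The fast input bus deposits a word into the buffer. */
void bridge_put(unsigned w) {
    buf[head] = w;
    head = (head + 1) % BUF_SIZE;
    count++;
}

/* The slow output bus drains one word, if any is waiting. */
bool bridge_get(unsigned *w) {
    if (count == 0) return false;
    *w = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    count--;
    return true;
}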
Historically bit-parallel has been seen as the desirable way to operate as it takes just
one clock cycle to transfer an entire data word. However, the movement away from
shared bus architectures to hub-based architectures means that using a full 32 bits for
every link into and out of the hub gives a very high number of bus wires. This has
made bit-serial communication more desirable.
Another issue that has increased the use of serial busses is that parallel busses are very
difficult to operate reliably at speeds much above a few hundred MHz. This is
because all bits on a parallel bus must maintain exact synchronisation as they move
from one end of the bus to the other. If some bits arrive much earlier than others, we
may end up with the bits from one data word getting muddled with bits from the
following data word. The longer the wires are, the greater this problem becomes.
Furthermore, if a bus has to go around a corner then the outer wires will have further
to travel than the inner wires so they will arrive at the destination later. So very high
speed bit-parallel busses are nowadays only used for very short straight
communication pathways with very predictable electrical loads. This effectively limits
them to links between processor and main memory.
By contrast, serial busses can operate at extremely high clock speeds: 8-16 GHz is
quite normal nowadays. Most devices other than memory are connected using some
form of high speed bit-serial bus.
[Figure: the classic PC architecture: the CPU connects over the frontside bus to the north bridge, which serves memory; an internal bus links the north bridge to the south bridge, which connects the peripheral bus and peripheral slots, disk, USB connectors, the network connector, and a legacy bus for slow devices and the boot ROM]
The high-speed bridges and memory controller have been integrated into a single chip
called the North Bridge (also known as the Memory Controller Hub). Another chip,
the “South Bridge” (also known as the IO Controller Hub) acts as the link between the
lower speed busses. The whole architecture revolves around these two bridges, which
control the movement of data around the computer. The caches are contained
completely inside the processor chip.
The motherboard or main board of a computer is a printed circuit board that carries
all the wires associated with the busses, and also the chips that implement the bridge
circuitry. The bridge chips are often referred to as the chipset of the motherboard.
Different manufacturers make slightly different chipsets, with slightly different
capabilities and performance. The motherboard contains many slots that can be used
to plug in devices (such as graphics cards, Wi-Fi cards, etc.). The motherboard also
has connectors that protrude through the computer case to provide sockets for USB or
network cables to be plugged in.
The overall speed of the computer is limited by the amount of different traffic
(memory, graphics, I/O) that must pass through the front side bus. The FSB therefore
becomes a bottleneck that limits overall system speed.
Index
38 Technologies for Logic Implementation
38.1 Switch logic
38.2 The transistor as a switch
38.3 The CMOS inverter
38.4 The CMOS NAND and NOR gates
38.5 Complementarity
42 Computer Operation
42.1 The stored program concept
42.2 Memory mapped devices
42.3 Logical and Physical addresses
42.4 What’s special about computers?
42.5 Computer organisation
42.6 The instruction cycle
42.7 The sequence of operations
42.8 Timing of operations
43 Memory Hierarchies
43.1 Memory organisation
43.2 Types of Memory
43.3 Cache memory
43.4 Multilevel caches
43.5 Locality of reference
43.6 Modern RAM architecture: SDRAM
44 Managing Data Movement in a Computer
44.1 Evolution of PC architecture
44.2 Shared bus architectures
44.3 Hub-based architectures
44.4 Parallel bus or serial bus?
44.5 The classic north bridge/south bridge architecture
44.6 Current PC architecture