PC Upgrade and Maintenance
Electrostatic discharge (ESD) is caused by the buildup of electrical charge on one surface that is
suddenly transferred to another surface when the two touch.
This discharge is typically several thousand volts, but it carries very little current, which is why it
doesn't kill you the way high-tension lines carrying several thousand volts can. It can, however,
certainly kill your computer components.
One way to reduce the buildup of ESD is to increase the relative humidity of the room where the
computer is located because ESD builds up more in dry environments than in moist ones.
Another way to reduce static is to avoid the well-known things that cause it: shuffling across
carpeted floors in socks, and so on.
Draining it away
Draining static is usually a simple matter of touching something that is grounded, such as the metal
of your case when it is plugged in. This will drain off any static buildup in your body that might
cause damage to your components.
Power Fluctuations
The power supply is one of the most important, but also most ignored pieces of a computer.
The power supply has to work hard to provide a constant and stable level of electricity to the
devices in your computer without fluctuations. It has to be strong enough to feed all the devices in
your machine, and in some cases it has to be approved to work with certain parts of your PC such as
an Athlon CPU.
A power supply doesn't last forever; sooner or later it will fail. It can last anywhere from a few
months to many years, depending on its quality, how hard it has to work, and the conditions it is
exposed to (temperature changes, poor-quality electricity, dirt, etc.).
The component inside a power supply that is prone to fail first is the fan. It usually starts with a
grinding or high-pitched noise that initially disappears a few minutes after you turn the PC on, but
soon gets worse.
Once the fan is dead, the hot air is not being properly exhausted from the power supply which
causes it to overheat and accelerates its demise.
In addition, often the power supply fan also exhausts hot air from the inside of the computer, and if
the fan fails, you lose an important part of cooling.
Warning: Don't try to replace the power supply fan yourself unless you know what you're doing!
It requires some soldering and should only be done by somebody who is familiar and comfortable
with such a procedure. I would rather consider replacing the whole unit with a better-quality one.
A power supply failure can result in a system crash, data corruption, or hardware failure. Another
possibility is that when you turn on your PC, the lights and fans come on, but it doesn't boot,
because the BIOS cannot verify that a sufficient and consistent power flow is established before it
continues the Power-On Self-Test (POST) and the boot process.
The PC does not boot at all if the power supply is completely dead: nothing happens when you push
the power button.
Power Surges
The low voltages a computer actually requires are derived from the 240 V mains supply, but due to
disturbances, distant lightning strikes, and problems within the electrical grid, a voltage spike may
occasionally come down the line, i.e. a temporary increase of voltage that can last just a few
thousandths of a second, say an increase from 240 to 1,000 volts or even higher.
Most computer power supplies are subjected to many of these surges each year and, as with line
noise, most of the better ones can tolerate them to some extent, though the surges are not good for
their internal components.
In some cases, high voltage surges can disrupt or even damage your computer equipment.
In addition, being subjected to many surges over a period of time will slowly degrade many power
supply units and cause them to fail prematurely.
A child playing the keyboard like a piano, a power surge, lightning, a flood, or simple equipment
failure are all ways data can be lost, but by regularly backing up your files and keeping the copies in
a separate place, you can be confident of getting them back if any of these situations occurs.
Deciding what to back up is highly personal; anything you cannot replace easily should be at the top
of your list. Making a checklist of files to back up will help you determine what to back up and will
also give you a reference list in the event you need to retrieve a backed-up file.
The Backup tool in Windows XP helps you protect your data in case your hard disk fails or files
are accidentally erased. Backup creates a duplicate copy of all the data on your hard disk and then
archives it on another storage device, such as a second hard disk or a tape.
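To make the checklist idea concrete, here is a minimal backup sketch in Python (not the Windows XP Backup tool itself); the paths in CHECKLIST and BACKUP_ROOT are hypothetical placeholders that you would replace with your own files and backup drive.

# Minimal backup sketch (Python 3.8+): copy a checklist of files and folders
# to a dated folder on a separate drive. All paths below are assumed examples.
import shutil
from datetime import date
from pathlib import Path

CHECKLIST = [Path("C:/Users/Me/Documents"), Path("C:/Users/Me/Pictures")]
BACKUP_ROOT = Path("E:/Backups")  # e.g. a second hard disk or external drive

def run_backup():
    target = BACKUP_ROOT / date.today().isoformat()   # one folder per day
    target.mkdir(parents=True, exist_ok=True)
    for item in CHECKLIST:
        dest = target / item.name
        if item.is_dir():
            shutil.copytree(item, dest, dirs_exist_ok=True)  # copy whole folder
        elif item.is_file():
            shutil.copy2(item, dest)                         # copy file + metadata
        else:
            print(f"Skipping missing item: {item}")

if __name__ == "__main__":
    run_backup()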
VIRUS
A virus is a piece of code that attaches itself to an executable file and is not activated until the
executable file is launched.
Viruses are intentionally created programs whose purpose is to cause some effect in the computer
and to replicate themselves so they can be passed on to other computers. The effect that a virus has
on a computer is called its payload.
A virus payload might not be destructive to the computer; the purpose might be merely to display a
particular message, run a video clip, or change the display colors.
However, if a payload is destructive, it can delete files, close running applications, or destroy a
drive’s master boot record.
WORM
A worm, on the other hand, is a program in itself and does not need to attach itself to a legitimate
application in order to run. Viruses are typically more common than worms.
Virus categories
FILE VIRUS
This is the most common virus type. It hides itself in executable files; when the executable file is
run, the virus is activated.
MACRO VIRUS
These viruses attach themselves to portions of applications and disguise themselves as macros.
A macro is an automated process within an application, such as reading and automatically updating
a date field or searching for and formatting specified text.
BOOT SECTOR VIRUS
This type of virus hides itself in the MBR (master boot record) and is activated during startup
when the MBR is located and initialized.
A master boot record (MBR) is a type of boot sector, a data sector at the beginning of many types
of computer mass storage. It is found mainly on disk drives large enough to be partitioned, so it is
not usually present on floppy disks or small thumb drives.
When a virus is introduced into a computer system, it typically replicates (copies) itself into
memory. From there, it can copy itself into other files in the system. This is an intentional behavior,
configured or designed by the programmer who created the virus. These copies of the virus can
then be spread via floppy disks, downloading files from the Internet, or executing e-mail
attachments that launch a host program, such as a word processor.
1. You can minimize the spread of viruses by using antivirus programs that scan all new files
introduced into the computer system.
2. You should scan all files on floppies that have been used in other computers.
3. You should scan all e-mail messages with attachments.
4. You should scan all files that you download from the Internet.
Unfortunately, even if you take all the precautions we’ve mentioned, you are not immune to
computer viruses.
New viruses are created all the time and may be too new for your antivirus utility to detect. When
the computer starts behaving erratically, or begins to unexpectedly crash, close or launch
applications, or lose files, you should suspect a virus and begin troubleshooting the problem
immediately. If you have an antivirus utility, run it and instruct it to perform a virus scan and
removal.
A variety of antivirus utilities are available from third parties, such as Symantec (Norton) and
McAfee. Windows 2000 includes a native antivirus utility called AVBoot.
In most cases, antivirus utilities work by recognizing and removing specific viruses. They are
typically useless against viruses created after the release of the utility itself; for this reason, most
third-party antivirus manufacturers keep an up-to-date list of new viruses and offer upgrades via
the Internet. It is therefore important that you update your antivirus utility's capabilities often.
Furthermore, if you stay current about new virus types, you are likely to recognize them more
quickly if they are introduced into your system.
If you are unable to remove a virus before it causes severe damage, you will probably have to
reinstall the OS from scratch. It is important in these cases to repartition and reformat the hard
drive, because viruses could still exist on the drive (especially in the boot sector).
New PC’s are not built neither is a system upgrade done unless we get at least a factor of three or
four more in performance at a reasonable cost. for example, a 600 MHz system wouldn’t be
changed except a more reasonably-priced systems can be purchased that run at 1.8 GHz to 2.4 GHz
or faster.
A 2 GHz system wouldn’t be replaced until a reasonably-priced system could be purchased that
runs at 6 GHz or higher, hence we do upgrade for a better, faster and more efficient performance.
The longer you can delay upgrading, the more you’ll get for your money when you finally do
upgrade.
It won’t be a good idea to upgrade from a 1.7 GHz system to a 2 GHz one. The only exception is
when software you want to run demands a better system.
Explanation
Maybe you want to play a video-intensive game and your system won't allow it. Or maybe you
decide to study database development, install Oracle 9i on your computer, and find you need a
faster PC. Possibly you decide to produce music videos on your PC, and you find that the best
video-editing software runs much better on a faster system. But unless the software you want to
run demands a faster, better system, you'll probably do well to postpone an upgrade or a new build
until you can get a factor of three better overall performance.
The components most often upgraded are the RAM, the CPU, and sometimes the mainboard, video
card, or hard disk. Sometimes we also need to upgrade the software on our PC, meaning we install a
newer version of a specific operating system or other applications. This becomes a must, since some
newer applications won't run under an old operating system, and some files created with newer
versions of an application won't open in older versions of the same application.
Examples:
Office 2003 can't be installed on MS Windows 98 or previous versions. Also, a PDF file generated
with a new version of Adobe Acrobat Writer won't be correctly read using an earlier version of
Adobe Acrobat Reader.
RECENT ADVANCEMENT
A PC with a “classic” Pentium 166MHz CPU will run faster than a PC with a “classic” Pentium
120MHz CPU. Naturally, upgrading the CPU will improve system processing, but you can't just place
any old CPU in the CPU socket and expect the motherboard to work; every motherboard is limited
to using a handful of current CPU versions.
For example, Intel's AN430TX motherboard supports Pentium processors at 90, 100, 120, 133, 150,
166, and 200MHz, as well as Pentium MMX processors running at 166, 200, and 233MHz. By
comparison, Intel's NX440LX motherboard supports Pentium II microprocessors operating at 233,
266, and 300MHz. Changing the processor type and speed requires changes in several jumper
settings.
ADDING MEMORY
Memory slots: the sheer amount of memory that can be added to the motherboard will indirectly
affect system performance because of a reduced dependence on virtual memory (a swap file on the
hard drive).
Memory is added in the form of SIMMs (Single In-line Memory Modules) or DIMMs (Dual In-line
Memory Modules). Motherboards that can accept more or larger-capacity memory modules will
support more memory. It is not uncommon today to find motherboards that will support 512MB of
RAM (equal to the storage capacity of older hard drives).
MEMORY TYPES
The type of memory will also have an effect on motherboard (and system) performance. Faster
memory will improve system performance.
DRAM (DYNAMIC RANDOM ACCESS MEMORY) remains the slowest type of PC memory, and is
usually used in older systems or video boards.
EDO RAM (EXTENDED DATA OUTPUT RAM) is faster than ordinary DRAM, and is now
commonplace in PCs.
SDRAM (SYNCHRONOUS DYNAMIC RAM) is measurably faster than EDO RAM, and is appearing in
high-to-mid-range PC applications.
RDRAM (RAMBUS DYNAMIC RAM) is an emerging memory type that should gain broad
acceptance in the next few years. It is not necessary for you to understand what these memory
types are yet; just understand that memory performance and system performance are related.
CACHE MEMORY
Traditional RAM is much slower than a CPU—so slow that the CPU must insert pauses (or “wait
states”) for memory to catch up. Cache is a technique of improving memory performance by keeping
a limited amount of frequently used information in very fast cache RAM.
If the needed information is found, the CPU reads the cache at full speed (and performance is
improved because less time is wasted). By making the cache larger, it is possible to hold more
“frequently used” data.
CHIPSET
A chipset is a set of highly optimized, tightly inter-related ICs which, taken together, handle virtually
all of the support functions for a motherboard. As new CPUs and hardware features are crammed
into a PC, new chipsets must be developed to implement those functions.
For example, the Intel 430HX chipset supports the Pentium CPU and EDO RAM. Their 430VX chipset
supports use of the Pentium CPU.
Upgrading is a term used to describe updating a software program or adding new hardware.
A software upgrade allows a user to get the latest version of a software program at a discounted
price rather than having to purchase the full product. For example, a user running Microsoft
Windows 95 could purchase the Microsoft Windows 98 upgrade for a low price compared to the full
version of Windows 98.
A hardware upgrade often involves removing an old hardware device and replacing it with a new
hardware device. For example, replacing an 8MB PCI video card with a 32MB AGP video card would
be considered an upgrade. A hardware upgrade such as a memory upgrade may not require a user
to remove the memory from the computer because of the availability of additional expansion slots.
BENEFITS
1. Performance increase: The majority of hardware upgrades are performed to increase the
performance of the computer.
2. Capacity increase: Users may upgrade or add a new device to increase the overall capacity of the
computer. For example, adding a new hard drive allows the computer to store more information,
and adding memory increases the number and size of programs that can be open at once while also
improving performance.
3. Compatibility: A user may upgrade one or more components in their computer to be able to run
or use a software program.
STEPS TO UPGRADING A PC
1. OPEN THE CASE OF THE PC: In order to upgrade a PC, the first step is to open the case
and inspect the internal components. Some cases are opened using Phillips screwdrivers,
others using flat-blade screwdrivers. The computer case holds all the internal parts of your PC.
Many case variations are available, including tower cases, mid-tower cases, and desktop
models.
4. CHECK AND VERIFY THAT THE NEW COMPONENT MEETS YOUR REQUIREMENTS: For
example, if you replaced a VGA card in order to run some application, then the first thing to do is to
test whether that application really runs using the new card.
ENCLOSURE
The enclosure is the most obvious and least glamorous element of a PC, yet serves some
very important functions. Firstly, the enclosure forms the mechanical foundation (chassis)
of every PC. Every other sub-assembly is bolted securely to this chassis. Secondly, the
chassis is electrically grounded through the power supply. Grounding prevents the buildup
or discharge of static electricity from damaging other sub-assemblies.
Whenever you work inside of a PC, be sure to use a good anti-static wrist strap to prevent
electrostatic discharge from your body from accidentally damaging circuitry inside the
system. If you do not have an anti-static wrist strap handy, you can discharge yourself on
the PC's metal chassis. A typical enclosure offers one or two drive bays accessible from the
front (external drive bays), and room for one or two drives mounted inside the PC (in
internal drive bays).
An average-sized enclosure allows a fair amount of space to expand the system as your
customer's needs change. Unfortunately, the push toward smaller PCs has led to the use of
smaller, more-confined enclosures. Small (or low-profile) enclosures restrict the size of the
motherboard, which results in fewer expansion slots (usually 4 to 6), and allow room for
only 1 to 3 drives.
The great advantage of tower enclosures is their larger physical size. Towers usually offer
4 or 5 external drive bays, as well as 3 or 4 internal bays. To accommodate such
expandability, a large power supply (250 to 300 watts) is often included. Tower cases can
also fit larger motherboards, which tend to support a greater number of expansion slots.
The higher power demands of a tower system result in greater heat generation but towers
compensate for heat by providing one or more internal fans to force air into the enclosure.
If a second internal fan is included, it generally works in conjunction with the first fan to
exhaust heated air. For example, you'll often find tower systems with two fans: one in the
lower front to force in cooler air, and the other at the upper rear to exhaust heated air. If
only one fan is used, it will usually be located in the upper rear of the chassis to exhaust
heated air.
Desktop and tower PCs incorporate seven key items: the enclosure, the power supply, the
motherboard, a floppy disk drive, a hard disk drive, a video adapter, and a drive controller.
The following sections detail each item.
Most common cases are tower or mid-tower. Most builders also prefer the ATX form factor.
Smaller cases have a smaller footprint and they save space. However, larger cases offer
more room for expansion and working inside a larger case is easier. It is recommended that
you choose a quality mid-tower or full-tower ATX case for your first PC build because these
cases are designed to be paired with any ATX mainboard.
A standard ATX case gives you a full range of upgrade options to newer, more powerful
mainboards. This standardization of components, which allows easy upgrades, is one
advantage of building a PC rather than buying one.
Power supplies come with most cases today. The power supply has many power connectors
to power the mainboard, hard drives, CD-RW drives, and other components. Ensure that
your case and power supply match the type of mainboard you want to install. This usually
means purchasing an ATX-style mainboard and case. Be sure your case supports a full ATX
mainboard.
POWER CONNECTORS
Most mainboards today are ATX style. You can identify an ATX power supply and case by
looking for an ATX power connection.
Most power connectors today are made so that they can only be plugged in one way. The
twenty-pin ATX power connector, which provides power to the ATX mainboard, is one
example: like most important power connectors, it is designed so that it can only be plugged
in one way. This prevents plugging the connector in backward and damaging components by
putting too high a voltage on a pin that is not designed to take it.
Newer ATX power supplies also have a special four-pin power connector, which is used
with Pentium 4 mainboards. If you’re installing an AMD Athlon, you won’t need this special
four-pin connector. Just leave it disconnected. If you’re building a Pentium 4 system, be sure
your power supply has the necessary 4-pin power supply connector in addition to the
standard ATX power supply connector. All newer cases will have it.
If your power supply ever needs replacement, you can keep the case and just purchase a
new ATX power supply. As a general rule, most cases will have several extra power
connectors which will remain unused when your system is built. Just tuck the unneeded
power connectors out of the way when you close up your PC case. They don’t all need to be
connected to something. If you later add another hard drive or a DVD player, for example,
you’ll use one of the remaining power connectors to supply power to it. If you run out of
power connectors (unlikely), you can purchase Y Splitters which are small cables designed
to give you more than one power connector from the existing power supply connection. It’s
just like purchasing a power strip that plugs into your wall outlet and provides six or eight
new outlet sockets.
Similarly, if you find out that some component needs a unique power connection that isn’t
provided for from your existing power supply connections, you can purchase a Y splitter or
an adapter which will give you the specific connector you need.
This is relatively rare, as most modern power supplies offer more than enough power
connectors. There are also extension adapters which give power supply cables more length.
You probably won't need these either, unless you install a new power supply in a large case.
Other connectors from the case don’t supply power, but they connect the front panel of the
computer case to the mainboard. These connectors are thin wires with little connectors on
the ends that plug into pins on the mainboard.
For example, to turn the computer on and off, there is an on-off switch on the case. The
small Power SW wire connects the power button on the case to the mainboard to let the
mainboard know when you want the PC to turn on or off.
This small two-pin connector may be plugged in either direction on the mainboard.
Basic switches can usually be installed in either direction, because they are designed simply
to open or close a circuit, so the orientation of the two pins doesn't usually matter.
Examine your mainboard manual carefully to determine the proper pins to connect these
thin-wire case panel connectors to. Also examine your mainboard carefully before installing
it in the case, because you’ll often have a better view of the pins when the mainboard is out
of the case. Usually, a row of many pins will be provided on the mainboard. It’s easy to plug
the little fellers on the wrong pins if you don’t pay attention to the mainboard manual.
Thin-Wire Connectors
Most of these other small, thin-wire connectors are also ambidextrous. The thin-wire
connectors typically include:
Power Switch (P SW): This can be connected in either direction to the proper two pins on
the mainboard. It turns the computer on and off.
Reset Switch (Reset): This can be connected in either direction to the proper two pins on
the mainboard. If Ctrl+Alt+Del doesn’t work to reboot your hung-up PC, you can always use
the reset switch to restart your computer. There should be a small reset button on the front
of your case. Using the reset switch is more desirable than turning a PC on and off again
rapidly. Always wait a couple of minutes after turning a PC completely off before turning it
on again. This prevents a surge of current and charge from hitting components that may not
have drained their existing charge yet.
Power LED: LED stands for Light-Emitting Diode. These are the little blinky things on the
front of your computer case. LEDs light up when a small current passes through them in the
correct direction. The power LED goes on when the system is powered up. The small
current to light the LED is provided by the mainboard.
HD LED: This front case panel LED blinks when the hard drive is active. If this connector is
installed in the wrong direction, your computer will work fine except your hard drive LED
probably won’t light up or it will remain on rather than blinking with activity. If you notice
that it isn’t working, just reorient the connector.
SPEAKER CONNECTION
This connects the small case speaker to the mainboard. Those front panel connectors that
aren’t ambidextrous (such as the hard drive LED, which lights up on the front panel to show
activity on the hard drive) won’t damage your system if they are hooked up backward.
These thin-wire connectors to the mainboard aren’t supplying power to the mainboard.
The ATX power supply also typically provides a small current to the mainboard even when
the computer is off. So you should always disconnect the power supply cord before
upgrading your PC or working on its internals. Or, turn off your power strip or
uninterruptible power supply (UPS) that your computer is attached to before working on it.
The ATX power supply also usually provides a power switch at the back of the PC, labeled
“O” for off and “1” for on. But, it’s best if the power is off before reaching the PC power cord.
ATX mainboards often have an LED on the mainboard which will remain lit all the time,
even when the PC is turned off. This lets you know there is power to the mainboard and,
hopefully, reminds you to unplug the power cord before proceeding further.
Inserting and removing parts on an ATX mainboard that has power can damage
components.
Plugging your PC into the wall outlet or UPS will be the last step in building your PC. I
recommend you purchase a UPS to protect your new PC from electrical surges. At today's
prices, a UPS is a great purchase. If power fails, the UPS will give you time to shut down your
system properly. Do not plug your power supply cord into an outlet until you have
assembled your PC.
The older AT case style is outdated. Connections from the power supply differ between the
ATX and AT style. Older AT cases will not work with a newer ATX mainboard. (You can buy
adapters to convert AT power to ATX power. But, I’d recommend against this, because with
your newer components, you’ll probably want a bigger and more stable power supply
anyway.)
Your case and mainboard will probably be based upon the ATX style. But, if you ever need to
repair or upgrade an older AT style, it’s very important to be sure that the two AT power
connectors are connected with the black wires toward the middle of the two connectors.
This is one of the few power connectors that can be assembled incorrectly causing damage.
You don’t need to worry about this with the ATX style cases. If you’re working with new
PCs, you’ll probably never use the older AT style power connectors.
POWER SUPPLIES
The power supply (sometimes abbreviated PSU, for power supply unit) is typically located at the
back of the computer's interior. It is a device or system that supplies electrical or other types of
energy to an output load or group of loads. The power supply is the silver box usually located in the
rear right quarter of the enclosure.
FUNCTIONS OF THE PSU
1. It is responsible for converting the alternating current (AC) voltage from wall outlets into the
direct current (DC) voltage that the computer requires. The power supply accomplishes this task
through a series of switching transistors, which gives rise to the term switching mode power
supply.
2. The power supply ensures that the computer receives the proper amount of voltage. Computers
require comparatively small voltages: ±12, ±5, or ±3.3 V DC (volts DC). The computer's power
supply removes the excess voltage and dissipates it in the form of heat. This build-up of heat can
cause computer components (including the power supply itself) to fail. Therefore, the power supply
has a built-in fan that draws air through the computer case and cools off the components inside.
3. Power supplies sustain a great deal of electrical stress in normal everyday operation.
AC enters the supply through the AC line cord, which is connected at the rear of the enclosure. The
supply then produces a series of DC outputs that power the motherboard and drives. The
importance of a power supply is easy enough to understand, but its implications for system
integrity and expandability might not be as obvious.
The conversion of AC into DC results in substantial heat, which is why so many power supplies are
equipped with a cooling fan. Surges, spikes, and other anomalies that plague AC power distribution
also find their way into PC power supplies, where damage can occur. The quality of a power
supply's components and design dictates how long it will last in operation. A quality
supply will resist power problems and tolerate the rigors of normal operation, but a sub-standard
supply can fail spontaneously after only a few months of operation. When replacing or upgrading a
power supply, be sure to choose a reliable model. Power supplies also limit a system's
expandability. Every element used in the PC requires a certain amount of power (marked W, for
watts). The supply must be capable of producing enough power to adequately meet the system's
demand. An under-powered supply (typical in low-profile systems) or a supply overloaded by
excessive expansion (which frequently occurs in tower systems) might not be able to support the
power needs of the system.
When replacing a power supply, be certain that the new supply can provide at least as much power
as the supply being replaced. When upgrading a supply, choose a supply that offers at least 50 watts
more than the original supply. Power supply assemblies are generally regarded as extremely safe
because it is virtually impossible to come into contact with exposed high-energy circuitry. Still,
exercise care and common sense whenever working with a running power supply.
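As a rough illustration of matching a supply to the system's demand, the sketch below simply sums assumed wattage figures for each component and checks for the extra headroom suggested above; the numbers are illustrative guesses, not measured ratings for any real part.

# Rough power-budget sketch: total the (assumed) draw of each component and
# check that the supply leaves headroom for future expansion.
COMPONENT_WATTS = {            # illustrative figures only
    "motherboard + CPU": 90,
    "RAM": 10,
    "video card": 35,
    "hard drive": 15,
    "CD-RW drive": 20,
    "fans and misc.": 10,
}

def check_supply(supply_watts, headroom_watts=50):
    demand = sum(COMPONENT_WATTS.values())
    print(f"Estimated demand: {demand} W, supply: {supply_watts} W")
    if supply_watts >= demand + headroom_watts:
        print("OK: the supply has adequate headroom.")
    else:
        print("Warning: the supply may be under-powered for this system.")

check_supply(250)   # a typical tower supply of the era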
GENERAL DESCRIPTION
The complete range of power supplies is very broad, and could be considered to include all forms of
energy conversion from one form into another. Conventionally though, the term is usually confined
to electrical or mechanical energy supplies. Constraints that commonly affect power supplies are
the amount of power they can supply, how long they can supply it for without needing some kind of
refueling or recharging, how stable their output voltage or current is under varying load conditions,
and whether they provide continuous power or pulses.
This term covers the main power distribution system together with any other primary or secondary
sources of energy such as:
• Batteries
• Solar power
• Conversion of another form of electrical power into the desired form (typically converting 120 or
240 volt alternating current supplied by a utility company into low-voltage direct current for
electronic devices) Low voltage, low power dc power supply units are commonly integrated with
the devices they supply, such as computers and household electronics.
Motherboard
A motherboard is a printed circuit board used in a personal computer, also known as the
mainboard and occasionally abbreviated to mobo or MB. The term mainboard is also used for the
main circuit board in this and other electronic devices.
A typical motherboard provides slots for one or more of the following: CPU, graphics card, sound
card, hard disk controller, memory (RAM), and external peripheral devices.
All of the basic circuitry and components required for a computer to function sit either directly on
the motherboard or in an expansion slot of the motherboard. The most important component on a
motherboard is the chipset which consists of two components or chips known as the Northbridge
and Southbridge. These chips determine the features and capabilities of the motherboard.
The motherboard (also known as the main board, system board, backplane board, or planar board)
holds the majority of a computer's processing power. As a minimum, a motherboard contains the
system CPU; the math co-processor, a second processor that handles calculations involving
floating-point (i.e. decimal) numbers such as scientific calculations and algebraic functions (now
routinely built into the CPU); clock/timing circuits; RAM; cache; the BIOS ROM; serial port(s); a
parallel port; and expansion slots. Each portion of the motherboard is tied together with
interconnecting logic circuitry. Some advanced motherboards also include circuitry to handle drive
and video interfaces.
It is the motherboard, more than any other element of the PC, that defines the performance (and
performance limitations) of any given computer system, which is the main reason motherboard
upgrades are so popular and often provide stunning improvements to a PC.
MOTHERBOARD LIMITATIONS
CPU TYPE
A CPU is responsible for processing each instruction and virtually all of the data needed by the
computer (whether the instruction is for BIOS, the operating system, or an application). The type of
CPU limits the PC's overall processing power. For example, a PC with a Pentium II CPU runs Windows
95 much better than a PC with a “classic” Pentium CPU. Also, a Pentium MMX CPU will generally
handle graphics-intensive applications better than a “classic” Pentium CPU.
CPU SPEED
When CPUs are the same, clock speed (measured in MHz) affects performance. For example, a PC
with a “classic” Pentium 166MHz CPU will run faster than a PC with a “classic” Pentium 120MHz
CPU.
Because CPUs have a finite processing limit, upgrading the CPU will improve system processing,
but you can't just place any old CPU in the CPU socket and expect the motherboard to work. Any
motherboard is limited to using a handful of current CPU versions. For example, Intel's recent
AN430TX motherboard supports Pentium processors at 90, 100, 120, 133, 150, 166, and 200MHz,
as well as Pentium MMX processors running at 166, 200, and 233MHz. By comparison, Intel's new
NX440LX motherboard supports Pentium II microprocessors operating at 233, 266, and 300MHz.
Changing the processor type and speed requires changes in several jumper settings.
MEMORY SLOTS
The sheer amount of memory that can be added to the motherboard will indirectly affect system
performance because of a reduced dependence on virtual memory (a swap file on the hard drive).
Memory is added in the form of SIMMs (Single In-line Memory Modules) or DIMMs (Dual In-line
Memory Modules). Motherboards that can accept more or larger-capacity memory modules will
support more memory. It is not uncommon today to find motherboards that will support 512MB of
RAM (equal to the storage capacity of older hard drives).
MEMORY TYPES
The type of memory will also have an effect on motherboard (and system) performance. Faster
memory will improve system performance. DRAM remains the slowest type of PC memory, and is
usually used in older systems or video boards. EDO RAM is faster than ordinary DRAM, and is now
commonplace in PCs.
SDRAM is measurably faster than EDO RAM, and is appearing in high-to-mid-range PC applications.
By the time you read this book, SDRAM should be common.
RDRAM is an emerging memory type that should gain broad acceptance in the next few years. It is
not necessary for you to understand what these memory types are yet; just understand that
memory performance and system performance are related.
CACHE MEMORY
Traditional RAM is much slower than a CPU—so slow that the CPU must insert pauses (or “wait
states”) for memory to catch up. Cache is a technique of improving memory performance by
keeping a limited amount of frequently used information in VERY fast cache RAM. If the needed
information is found, the CPU reads the cache at full speed (and performance is improved because
less time is wasted). By making the cache larger, it is possible to hold more “frequently used” data.
Older motherboards used from 128KB to 256KB of cache. Current motherboards use 512KB to 1MB
of cache RAM.
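The principle can be illustrated in software (this is only an analogy; the real cache is fast hardware on the CPU or motherboard): keep a small set of recently used items in a fast store, and a workload that keeps revisiting the same items is served mostly from that store.

# Software analogy of caching: count hits against a small least-recently-used
# store for a workload that mostly revisits a handful of addresses.
from collections import OrderedDict

def simulate_cache(accesses, cache_size):
    cache, hits = OrderedDict(), 0
    for address in accesses:
        if address in cache:
            hits += 1
            cache.move_to_end(address)        # mark as recently used
        else:
            cache[address] = True             # fetched from "slow" RAM
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict the least recently used
    return hits / len(accesses)

workload = [0, 1, 2, 0, 1, 3, 0, 1, 2, 4, 0, 1] * 100
for size in (2, 4, 8):
    print(f"cache size {size}: hit rate {simulate_cache(workload, size):.0%}")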
CHIPSETS
A chipset is a set of highly optimized, tightly inter-related ICs which, taken together, handle
virtually all of the support functions for a motherboard. As new CPUs and hardware features are
crammed into a PC, new chipsets must be developed to implement those functions. For example,
the Intel 430HX chipset supports the Pentium CPU and EDO RAM. Their 430VX chipset supports use
of the Pentium CPU, the Universal Serial Bus (USB), and SDRAM.
Several different upgrades can boost your PC's performance, but for a real jump for your old PC,
nothing beats a full motherboard upgrade. A new motherboard, coupled with a high-speed
processor and a large amount of RAM, can significantly improve system performance.
Most recent computers have cases that require a motherboard with an ATX form factor. If you're
replacing an ATX motherboard, you can choose from a wide variety of boards, differing mainly in
the processor types and speeds they support.
It is advisable to choose the processor you want and then purchase a motherboard that supports it.
The most important thing to consider about the new board is its upgradeability, i.e. what CPU
speeds it can accept and how much RAM it can take.
Also, when you want to purchase a new mainboard, you must verify that it is compatible with your
CPU and RAM, because some boards do not support certain types of CPUs and RAM.
Most computer components are designed to perform only one or a limited number of functions, and
they only do so when it is specifically requested of them. The device responsible for organizing the
actions of these components is the processor, also referred to as the central processing unit, or CPU.
As the “brain” of the computer, the processor receives requests from you, the user; determines the
tasks needed to fulfill the request; and translates the tasks into signals that the required
component(s) can understand. The processor also does math and logic calculations.
CPU
Central processing unit (CPU) refers to part of a computer that interprets and carries out, or
processes, instructions contained in the software. The term processor can refer to a CPU as well.
A microprocessor is a CPU manufactured on a single integrated circuit.
Most, but not all, modern CPUs are microprocessors.
CPU SOCKETS
Another important idea in CPU development and upgradeability is the concept of “sockets.” Each
generation of CPU uses a different number of pins (and pin assignments), so a different physical
socket must be used on the motherboard to accommodate each new generation of processor.
Early CPUs were not readily interchangeable, and upgrading a CPU typically meant upgrading the
motherboard. With the introduction of the i486 CPUs, the notion of “Over-Drive” processors
became popular—replacing an existing CPU with a pin compatible replacement processor that
operated at higher internal clock speeds to enhance system performance. Table 11-2 shows that the
earliest “sockets” were designated Socket 1 for early 486SX and DX processors (you can see the
corresponding sockets illustrated in Fig. 11-3). As CPUs advanced, socket types proliferated to
support an ever-growing selection of compatible processors.
Today, the most common type of socket is Socket 7. Socket 7 motherboards support most Pentium-
type processors (i.e., the Intel Pentium, Intel Pentium MMX, AMD (Advanced Micro Devices) K5,
AMD K6, Cyrix 6x86, and Cyrix 6x86MX). By setting the proper clock speed and multiplier, a Socket 7
motherboard can support a variety of Pentium-type CPUs.
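As an illustration of how those settings combine, the CPU's core clock is simply the bus clock multiplied by the jumper-selected multiplier; the values below are assumed examples for the nominal 66 MHz Socket 7 bus, not settings taken from any particular board manual.

# Illustrative sketch: core clock = front-side bus clock x multiplier.
# Always follow the motherboard and CPU documentation for real settings.
def core_clock(bus_mhz, multiplier):
    return bus_mhz * multiplier

BUS = 66.6                      # the nominal "66 MHz" Socket 7 bus
for multiplier in (2.5, 3.0, 3.5):
    print(f"{BUS} MHz bus x {multiplier} = {core_clock(BUS, multiplier):.0f} MHz core")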
There is little doubt that Intel Corporation has been a driving force behind the personal computer
revolution. Each new generation of microprocessor represents not just incremental improvements in
processing speed, but technological leaps in execution efficiency, raw speed, data throughput, and
design enhancements (such as dynamic execution). This part of the chapter provides a historical
overview of Intel microprocessors and compares their characteristics.
The 29,000-transistor 8086 marked the first 16-bit microprocessor—that is, there are 16 data bits
available from the CPU itself. This immediately offered twice the data throughput of earlier 8-bit
CPUs. Each of the 24 registers in the 8086/8088 is expanded to 16 bits, rather than just 8. Twenty
address lines allow direct access to 1,048,576 bytes (1MB) of external system memory. Although
1MB of RAM is considered almost negligible today, IC designers at the time never suspected that
more than 1MB would ever be needed. Both the 8086 and 8088 (as well as all subsequent Intel
CPUs) can address 64KB of I/O space (as opposed to RAM space). The 8086 was available at three
clock speeds: 5MHz, 8MHz, and 10MHz, which allowed the 8086 to process 0.33, 0.66, and 0.75 MIPS
(Millions of Instructions Per Second), respectively. The 8088 was only available in 5MHz and 8MHz
versions (for 0.33 and 0.75 MIPS, respectively), but its rather unique multiplexing nature reduces
its data bandwidth to only 2MB/s.
Intel took a small step backward in 1988 to produce the 80386SX CPU. The i386SX uses 24 address
lines for 16MB of addressable RAM
and an external data bus of 16 bits, instead of a full 32 bits from the DX. Correspondingly, the
processing power for the i386SX is only 3.6 MIPS at 33MHz. In spite of these compromises, this
offered a significantly less-expensive CPU, which helped to propagate the i386 family into desktop
and portable computers. Aside from changes to the address and bus width, the i386 architecture is
virtually unchanged from that of the i386DX.
By 1990, Intel integrated the i386 into an 855,000-transistor, low-power version, called the
80386SL. The i386SL incorporated an ISA-compatible chip set along with power management
circuitry that optimized the i386 for use in mobile computers. The i386SL resembled the i386SX
version in its 24 address lines and 16-bit external data bus.
Each member of the i386 family uses stand-alone math co-processors (80387DX, 80387SX, and
80387SL, respectively). All versions of the 80386 can switch between real mode and protected-
mode, as needed, so they will run the same software as (and are backwardly compatible with) the
80286 and the 8086/8088.
80486 (1989–1994)
The consistent push for higher speed and performance resulted in the development of Intel’s 1.2
million-transistor, 29-register, 32-bit microprocessor, called the 80486DX, in 1989. The i486DX
provides full 32-bit addressing for access to 4GB of physical RAM and up to 64TB (terabytes) of
virtual memory. The i486DX offers twice the performance of the i386DX, with 26.9 MIPS at
33MHz. Two initial versions (25 and 33MHz) were available.
As with the i386 family, the i486 series uses pipelining to improve instruction execution, but the
i486 series also adds 8KB of cache memory right on the IC. Cache saves memory access time by
predicting the next instructions that will be needed by the CPU and loading them into the cache
memory before the CPU actually needs them. If the needed instruction is indeed in cache, the CPU
can access the information from cache without wasting time waiting for memory access. Another
improvement of the i486DX is the inclusion of a floating-point unit (an MCP) in the CPU itself,
rather than requiring a separate coprocessor IC. This is not true of all members of the i486 family,
however.
A third departure for the i486DX is that it is offered in 5- and 3-V versions. The 3-V version is
intended for laptop, notebook, and other low-power mobile computing applications.
Finally, the i486DX is upgradeable. Up to 1989/1990, personal computers were limited by their
CPU—when the CPU became obsolete, so did the computer (more specifically the motherboard).
This traditionally forced the computer user to purchase new computers (or upgrade the
motherboard) every few years to utilize current technology. The architecture of the i486 is
intended to support CPU upgrades where a future CPU using a faster internal clock can be inserted
into the existing system. Intel has dubbed this “Over-Drive” technology. While Over-Drive
performance is not as high as that of a newer PC, it is much less expensive, and allows computer
users to protect their computer investments for a longer period of time. It is vital to note that not all
i486 versions are upgradeable, and the CPU socket on the motherboard itself must be designed
specifically to accept an Over-Drive CPU (see the “CPU sockets” section).
The i486DX was only the first in a long line of variations from Intel. In 1991, Intel released the
80486SX and the 80486DX/50. Both the i486SX and i486DX/50 offer 32-bit addressing, a 32-bit
data path, and 8KB of on-chip cache memory.
HARD DISK
The hard disk uses rigid rotating platters (disks). It stores and retrieves digital data from a planar
magnetic surface. Information is written to the disk by transmitting an electromagnetic flux
through an antenna or write head that is very close to a magnetic material, which in turn changes
its polarization due to the flux. Information can be read back in a reverse manner, as the magnetic
fields cause electrical change in the coil or read head that passes over it.
A typical hard disk drive design consists of a central axis or spindle upon which the platters spin at
a constant speed. Moving along and between the platters on a common armature are the read-write
heads, with one head for each platter face. The armature moves the heads radially across the
platters as they spin, allowing each head access to the entirety of the platter.
The associated electronics control the movement of the read-write armature and the rotation of the
disk, and perform reads and writes on demand from the disk controller.
Modern drive electronics are capable of scheduling reads and writes efficiently across the disk and
remapping sectors of the disk which have failed.
Also, most major hard drive and motherboard vendors now support S.M.A.R.T. technology, by
which impending failures can often be predicted, allowing the user to be alerted in time to prevent
data loss.
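As a hedged example of reading S.M.A.R.T. status, the sketch below calls the smartctl program from the smartmontools package (which must be installed separately and usually requires administrator privileges); the device path /dev/sda is an assumption to adjust for your own machine.

# Query a drive's S.M.A.R.T. health by shelling out to smartctl (smartmontools).
import subprocess

def smart_health(device="/dev/sda"):        # assumed device path
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    smart_health()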
The (mostly) sealed enclosure protects the drive internals from dust, condensation, and other
sources of contamination. The hard disk's read-write heads fly on an air bearing (a cushion of air)
only nanometers above the disk surface. The disk surface and the drive's internal environment
must therefore be kept immaculately clean to prevent damage from finger prints, hair, dust, smoke
particles, etc. given the submicroscopic gap between the heads and disk.
Some people believe a disk drive contains a vacuum — this is incorrect, as the system relies on air
pressure inside the drive to support the heads at their proper flying height while the disk is in
motion. Another common misconception is that a hard drive is totally sealed. A hard disk drive
requires a certain range of air pressures in order to operate properly. If the air pressure is too low,
the air will not exert enough force on the flying head, the head will not be at the proper height, and
there is a risk of head crashes and data loss. (Specially manufactured sealed and pressurized drives
are needed for reliable high-altitude operation, above about 10,000 feet. This does not apply to
pressurized enclosures, like an airplane cabin.) Modern drives include temperature sensors and
adjust their operation to the operating environment.
[Figure: the inside of a hard disk with the platter removed. To the left is the read-write arm; in the
middle, the electromagnets of the platter's motor can be seen.]
Hard disk drives are not airtight. They have a permeable filter (a breather filter) between the top
cover and inside of the drive, to allow the pressure inside and outside the drive to equalize while
keeping out dust and dirt. The filter also allows moisture in the air to enter the drive. Very high
humidity year-round will cause accelerated wear of the drive's heads (by increasing stiction, or the
tendency for the heads to stick to the disk surface, which causes physical damage to the disk and
spindle motor). You can see these breather holes on all drives -- they usually have a warning sticker
next to them, informing the user not to cover the holes. The air inside the operating drive is
constantly moving too, being swept in motion by friction with the spinning disk platters. This air
passes through an internal filter to remove any leftover contaminants from manufacture, any
particles that may have somehow entered the drive, and any particles generated by head crash.
Due to the extremely close spacing of the heads and disk surface, any contamination of the read-
write heads or disk platters can lead to a head crash, i.e. a failure of the disk in which the head
scrapes across the platter surface, often grinding away the thin magnetic film. For GMR heads in
particular, a minor head crash from contamination (that does not remove the magnetic surface of
the disk) will still result in the head temporarily overheating, due to friction with the disk surface,
and renders the disk unreadable until the head temperature stabilizes. Head crashes can be caused
by electronic failure, a sudden power failure, physical shock, wear and tear, or poorly manufactured
disks.
Normally, when powering down, a hard disk moves its heads to a safe area of the disk, where no
data is ever kept (the landing zone). However, especially in old models, sudden power interruptions
or a power supply failure can result in the drive shutting down with the heads in the data zone,
which increases the risk of data loss. Newer drives are designed such that the rotational inertia in
the platters is used to safely park the heads in the case of unexpected power loss. IBM pioneered
drives with "head unloading" technology that lifts the heads off the platters onto "ramps" instead of
having them rest on the platters, reducing the risk of stiction. Other manufacturers also use this
technology.
Spring tension from the head mounting constantly pushes the heads towards the disk. While the
disk is spinning, the heads are supported by an air bearing and experience no physical contact
wear. The sliders (the part of the heads that are closest to the disk and contain the pickup coil itself)
are designed to reliably survive a number of landings and take offs from the disk surface, though
wear and tear on these microscopic components eventually takes its toll. Most manufacturers
design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises
above 50%. However, the decay rate is not linear — when a drive is younger and has fewer
start/stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage
drive (as the head literally drags along the drive's surface until the air bearing is established).
For example, the Maxtor Diamond Max series of desktop hard drives are rated to 50,000 start-stop
cycles. This means that no failures attributed to the head-disk interface were seen before at least
50,000 start-stop cycles during testing.
Using rigid platters and sealing the unit allows much tighter tolerances than in a floppy disk.
Consequently, hard disks can store much more data than floppy disk, and access and transmit it
faster. In 2005, a typical workstation hard disk might store between 80 GB and 400 GB of data,
rotate at 7,200 to 10,000 rpm, and have a sequential transfer rate of over 50 MB/s. The fastest
workstation hard drives spin at 15,000 rpm. Notebook hard drives, which are physically smaller
than their desktop counterparts, tend to be slower and have less capacity. Most spin at only 4,200
rpm or 5,400 rpm (revolutions per minute), though the newest top models spin at 7,200 rpm.
A hard disk is generally accessed over one of a number of bus types, including ATA (AT Attachment,
also called IDE or EIDE), an interface standard for connecting storage devices such as hard drives,
floppy drives, and optical disc drives; SCSI; FireWire/IEEE 1394; USB; and Fibre Channel.
In late 2002, Serial ATA was introduced.
Back in the days of the ST-506 interface, the data encoding scheme was also important. The first ST-
506 disks used Modified Frequency Modulation (MFM) encoding (which is still used on the
common "1.44 MB" (1.4 MiB) 3.5-inch floppy), and ran at a data rate of 5 megabits per
second. Later on, controllers using 2.7 run length limited (RLL) encoding increased this by half, to
7.5 megabits per second; it also increased drive capacity by half.
Many ST-506 interface drives were only certified by the manufacturer to run at the lower MFM data
rate, while other models (usually more expensive versions of the same basic drive) were certified to
run at the higher RLL data rate. In some cases, the drive was over engineered just enough to allow
the MFM-certified model to run at the faster data rate; however, this was often unreliable and was
not recommended. (An RLL-certified drive could run on a MFM controller, but with 1/3 less data
capacity and speed.)
ESDI (Enhanced small disk interface) also supported multiple data rates (ESDI drives always used
2.7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by
the drive and controller; most of the time, however, 15 or 20 megabit ESDI drives weren't
downward compatible (i.e. a 15 or 20 megabit drive wouldn't run on a 10 megabit controller). ESDI
drives typically also had jumpers to set the number of sectors per track and (in some cases) sector
size.
SCSI (Small computer system interface) originally had just one speed, 5 MHz (for a maximum data
rate of 5 megabytes per second), but later this was increased dramatically. The SCSI bus speed had
no bearing on the drive's internal speed because of buffering between the SCSI bus and the drive's
internal data bus; however, many early drives had very small buffers, and thus had to be
reformatted to a different interleave (just like ST-506 drives) when used on slow computers, such
as early IBM PC compatibles and Apple Macintoshes.
ATA drives have typically had no problems with interleave or data rate, due to their controller
design, but many early models were incompatible with each other and couldn't run in a
master/slave setup (two drives on the same cable). This was mostly remedied by the mid-1990s,
when ATA's specification was standardized and the details began to be cleaned up, but it still causes
problems occasionally (especially with CD-ROM and DVD-ROM drives, and when mixing Ultra DMA
and non-UDMA devices).
Serial ATA does away with master/slave setups entirely, placing each drive on its own channel
(with its own set of I/O ports) instead.
FireWire/IEEE 1394 and USB(1.0/2.0) hard disks are external units containing generally ATA or
SCSI drives with ports on the back allowing very simple and effective expansion and mobility. Most
FireWire/IEEE 1394 models are able to daisy-chain in order to continue adding peripherals
without requiring additional ports on the computer itself.
Other characteristics
Almost all hard disks today are of either the 3.5", used in desktops, or 2.5", used in laptops, variety.
2.5" drives are usually slower and have less capacity but use less power and are more tolerant of
movement. Additionally, there is the CF form factor Microdrive which is usually used as storage for
portable devices such as mp3 players and digital cameras. The size designations can be slightly
confusing, for example a 3.5" disk drive has a case that is 4" wide.
SATA 1.0 drives support speeds up to 10,000 rpm and mean time between failure (MTBF) levels up
to 1 million hours under an eight-hour, low-duty cycle. Fibre Channel (FC) drives support up to
15,000 rpm and an MTBF of 1.4 million hours under a 24-hour duty cycle.
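As a back-of-the-envelope illustration of what such MTBF figures imply, the sketch below converts an MTBF and duty cycle into an approximate annualized failure rate (powered-on hours per year divided by MTBF, a simplification that is reasonable only while the result stays small).

# Approximate annualized failure rate from MTBF and duty cycle.
def annual_failure_rate(mtbf_hours, hours_per_day):
    powered_hours_per_year = hours_per_day * 365
    return powered_hours_per_year / mtbf_hours

print(f"SATA 1.0 drive (8 h/day): {annual_failure_rate(1_000_000, 8):.2%} per year")
print(f"Fibre Channel drive (24 h/day): {annual_failure_rate(1_400_000, 24):.2%} per year")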
Addressing modes
There are two modes of addressing the data blocks on more recent hard disks. The older mode is
CHS addressing (Cylinder-Head-Sector), used on old ST-506 and ATA drives and internally by the
PC BIOS. The more recent mode is the LBA (Logical Block Addressing), used by SCSI drives and
newer ATA drives (ATA drives power up in CHS mode for historical reasons).
CHS describes the disk space in terms of its physical dimensions, data-wise; this is the traditional
way of accessing a disk on IBM PC compatible hardware, and while it works well for floppies (for
which it was originally designed) and small hard disks, it caused problems when disks started to
exceed the design limits of the PC's CHS implementation.
The traditional CHS limit was 1024 cylinders, 16 heads and 63 sectors; on a drive with 512-byte
sectors, this comes to 504 MiB (528 megabytes). The origin of the CHS limit lies in a combination of
the limitations of IBM's BIOS interface (which allowed 1024 cylinders, 256 heads and 64 sectors;
sectors were counted from 1, reducing that number to 63, giving an addressing limit of 8064 MiB or
7.8 GiB), and a hardware limitation of the AT's hard disk controller (which allowed up to 65536
cylinders and 256 sectors, but only 16 heads, putting its addressing limit at 2^28 sectors, or 128 GiB).
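The figures quoted above follow directly from multiplying out the CHS geometry with 512-byte sectors, as this short sketch shows.

# Worked arithmetic behind the CHS capacity limits quoted above.
def chs_capacity(cylinders, heads, sectors, sector_bytes=512):
    return cylinders * heads * sectors * sector_bytes

MiB, GiB = 1024 ** 2, 1024 ** 3
print(f"Traditional CHS limit: {chs_capacity(1024, 16, 63) / MiB:.0f} MiB")    # 504 MiB
print(f"BIOS interface limit:  {chs_capacity(1024, 256, 63) / MiB:.0f} MiB")   # 8064 MiB (~7.8 GiB)
print(f"AT controller limit:   {chs_capacity(65536, 16, 256) / GiB:.0f} GiB")  # 128 GiB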
When drives larger than 504 MiB began to appear in the mid-1990s, many system BIOS’s had
problems communicating with them, requiring LBA BIOS upgrades or special driver software to
work correctly. Even after the introduction of LBA, similar limitations reappeared several times
over the following years: at 2.1, 4.2, 8.4, 32, and 128 GiB. The 2.1, 4.2 and 32 GiB limits are hard
limits: fitting a drive larger than the limit results in a PC that refuses to boot, unless the drive
includes special jumpers to make it appear as a smaller capacity. The 8.4 and 128 GiB limits are soft
limits: the PC simply ignores the extra capacity and reports a drive of the maximum size it is able to
communicate with.
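As a rough illustration of the arithmetic behind these limits, the short Python sketch below multiplies out the cylinder, head and sector counts quoted above (assuming 512-byte sectors); the geometry values come from the text, everything else is purely illustrative.

# Sketch: where the classic CHS capacity limits come from (512-byte sectors assumed).
SECTOR_SIZE = 512  # bytes

def chs_capacity(cylinders, heads, sectors):
    """Capacity in bytes for a given cylinder/head/sector geometry."""
    return cylinders * heads * sectors * SECTOR_SIZE

# Combined BIOS + ATA limit: 1024 cylinders, 16 heads, 63 sectors
print(chs_capacity(1024, 16, 63) / 2**20)    # ~504 MiB
# BIOS interface alone: 1024 cylinders, 256 heads, 63 sectors
print(chs_capacity(1024, 256, 63) / 2**20)   # ~8064 MiB (7.8 GiB)
# AT controller alone: 65536 cylinders, 16 heads, 256 sectors
print(chs_capacity(65536, 16, 256) / 2**30)  # ~128 GiB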
SCSI drives, however, have always used LBA addressing, which describes the disk as a linear,
sequentially-numbered set of blocks. SCSI mode page commands can be used to get the physical
specifications of the disk, but this is not used to read or write data; this is an artifact of the early
days of SCSI, circa 1986, when a disk attached to a SCSI bus could just as well be an ST-506 or ESDI
drive attached through a bridge (and therefore having a CHS configuration that was subject to
change) as it could be a native SCSI device. Because PCs use CHS addressing internally, the BIOS
code on PC SCSI host adapters does CHS-to-LBA translation, and provides a set of CHS drive
parameters that tries to match the total number of LBA blocks as closely as possible.
ATA drives can either use their native CHS parameters (only on very early drives; hard drives made
since the early 1990s use zone bit recording, and thus don't have a set number of sectors per track),
use a "translated" CHS profile (similar to what SCSI host adapters provide), or run in ATA LBA
mode, as specified by ATA-2. To maintain some degree of compatibility with older computers, LBA
mode generally has to be requested explicitly by the host computer. ATA drives larger than 8 GiB
are always accessed by LBA, due to the 8 GiB limit described above.
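The translation between CHS and LBA that BIOSes and host adapters perform is, in essence, simple arithmetic. The Python sketch below shows the standard conversion; the 16-head, 63-sector geometry is an illustrative assumption, not a value fixed by the text.

# Sketch of the usual CHS <-> LBA conversion (sectors are numbered from 1).
HEADS = 16              # illustrative translated geometry
SECTORS_PER_TRACK = 63

def chs_to_lba(c, h, s):
    return (c * HEADS + h) * SECTORS_PER_TRACK + (s - 1)

def lba_to_chs(lba):
    c, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
    h, s = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s + 1

print(chs_to_lba(0, 0, 1))   # 0  -- the very first sector on the disk
print(lba_to_chs(1008))      # (1, 0, 1) -- first sector of the next cylinder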
Manufacturers
Most of the world's hard disks are now manufactured by just a handful of large firms: Seagate,
Maxtor, Western Digital, Samsung, and the former drive manufacturing division of IBM, now sold to
Hitachi. Fujitsu continues to make specialist notebook and SCSI drives but exited the mass market
in 2001. Toshiba is a major manufacturer of 2.5-inch and 1.8-inch notebook drives.
Dozens of former hard drive manufacturers have gone out of business, merged, or closed their hard
drive divisions; as capacities and demand for products increased, profits became hard to find, and
there were shakeouts in the late 1980s and late 1990s. The first notable casualty of the business in
the PC era was Computer Memories International or CMI; after the 1985 incident with the faulty
20MB AT drives, CMI's reputation never recovered, and they exited the hard drive business in 1987.
Another notable failure was MiniScribe, who went bankrupt in 1990 after it was found that they
had "cooked the books" and inflated sales numbers for several years. Many other smaller
companies (like Kalok, Microscience, LaPine, Areal, Priam and PrairieTek) also did not survive the
shakeout, and had disappeared by 1993; Micropolis was able to hold on until 1997, and JTS, a
relative latecomer to the scene, lasted only a few years and was gone by 1999.
Rodime was also an important manufacturer during the 1980s, but stopped making drives in the
early 1990s amid the shakeout and now concentrates on technology licensing; they hold a number
of patents related to 3.5-inch form factor hard drives.
There have also been a number of notable mergers in the hard disk industry:
Tandon sold its disk manufacturing division to Western Digital (which was then a controller maker
and ASIC house) in 1988; by the early 1990s Western Digital disks were among the top sellers.
Quantum bought DEC's storage division in 1994, and later (2000) sold the hard disk division to
Maxtor to concentrate on tape drives.
In 1995, Conner Peripherals announced a merger with Seagate (who had earlier bought Imprimis
from CDC), which completed in early 1996.
JTS infamously merged with Atari in 1996, giving it the capital it needed to bring its drive range into
production.
In 2003, following the controversy over the mass failures of the Deskstar 75GXP range (which
resulted in lost sales of its follow-ons), hard disk pioneer IBM sold the majority of its disk division
to Hitachi, who renamed it Hitachi Global Storage Technologies.
It is important to note that hard drive manufacturers often use the metric definition of the prefixes
"giga" and "mega." However, nearly all operating system utilities report capacities using binary
definitions for the prefixes. This is largely historical, since when storage capacities started to exceed
thousands of bytes, there were no standard binary prefixes (the IEC only standardized binary
prefixes in 1999), so 2^10 (1024) bytes was called a kilobyte because 1024 is "close enough" to the
metric prefix kilo, which is defined as 10^3 or 1000. This trend became habit and continued to be
applied to the prefixes "mega," "giga," and even "tera." Obviously the discrepancy becomes much
more noticeable in reported capacities in the multiple gigabyte range, and users will often notice
that the volume capacity reported by their OS is significantly less than that advertised by the hard
drive manufacturer. For example, a drive advertised as 200 GB can be expected to store close to 200
x 10^9, or 200 billion, bytes.
This uses the proper SI definition of "giga" (10^9) and cannot be considered incorrect. Since
utilities provided by the operating system probably define a gigabyte as 2^30, or 1,073,741,824,
bytes, the reported capacity of the drive will be closer to 186.26 GB (actually, GiB), a difference of
well over ten gigabytes. For this very reason, many utilities that report capacity have begun to use
the aforementioned IEC standard binary prefixes (e.g. KiB, MiB, GiB) since their definitions are not
ambiguous.
Another side point is that many people mistakenly attribute the discrepancy in reported and
advertised capacities to reserved space used for file system and partition accounting information.
However, for large (several GiB) filesystems, this data rarely occupies more than several MiB, and
therefore cannot possibly account for the apparent "loss" of tens of Gigabytes.
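A quick back-of-the-envelope calculation makes the prefix discrepancy concrete. The Python sketch below reproduces the 200 GB example from above; the numbers are purely illustrative.

# Sketch: why a "200 GB" drive shows up as roughly 186 GiB in the operating system.
advertised_gb = 200
bytes_on_drive = advertised_gb * 10**9         # manufacturers use the SI (decimal) prefix
reported_gib = bytes_on_drive / 2**30          # most OS utilities divide by 2^30
print(round(reported_gib, 2))                  # 186.26
print(round(advertised_gb - reported_gib, 2))  # 13.74 "missing" gigabytes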
From the original use of a hard drive in a single computer, techniques for guarding against hard
disk failure were developed such as the redundant array of independent disks (RAID). Hard disks
are also found in network attached storage (NAS) devices, but for large volumes of data are most
efficiently used in a storage area network (SAN).
Applications for hard disk drives expanded to include personal video recorders, digital audio
players, digital organizers and digital cameras. In 2005 the first cellular telephones to include hard
disk drives were introduced by Samsung and Nokia.
History
The first hard disk drive shipped as standard with a computer was the IBM 350 Disk File, introduced in
1956 with the IBM 305 computer. This drive had fifty 24-inch platters, with a total capacity of five
million characters. In 1952, an IBM engineer named Reynold Johnson had begun developing this massive
hard disk, consisting of fifty platters, each two feet wide, that rotated on a spindle at 1,200 rpm, with
read/write heads, for the first database running on RCA's Bismark computer.
In 1973, IBM introduced the 3340 "Winchester" disk system (the 30MB + 30 millisecond access
time led the project to be named after the Winchester 30-30 rifle), the first to use a sealed
head/disk assembly (HDA). Almost all modern disk drives now use this technology, and the term
"Winchester" became a common description for all hard disks, though generally falling out of use
during the 1990s.
For many years, hard disks were large, cumbersome devices, more suited to use in the protected
environment of a data center or large office than in a harsh industrial environment (due to their
delicacy), or small office or home (due to their size and power consumption). Before the early
1980s, most hard disks had 8-inch or 14-inch platters, required an equipment rack or a large
amount of floor space (especially the large removable-media drives, which were often referred to as
"washing machines"), and in many cases needed special power hookups for the large motors they
used. Because of this, hard disks were not commonly used with microcomputers until after 1980,
when Seagate Technology introduced the ST-506, the first 5.25-inch hard drive, with a capacity of 5
megabytes. In fact, in its factory configuration the original IBM PC (IBM 5150) was not equipped
with a hard drive.
Most microcomputer hard disk drives in the early 1980s were not sold under their manufacturer's
names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple
ProFile). The IBM PC/XT had an internal hard disk, however, and this started a trend toward buying
"bare" drives (often by mail order) and installing them directly into a system. Hard disk makers
started marketing to end users as well as OEMs, and by the mid-1990s, hard disks had become
available on retail store shelves.
While internal drives became the system of choice on PCs, external hard drives remained popular
for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and
1998 has a SCSI port on the back, making external expansion easy; also, "toaster" Macs did not have
easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on
those models, external SCSI disks were the only reasonable option. External SCSI drives were also
popular with older microcomputers such as the Apple II series and the Commodore 64, and were
also used extensively in servers, a usage which is still popular today. The appearance in the late
1990s of high-speed external interfaces such as USB and IEEE 1394 (FireWire) has made external
disk systems popular among regular users once again, especially for users that move large amounts
of data between two or more locations, and most hard disk makers now make their disks available
in external cases.
The capacity of hard drives has grown exponentially over time. With early personal computers, a
drive with a 20 megabyte capacity was considered large. In the latter half of the 1990s, hard drives
with capacities of 1 gigabyte and greater became available. As of early 2005, the "smallest" desktop
hard disk in production has a capacity of 40 gigabytes, while the largest-capacity internal drives are
a half terabyte (500 gigabytes), with external drives at or exceeding one terabyte. As far as PC
history is concerned, the major drive families have been MFM, RLL, ESDI, SCSI, IDE and EIDE, and
now SATA.
MFM drives required that the electronics on the drive be compatible with the electronics on the
controller card — disks and controllers had to be matched. RLL (Run Length Limited) was a way of
encoding bits onto the platters that allowed for better density. Most RLL drives also needed to be
"compatible" with the controllers that communicated with them.
ESDI was an interface developed by Maxtor. It allowed for faster communication between the PC
and the disk. SCSI (originally named SASI for Shugart (sic) Associates) or Small Computer System
Interface was an early competitor with ESDI.
When the prices of electronics dropped (and because of a demand by consumers), the electronics
that had previously resided on the controller card were moved to the disk drive itself. This advance was
known as "Integrated Drive Electronics" or IDE. Eventually, IDE manufacturers wanted the speed of
IDE to approach the speed of SCSI drives. IDE drives were slower because they did not have as big a
cache as the SCSI drives, and they could not write directly to RAM. IDE manufacturers attempted to
close this speed gap by introducing Logical Block Addressing (LBA). These drives were known as
EIDE. While EIDE was being introduced, though, SCSI manufacturers continued to improve SCSI's
performance. The increase in SCSI performance came at a price — its interfaces were more
expensive. In order for EIDE's performance to increase (while keeping the cost of the associated
electronics low), it was realized that the only way to do this was to move from "parallel" interfaces
to "serial" interfaces, the result of which is the SATA interface. However, as of 2005, performance of
SATA and PATA disks is comparable. Fibre channel (FC) interfaces are left to discussions of server
drives.
Hard Drives
Throughout the operation of the computer, the hard drive will be accessed over and over
again. Information will be read from, saved to, and moved from one place to another on the
drive. Its operation is critical to the perceived efficiency of the computer.
However, the more the hard drive is used, the less efficient it tends to become. The next
two subsections describe common hard drive problems and how to resolve them.
Using Scandisk
The indexing of data on a disk is very important when that data is being saved and
retrieved. Each file on the disk occupies one or more clusters, and no two files can exist on
a single cluster. The first cluster on the disk contains an index of file names and locations.
This index is called a file allocation table (FAT). Whenever you access a file, the controller
first looks it up in the FAT to determine its location on the disk, and then retrieves it.
Without a FAT, the hard or floppy drive would have to search every cluster until it found
the requested file.
However, it is possible for the FAT to develop errors over time. Cross-linked clusters occur
when the FAT records a single cluster as belonging to two different files. Lost clusters occur
when a cluster containing data is not referenced in the FAT at all. Either of these errors can
cause the file to be reported as missing. You can
resolve these errors by running Microsoft’s Scandisk utility.
Scandisk searches the entire disk and compares the contents of each cluster to the
information in the FAT. Scandisk then updates the FAT with the proper information about
the disk’s contents and file locations. Another function of Scandisk is to locate physical “bad
spots” on the disk that cannot store data. Any existing data on these spots is moved, and the
clusters are marked as “bad” so that no new data is stored there.
Although Scandisk can mark clusters as “bad” and retrieve information from them, it cannot
repair bad clusters!
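To make the two FAT errors concrete, the following Python sketch checks a toy cluster map for cross-linked and lost clusters. The file names and cluster numbers are invented for illustration; a real FAT, and the real Scandisk utility, are of course far more involved.

# Toy sketch of the two FAT errors Scandisk looks for.
files = {                      # file name -> clusters the FAT says the file occupies
    "letter.doc": [2, 3, 7],
    "photo.bmp":  [4, 5, 7],   # cluster 7 appears in two chains -> cross-linked
}
clusters_in_use = {2, 3, 4, 5, 6, 7}   # clusters that actually hold data on the disk

seen = {}
for name, chain in files.items():
    for cluster in chain:
        if cluster in seen:
            print(f"Cross-linked cluster {cluster}: {seen[cluster]} and {name}")
        seen[cluster] = name

lost = clusters_in_use - set(seen)
for cluster in sorted(lost):
    print(f"Lost cluster {cluster}: holds data but no file in the FAT points to it")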
When files are saved to the hard or floppy drive, they are written to the first available
cluster(s). Ideally, subsequent files are all saved in consecutive clusters on the disk.
However, suppose a file resided on cluster 4, and another file resided on clusters 5–8. If you
increased the size of the first file so that it no longer fit on one cluster, it would occupy
clusters 4 and 9. Next, suppose you deleted the file on clusters 5–8 and replaced it with a
larger file. That file would now reside on clusters 5–8 and perhaps 10 and 11. These files no
longer reside on consecutive clusters and are said to be fragmented. Fragmentation can
cause the hard or floppy drive to retrieve files more slowly, and can actually cause undue
wear and tear on the drive’s read/write heads.
To defragment a hard or floppy disk, you can run Microsoft’s Disk Defragmenter in
Windows 9x. This utility rewrites the data on the disk so that files are placed on contiguous
clusters.
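The fragmentation scenario above can be mimicked with a few lines of Python. The sketch below uses a simple first-available-cluster allocator; the cluster numbers are illustrative and differ from those in the example.

# Sketch: files always go to the first available cluster(s), so growing files become fragmented.
disk = {}                                  # cluster number -> file name

def allocate(name, count, start=1):
    """Place `count` clusters of file `name` on the first free clusters found."""
    placed = []
    cluster = start
    while len(placed) < count:
        if cluster not in disk:
            disk[cluster] = name
            placed.append(cluster)
        cluster += 1
    return placed

allocate("A", 1)    # file A lands on cluster 1
allocate("B", 4)    # file B lands on clusters 2-5
allocate("A", 1)    # A grows by one cluster -> lands on cluster 6, away from cluster 1
print(sorted(c for c, f in disk.items() if f == "A"))   # [1, 6] -> A is now fragmented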
Follow the steps in Exercise 2-5 to defragment a hard drive.
EXERCISE 2-5:
Defragmenting a Hard Drive
2. From the Start menu, select Programs | Accessories | System Tools | Disk Defragmenter.
3. You will be presented with a Select Drive dialog box. Select the appropriate hard drive
from the drop-down menu, then click OK.
4. The Defragmenter utility will begin. You can view the cluster-by-cluster details of the
operation by clicking the Show Details button.
5. When the process is complete, you will be informed via a dialog box, the contents of
which will vary, depending on the OS you are using. Choose to either exit the utility or
defragment another disk.
Q’s: My computer reports that a particular file doesn’t exist, but I know I saved it. What happened,
and how can I find that file?
Ans: It is possible that the file exists but is not indexed properly. Run Scandisk. If the file exists, the
FAT will be updated and you will be able to access it.
The VDU (visual display unit) is a device, such as a television screen, which produces a visible
display of data; it is also called a monitor. The computer monitor is an output device, a part of the computer's
display system. It connects to a video adapter (video card) that is installed in an expansion
slot on the computer’s motherboard. This system converts signals into text and pictures
and displays them on a TV-like screen (the monitor). The computer sends a signal to the
video adapter, telling it what character, image or graphic to display. The video adapter
converts that signal to a set of instructions that tell the display device (monitor) how to
draw the image on the screen.
Classification of Monitor
There are many ways to classify monitors, but the most basic is in terms of colour
capabilities, which separates monitors into the following classes:
Monochrome: Monochrome monitors actually display two colors, one for the background
and one for the foreground. The colors can be black and white, green and black, or amber
and black.
Colour: Colour monitors can display anywhere from 16 to over 1 million different colors.
Color monitors are sometimes called RGB monitors because they accept three separate
signals -- red, green, and blue.
There are mainly two types of monitors: the CRT monitor and the LCD monitor.
Working Principle
A CRT is a sealed vacuum tube with no air inside. In a CRT monitor, the electron gun
produces a beam of electrons that travels through a focusing system, deflection coils, and
then into the screen to display a picture. A beam of electrons (cathode rays) is emitted by
the electron gun, passes through various focusing and deflection systems, and then hits
specific areas on a phosphor coated screen.
Electron Gun
The electron gun consists of a metal cathode, a control grid, and various anodes. It is
important to remember that electrons are small, negatively charged particles, because their
direction is controlled by the polarity of the applied voltages: negative charges repel each other, while
opposites attract.
Focusing System
After the electron beam leaves the electron gun, the electrons go through another focusing
system. The focusing system, a metal cylinder, uses a positive electric field that causes the
electrons to converge into a small point.
This ensures that the electron beams will only hit one spot on the monitor at a time.
Improving the focusing system increases the sharpness of the picture on the screen.
Deflection Coils
The magnetic deflection coils are used to aim the electron beam at the correct part of the screen. They are
mounted on all sides of the cathode-ray tube, and they control the horizontal and vertical
direction of the electron beam. Varying the electricity running through the coils aims the
beam at the proper screen location.
The light on the screen that a user sees is caused by electrons illuminating a phosphor
coating. Part of the energy from the electrons is converted to heat, and the rest
of the energy causes the phosphor to become “excited.” The phosphor does not hold its
excited state for long, and the light quickly dwindles. Different phosphors hold the light for
different amounts of time. The amount of time it takes for the phosphor to lose 9/10ths of
its original intensity is called the persistence.
Color
A CRT monitor displays color by using the phosphor and the shadow mask method. This is
the same system that televisions use. It is based on the RGB model, which means there is a
red, green, and blue dot at each position on the screen. Three electron guns are used to
activate each color separately. Varying the intensity of each electron gun, or shutting the
beams off, determines the color. This is shown in the figure below. If all beams are off then
the dot is black. If all beams are on, then the color of the dot is white.
Today, the average monitor has the capability to display millions of different colors.
Raster Scanning
Since the phosphor dots lose their color and light very quickly, a system must be
incorporated to refresh, or redraw, the picture at a high rate. The picture is refreshed by
raster-scanning, which is based on television technology. The electron beams are swept
along each row activating each spot on the screen to display the proper colors. It activates
each spot from top to bottom as well. Each of these spots is called a pixel. Increasing the
refresh rate, number of pixels, and colors creates a better quality picture on the screen.
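To get a feel for how much work raster scanning involves, the short Python sketch below estimates how many pixel updates per second a monitor requires; the resolution and refresh rate are illustrative values, not figures from the text.

# Sketch: the work implied by raster scanning. At 1024x768 and a 75 Hz refresh rate,
# every pixel on the screen is redrawn 75 times each second.
width, height, refresh_hz = 1024, 768, 75    # illustrative example values
pixels_per_frame = width * height
pixels_per_second = pixels_per_frame * refresh_hz
print(pixels_per_frame)    # 786432 pixels in one full sweep of the screen
print(pixels_per_second)   # ~59 million pixel updates every second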
A CRT monitor is a system with many parts and methods, working in perfect unison. The
process starts with electrons being emitted by the electron gun. They are pushed into the
cathode-ray tube, and then the electrons light up phosphor pixels on the screen. So every
picture that a computer user sees is not solid at all; it is actually made up of tiny individual
pixels constantly being refreshed. This entire cycle is controlled behind the scenes by a
complex video card and computer.
A Liquid crystal display or LCD is a digital display technology that produces images on a flat
surface by shining light through liquid crystals and colored filters.
CHARACTERISTICS
Takes up less space, consumes less power, and produces less heat than traditional
cathode-ray tube monitors.
Lack of flicker and low glare reduce eyestrain.
Much more expensive than CRTs of comparable size.
Working principle
Liquid crystals are liquid chemicals whose molecules can be aligned precisely when
subjected to electrical fields. When properly aligned, the liquid crystals allow light to pass
through. In an LCD monitor, light from a backlight passes through a polarizing filter and then
through a layer that contains thousands of liquid crystal blobs arrayed in tiny containers
called cells. The cells are, in turn, arrayed in rows across the screen; one or more cells make
up one pixel (the smallest discernible dot on a display). Electrical leads around the edge of
the LCD create an electric field that twists the crystal molecules, which lines the light up
with the second polarizing filter and allows it to pass through. The figure below shows the
constructional details of an LCD panel.
In a color LCD panel, each pixel is made up of three liquid crystal cells; each of those three
cells is fronted by a red, green, or blue filter. Light passing through the filtered cells creates
the colors you see on the LCD.
Occasionally the mechanism that sends the electrical current to one or more pixels fails; in
those instances you'll see a completely dark, "bad" pixel.
Early graphics adapters received data from the processor and basically forwarded the
signals to the monitor, leaving the CPU to do all the work related to processing and
calculating. In non-accelerated graphics adapters, the computer needed to change each
pixel individually to change the image on the screen. After graphical user interfaces (like
Windows for example) became popular, systems began to slow down as the CPU was left
trying to move large amounts of data from the system RAM to the video card.
Today all new video cards are accelerated and are connected to the system's CPU through
high-speed buses such as PCI or AGP. Also known as a 3D accelerator, the graphics
accelerator card is an internal board that generally is installed into the PCI or AGP slot and
reduces the time it takes to produce images on the computer screen by incorporating its
own processor and memory. The biggest difference between accelerated and non-
accelerated cards is that with accelerated video cards, the CPU no longer has to carry the
bulk of the processing burden from graphics calculations. Since the video card has its own
processor, it is able to perform most of the work, leaving your CPU free to process other
tasks.
A graphics accelerator is a type of video adapter that contains its own processor to boost performance
levels. These processors are specialized for computing graphical transformations, so they
achieve better results than the general purpose CPU used by the computer. In addition,
they free up the computer's CPU to execute other commands while the graphics accelerator
is handling graphics computations. The popularity of graphical applications, and especially
multimedia applications, has made graphics accelerators not only a common enhancement,
but a necessity. Most computer manufacturers now bundle a graphics accelerator with
their mid-range and high-end systems.
Monitor
The function of a monitor is to produce visual responses to user requests. Most desktop
computers use cathode ray tube (CRT) monitors. CRTs use an electron gun to activate
phosphors behind the screen. Each dot on the monitor, called a pixel, has the ability to
generate red, green, or blue, depending on the signals it receives. This combination of
colors results in the total display you see on the monitor.
Monitors are available in a wide array of colors and resolutions. The word resolution refers
to the size and number of pixels that a monitor can display. Higher resolutions display
more pixels and have better visual output. Lower resolutions result in grainy displays.
Color graphics adapter (CGA) monitors are an older type and can display combinations of
red, green, and blue at different intensities, resulting in 16 different colors. The maximum
resolution of a CGA monitor is 640 x 200 pixels in monochrome mode and 160 x 100 pixels
in 16-color mode. Enhanced graphics adapter (EGA) monitors are capable of generating up
to 64 colors, of which 16 can be displayed at any one time. EGA monitors have a maximum
resolution of 720 x 350 when displaying text only and 640 x 350 in graphics mode.
Video graphics array (VGA) monitors were the first to use analog rather than digital
output. Instead of creating displays based on the absence or presence of a color (as in
digital CGA and EGA monitors), VGA monitors can display a wide range of colors and
intensities. They can produce around 16 million different colors but can display only up to
256 different colors at a time (an 8-bit color setting). VGA
monitors have a maximum resolution of 720 x 400 in text mode and 640 x 480 in graphics
mode.
Super VGA (SVGA) monitors introduce yet another improvement: They also use analog input
and can provide resolutions as high as 1280 x 1024. Some SVGA monitors can provide even
higher resolutions. SVGA monitors can display up to 16 million colors at once, referred to
as 32-bit true color. Because the human eye can distinguish only approximately 10 million
different colors, it is likely that monitor technology will focus on improving resolution only.
All monitors receive their signals from video cards attached to the motherboard. The
monitor technology must match the technology of the video card to which it is attached.
That is, an EGA monitor will work only with an EGA video card, and an SVGA monitor must
be attached to an SVGA video card.
Monitor   Colours available   Colours at once     Maximum resolution            Signal
CGA       16                  16                  Monochrome mode: 640 x 200    Digital
                                                  Colour mode: 160 x 100
EGA       64                  16                  Text mode: 720 x 350          Digital
                                                  Graphics mode: 640 x 350
VGA       Over 16 million     256                 Text mode: 720 x 400          Analog
                                                  Graphics mode: 640 x 480
SVGA      Over 16 million     Over 16 million     1280 x 1024                   Analog
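The colour counts in the table follow directly from the number of bits used per pixel, and the same figures determine how much video memory a single frame needs. The Python sketch below works through the arithmetic; the 1280 x 1024, 24-bit example values are illustrative.

# Sketch: colours per pixel as a function of colour depth, and memory needed for one frame.
for bits in (4, 8, 16, 24):
    print(bits, "bits per pixel ->", 2**bits, "colours")   # 16, 256, 65536, ~16.7 million

width, height, bits_per_pixel = 1280, 1024, 24              # SVGA-class example values
frame_bytes = width * height * bits_per_pixel // 8
print(round(frame_bytes / 2**20, 2), "MiB per frame")       # 3.75 MiB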
CRT
A modern CRT display has considerable flexibility: it can often handle all resolutions from
640 by 480 pixels (640×480) up to 2048 by 1536 pixels (2048×1536) with 32- bit colour
and a variety of refresh rates.
LCD
A liquid crystal display (LCD) is a thin, flat display device made up of any number of color
or monochrome pixels arrayed in front of a light source or reflector. It is prized by
engineers because it uses very small amounts of electric power, and is therefore suitable
for use in battery-powered electronic devices.
One of the most important parts of a personal computer to take care of is the display unit.
Deciding to replace a display unit is not a simple decision. The following are some
guidelines to tell you whether or not you need to replace your display unit:
1- The size of the monitor you have and the size of the newer one. Sometimes you have a 15"
cathode-ray tube (CRT) monitor; you may replace it with a 17" or 19" one. A bigger
monitor is better for your sight.
2- The type of monitor. Some brand names are better than others. The differences can be
studied from the specification sheets supplied with each. One of the most important things
about a display unit is its resolution, which is measured in pixels. The higher the resolution of a
display unit, the better it will be. Also there is another factor, the dot pitch, which is
measured in mm. The lower the dot pitch, the better the display unit.
3- Sometimes one may think of replacing a CRT monitor with an LCD one. LCD monitors are
better for a person who sits a lot in front of a computer. An LCD costs more, but you feel better
when using it instead of a CRT monitor. Replacing the monitor is very simple: you just turn
off the power, pull out the power cord and then put the newer monitor in its place.
Input/output
Input/output (I/O) describes the flow of data into and out of the central complex (processor and memory)
of a computer, and any movement of information from or to that complex, for example to or
from a disk drive, is also considered I/O.
Sometimes we need better video or audio performance from our PC; in this case we replace
the VGA card or the sound card.
A typical motherboard has an AGP slot where we assemble a VGA card and 6 PCI slots
where we assemble sound cards, modems and other add-on cards.
One may need to replace a video card if it fails or if he/she wants a newer one with better
specifications. The VGA card is very important in many software applications such as video
editing, animation and CAD. When you decide to replace your old VGA card you must
know in advance that the newer one will meet your requirements and is compatible with
the motherboard. The following are the steps for replacing a VGA card.
STEPS
Network cards have many different features; one of the most important is the speed. So, we
need to replace it if it fails or if we need a newer one with better performance.
The keyboard is an input device of computers. The keyboard is also the primary way to interact
with a PC.
Types of Keyboards
Keyboards have changed very little in layout since their introduction. In fact, the most
common change has simply been the natural evolution of adding more keys that provide
additional functionality. The most common keyboards, classified based on the number of
keys, are:
Keyboards are available in various forms. The different types of keyboards are:
d. Cordless keyboards: These keyboards do not make use of any cord or cable
connection. They communicate data using radio frequency and are operated by a
battery. The distance at which the keyboard can be used is around 15 feet.
e. Projection keyboards: The latest type of keyboard, these are virtual keyboards
that can be projected onto, and touched on, any surface. The keyboard watches your fingers
move and translates that action into keystrokes on the device.
Troubleshooting Keyboards
A good keyboard should have smooth keys and work properly. Keyboard
cleaning is very necessary for all of us who use computers. We all get annoyed when any
key on the keyboard sticks and does not work properly as we would like. Mostly this happens
because of dust or dirt in the keyboard; therefore the keyboard needs cleaning.
Once a keyboard or a mouse has to be replaced, some issues have to be taken into
consideration. First, we must check whether the motherboard supports the type of keyboard
or mouse. For keyboards we have AT, PS/2, USB and wireless types. Some old
motherboards do not support the USB type, so we must be careful when we choose this type of
keyboard. This also applies to the mouse, except that instead of AT we have the serial mouse. The
Input/output ports that are found on the rear side of a computer motherboard are shown
in figure 13.
In the figure shown, we can plug the keyboard into either B or C. B is the PS/2 port and C is
the USB port. The mouse can be plugged into A, C or D. A is the PS/2 port, C is the USB port and D is the
serial port.
When deciding to install a new mouse or keyboard you have to ensure that your
motherboard supports it. Sometimes we may use what is called a PS/2-to-AT adaptor to
connect a new PS/2 keyboard to an old motherboard that only supports the AT connection.