CSC 101 Introduction to Computer note for Engineering students
JOSHUA OBIBINE AKA ORACLE
INTRODUCTION
The computer is fast becoming the universal machine of the twenty-first century. Early computers were
large and too expensive to be owned by individuals, so they were confined to laboratories and a few
research institutes, and could only be programmed by computer engineers. Their basic applications were
confined to undertaking complex calculations in science and engineering.
Today, the computer is no longer confined to the laboratory. Computers, and indeed computing, have
become embedded in almost every item we use; computing is fast becoming ubiquitous. It finds
application in engineering, communication, space science, aviation, financial institutions, the social
sciences, the humanities, the military, transportation, manufacturing and the extractive industries, to
mention but a few.
Definitions
A computer is an advanced electronic device that takes raw data as input from the user, processes
these data under the control of a set of instructions (called a program), gives the result (output)
and saves the output for future use. It can perform both numerical and non-numerical (arithmetic and
logical) operations.
A computer has four basic functions: it accepts input, processes data, produces output, and stores
data and information.
A Computer System
A complete history of computing would include a multitude of diverse devices such as the ancient
Chinese abacus, the Jacquard loom (1805) and Charles Babbage's "analytical engine" (1834). It
would also include a discussion of mechanical, analog and digital computing architectures. As
late as the 1960s, mechanical devices, such as the Marchant calculator, still found widespread
application in science and engineering. During the early days of electronic computing devices, there
was much discussion about the relative merits of analog vs. digital computers. In fact, as late as the
1960s, analog computers were routinely used to solve systems of finite difference equations arising
in oil reservoir modeling. In the end, digital computing devices proved to have the power, economics and
scalability necessary to deal with large-scale computations. Digital computers now dominate the
computing world in all areas, ranging from the hand calculator to the supercomputer, and are pervasive
throughout society. Therefore, this brief sketch of the development of scientific computing is limited
to the area of digital, electronic computers.
The evolution of digital computing is often divided into generations. Each generation is characterized
by dramatic improvements over the previous generation in the technology used to build computers,
the internal organization of computer systems, and programming languages. Although not usually
associated with computer generations, there has been a steady improvement in algorithms, including
algorithms used in computational science. The following history has been organized using these widely
recognized generations as mileposts.
• First generation – Vacuum tubes: these were also the first computers that stored their
instructions in memory, which moved from magnetic-drum to magnetic-core technology.
• Second generation – Transistors.
• Third generation – Integrated circuits.
• Fourth generation – Microprocessors.
• Fifth generation: the goal is to develop devices that respond to natural-language input and are
capable of learning and self-organization.
Computer Generations and Their Characteristics
CHARACTERISTICS OF A COMPUTER
• Speed: The computer can manipulate large amounts of data at incredible speed, and its response
time can be very fast.
• Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors in
computing are mostly due to human rather than technological weakness. There are in-built
error-detecting schemes in the computer.
• Storage: It has both internal and external storage facilities for holding data and instructions.
This capacity varies from one machine to the other. Memories are built up in K (kilo) modules,
where K = 1024 memory locations.
• Automatic: Once a program is in the computer's memory, it can run automatically each time it
is opened. The individual has little or no instruction to give again.
• Reliability: Being a machine, a computer does not suffer the human traits of tiredness and lack of
concentration. It will perform the last job with the same speed and accuracy as the first job
every time, even if ten million jobs are involved.
• Flexibility: It can perform any type of task once the task can be reduced to logical steps.
Modern computers can be used to perform a variety of functions like on-line processing,
multiprogramming, real-time processing, etc.
CLASSES OF COMPUTER
• Analog Computers: This class of computers comprises special-purpose machines that surfaced in the
late forties (1948). They are used for solving scientific and mathematical equations or problems; an
example is the thermal analyzer. Data and figures are represented by physical quantities such as
angular positions and voltages.
• Digital Computers: These are machines made up of combinations of chips, flip-flops, buttons
and other electronic devices that make them function at very high speed. A digital computer has
its numbers, data, letters or other symbols represented in digital format. They are mostly
general-purpose machines unless special requirements are included in the design.
• Hybrid Computers: A computer that combines the features of a digital and an analog computer is
called a hybrid computer.
TYPES OF COMPUTER
Super Computers
• A supercomputer is the most powerful computer available at any given time. These machines
are built to process huge amounts of information and do so very quickly.
• Supercomputers are built specifically for researchers or scientists working on projects that
demand very huge amounts of data and variables; an example is nuclear research, where scientists
want to know exactly what will happen during every millisecond of a nuclear chain reaction. (To
demonstrate the capability of supercomputers: for an air pollution control project that involves
more than 500,000 variables, it would take a minicomputer about 45 hours to complete the
simulation, while it would take a supercomputer only 30 minutes.)
• They are big in size, generate a lot of heat and are very expensive. (Supercomputers are made
by companies such as Cray.)
Mainframe Computers
• The largest types of computers in common use are the mainframe computers. They are designed
to handle tremendous amounts of input, output and storage.
• They are used mainly by large organizations like PHCN, NITEL and CBN.
• Other users access mainframe computers through terminals. A terminal consists of a keyboard
and a video display, i.e. a monitor. The mainframe itself is usually kept in the computer room.
(Mainframe computers are made by IBM, Burroughs and Univac.)
Mini Computers
• These are physically small compared to mainframes and are generally used for special purposes
or small-scale general purposes.
• The best way to explain the capabilities of mini computers is to say they lie between
mainframes and personal computers. Like mainframes, they can handle a great deal more input
and output than personal computers.
• Although some minicomputers are designed for a single user, many can handle dozens or even
hundreds of terminals.
• Advances in circuitry mean modern minicomputers can outperform older mainframes of the
60s. (Examples are Digital Equipment Corporation's PDP-11 and VAX range.)
Workstations
• Between minicomputers and microcomputers – in terms of processing power – is a class of
computers known as workstations.
• A workstation looks like a personal computer and is typically used by one person, although it is
still more powerful than the average personal computer.
• The differences in the capabilities of these types of machines are growing smaller. Workstations
significantly differ from microcomputers in two ways: the central processing unit (CPU) of a
workstation is designed differently to enable faster processing of instructions, and while most
microcomputers can run any of the four major operating systems, workstations [built on Reduced
Instruction Set Computing (RISC)] use the UNIX operating system or a variation of it. (A note of
caution: many people use the term workstation to refer to any computer or terminal that is
connected to another computer. Although this was once a common meaning of the term, it has
become outdated.) (The biggest manufacturer of workstations is Sun Microsystems.)
Micro Computers/Personal Computers
• The terms microcomputer and personal computer are used interchangeably to mean the small
free-standing computers that are commonly found in offices, homes and classrooms.
• Many microcomputers are built specially to be used in watches, clocks and cameras. Today,
PCs are seriously challenging mainframes and minicomputers in many areas. In fact, today's PCs
are more powerful than mainframes of just a few years ago, and competition is producing
smaller, faster models every year.
TYPES OF PERSONAL COMPUTERS
• THE DESKTOP: This is the first type of PC and the most common. Most desktops are small
enough to fit on a desk, but a little too big to carry around.
• THE LAPTOP: Laptops weigh about 10 pounds (4.5 kg). They are battery-operated computers
with built-in screens, designed to be carried and used in locations without electricity.
Laptops typically have an almost full-sized keyboard.
• THE NOTEBOOK: Notebooks are similar to laptops, but smaller. They weigh about 6 to 7
pounds (2.7–3.2 kg). As the name implies, they are approximately the size of a notebook and
can easily fit inside a briefcase.
• THE PALMTOP: Palmtops are also known as personal digital assistants (PDAs) and are the
smallest of the portable computers. They are much less powerful than notebook or desktop models
and feature built-in applications such as word processing. They are mostly used to store and
display important telephone numbers and addresses.
[1] Input Devices: Input devices accept data and instructions from the user. They include:
i. The Keyboard: The keyboard is the most common input device. It is used to enter data and
commands into the computer by pressing its keys.
ii. The Mouse: A mouse is a pointing device that enables you to move quickly around the
screen and to select commands from menus rather than typing them. A mouse is
useful because it enables the user to point at items on the screen and click a button to
select an item. It is convenient for entering certain data.
iii. The Trackball: A trackball is an input device that works like an upside-down mouse. You
rest your hand on the exposed ball and your fingers on the buttons. To move the cursor
around the screen, you roll the ball with your thumb. Trackballs are quite popular with
notebook computers because they require less desk space than a mouse.
iv. The Joystick: This is a pointing device commonly used for games. It is not used for
business applications.
v. The Pen: The pen is an input device that allows a user to write on or point at a special pad on
the screen of a pen-based computer, such as a personal digital assistant (PDA).
vi. The Touch Screen: A touch screen is a computer screen that accepts input directly into the
monitor; users touch electronic buttons displayed on the screen. It is appropriate in
environments where dirt or weather would render keyboards and pointing devices useless.
vii. The Scanner: This is an input device used to copy images into a computer's memory
without manual keying. It works by converting any image into electronic form by
shining light on the image and sensing the intensity of the reflection at every point. There
are several kinds of scanners; these include hand-held, flatbed and sheet-fed scanners.
viii. The Bar-Code Reader: This is one of the most commonly used input devices after the
keyboard and mouse. It is commonly found in supermarkets and department stores. The
device converts a pattern of printed bars on products into a product number by emitting
a beam of light, frequently from a laser, that reflects off the bar-code image. A
light-sensitive detector identifies the bar-code image by special bars at both ends of the
image. Once it has identified the barcode, it converts the individual bar patterns into
numeric digits.
Input Devices
[2] Processing Devices: Basically two components handle processing in a computer: the central
processing unit (CPU) and the memory.
a. The Central Processing Unit (CPU): The central processing unit (CPU) is a tiny electronic
chip, known as the microprocessor, located in the system unit. It is installed on the main
circuit board of the computer, the motherboard. The CPU, as the name implies, is where
information is processed within the computer; in this regard, you might think of the CPU
(processor) as the brain of the computer. Every CPU has at least two basic parts: the
control unit and the Arithmetic Logic Unit (ALU). The control unit coordinates all the
computer's activities and contains the CPU's instructions for carrying out commands. The
ALU is responsible for carrying out arithmetic and logic functions; in other words, when
the control unit encounters an instruction that involves arithmetic or logic, it refers it to
the ALU.
b. Memory: What happens to all the information we put into the computer before, while and
after it is processed? It is held in the computer's memory, or Random Access Memory
(RAM). The memory referred to here is not the kind of long-term storage that allows you
to save work on a floppy disk and use it months later, but rather a short-term holding area
that is built into the computer hardware. While the CPU is fast and efficient, it cannot
remember anything by itself; it constantly refers to the memory for software instructions
and to keep track of what it is working on. The terms RAM and memory are often used
interchangeably; "random access" refers to the way the CPU can reach any location in
memory directly for the information it needs. Information is stored in memory chips, and
the CPU can get information faster from RAM than it can from a disk. A computer
therefore reads information or instructions from disks and stores them in RAM, where it
can get at them quickly; the CPU processes the information and then returns it to RAM.
[3] Storage Devices: Among the most important parts of a computer system are the devices that allow
you to save data or information. The physical components or materials on which data are stored are
called storage media. A storage device is a piece of hardware that permanently stores information;
unlike electronic memory, a storage device retains information when the electric power is turned off.
There are several storage devices, and primary among them are:
a. The Floppy Disk: The floppy disk is a circular flat piece of plastic made of a
flexible (or floppy) magnetic material on which data are recorded. Floppy disk drives store
data on both sides of the disks. Earlier computers stored data on only a single side of the
floppy disk.
b. The Hard Disk: The hard disk is generally not visible because hard disks are usually
enclosed within the system unit. The hard disk is a stack of metal platters that spin on
one spindle like a stack of rigid floppy disks. Unlike floppy disks where the disk and drive
are separate, the hard-disk drive, or hard drive is the whole unit. Generally you cannot
remove the hard disk from its drive; however some manufacturers make removable hard
disks that plug into a separate drive unit.
c. The CD-ROM: CD-ROM disks are hard, plastic, silver-colored disks. CD-ROM is an
acronym for Compact Disc Read-Only Memory, which implies that the disk can only be
read; you cannot change or overwrite the contents of a CD-ROM disk.
d. Tape Drives: A tape drive is a device that reads and writes data to the surface of a magnetic
tape, generally used for backing up or restoring the data of an entire hard disk.
e. The Zip Drive: Zip drives are an alternative to tape backup units or tape drives. A zip drive
can be internal or external and uses removable cartridges or disks. A zip disk holds
about 100 MB to 250 MB of data.
Storage Devices
[4] Output Devices: Output devices return processed data, that is, information, back to the user. In
other words, output devices allow the computer to 'talk' to us. The most common output devices are
the monitor and the printer; others include modems and speakers.
a. The Monitor: The monitor is an output device that enables the computer to display to the
user what is going on. It has a screen like that of a television. It is commonly referred to as
the screen or display. It is the main source for output of information from the computer. As
data is entered through an input device, the monitor changes to show the effects of the
command. Messages displayed on the screen allow the user to know if the command is
correct.
b. The Printer: The printer is an output device that produces a hard copy, or printout, on
paper, i.e. it takes data from its electronic form and prints it out on paper. There are three
principal types of printers: laser, inkjet and dot-matrix.
c. The Sound Card: The sound card, otherwise known as a sound board, is a hardware board.
It is a device that produces audio sounds and usually provides ports at the back of the
computer for external speakers. It is installed in one of the expansion slots on the system
unit's motherboard.
d. The Modem: The modem is a device that allows a computer to communicate with another
computer through a telephone line. Both computers need compatible modems. With a
modem, a computer and the required software, you can connect with other computers all
over the world.
Output Devices
SOFTWARE COMPONENTS
Software is a set of instructions that operates a computer, manipulates data and executes particular
functions or tasks. In other words, it is the programs, routines and symbolic languages that control
the functioning of the hardware.
For software (the instructions) to perform various functions, it must be programmed; that is, the
instructions need to be written in a programming language that the computer can understand. Without a
program, a computer is useless.
A computer program is a sequence of instructions that can be executed by a computer to carry out a
process.
There are two kinds of software, systems software and applications software.
[1] Applications Software: Applications software includes the programs that users access to carry
out work. They include applications for the following functions.
Word processing is the most common applications software. The great advantage of
word processing over using a typewriter is that you can make changes without
retyping the entire document. Word processors make it easy to manipulate and format
documents. Examples of word processing software are Microsoft Office Word, Microsoft
Works Word, OpenOffice Writer, etc.
Spreadsheets are computer programs that let people electronically create and manipulate
spreadsheets (tables of values arranged in rows and columns with predefined
relationships to each other). Spreadsheets are used for mathematical calculations such as
accounts, budgets, statistics and so on. Examples are Microsoft Excel, Lotus 1-2-3 and
SPSS.
Database management applications are computer programs that let people create and
manipulate data in a database. A database is a collection of related information that can
be manipulated and used to sort information, conduct statistical analyses or generate
reports. Examples are Microsoft Access, Microsoft SQL Server, MySQL and Oracle
Database.
Presentation and graphics packages are computer programs that enable users to create
highly stylized images for slide presentations and reports. They can also be used to
produce various types of charts and graphs. Many software applications include
graphics components, including paint programs, desktop publishing applications and
so on. An example is Microsoft PowerPoint.
Communications applications typically include software to enable people to send faxes
and emails and dial into other computers.
[2] Systems Software: includes the operating system and all the utilities that enable the computer to
function. The most important program that runs on a computer is the operating system. Every
general-purpose computer must have an operating system in order to run other programs. This
includes controlling functions such as the coordination of the hardware and applications
software, allocating storage facilities, controlling the input and output devices and managing
time sharing for linked or networked computers.
• With asymmetrical multiprocessing, one main CPU retains overall control of the
computer as well as of the other microprocessors.
• In symmetrical multiprocessing, on the other hand, there is no single controlling CPU.
This arrangement provides a linear increase in system capacity for each processor added
to the system.
• Some extensions of UNIX support asymmetric multiprocessing, while Windows NT
supports symmetric multiprocessing.
Microsoft Windows NT is a multitasking, purely graphical OS with networking software that can
make a machine a network client or server. It is single-user and, unlike the Macintosh, allows
access to the DOS-style command-line interface.
The functional units of a computer system are:
• Input unit
• The central processing unit (CPU)
• Control unit
• Arithmetic unit
• Registers
• Memory unit
• Output unit
1. Input Unit:
Computers need to receive data and instructions in order to solve any problem. The input unit links
the external world or environment to the computer system. It consists of one or more input devices;
the keyboard and mouse are the most commonly used. Other input devices are described in the section
on input devices above.
2. The Central Processing Unit (CPU):
The CPU consists of the control unit, the arithmetic unit and several registers. (The control unit and
the ALU were described earlier under processing devices.) The registers are temporary storage units,
which are used to store instructions and intermediate data that may be generated during processing.
3. Memory Unit:
The data and instructions required for processing have to be stored in the memory unit before actual
processing starts. Similarly, the results generated have to be preserved before they are displayed.
The memory unit thus provides space to store input data, intermediate results and the final output
generated, e.g. hard disks, pen drives, floppy disks.
4. Output Unit:
The output unit is used to print or display the results obtained by the execution of a program.
Whenever the user wants output from the computer, the control unit sends a signal to this unit to be
ready to accept processed data from memory and display it, e.g. monitor, printer, speakers, etc.
Although the two graphs (of analog and digital signals) look different in appearance, notice that they
repeat themselves at equal time intervals. Electrical signals or waveforms of this nature are said to
be periodic. Generally, a periodic wave representing a signal can be described using the following
parameters:
Amplitude (A)
Frequency (f)
Periodic time (T)
Amplitude (A): this is the maximum displacement that the waveform of an electric signal can attain.
Frequency (f): this is the number of cycles made by a signal in one second. It is measured in hertz
(Hz); 1 hertz is equivalent to 1 cycle/second.
Periodic time (T): the time taken by a signal to complete one cycle is called the periodic time.
Periodic time is given by the formula T = 1/f, where f is the frequency of the wave.
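The relationship T = 1/f above can be checked with a few lines of Python (a small sketch added for illustration; the example frequencies are hypothetical, not from the text):

```python
def periodic_time(frequency_hz: float) -> float:
    """Return the periodic time T (in seconds) of a signal, using T = 1/f."""
    return 1.0 / frequency_hz

# A 50 Hz signal completes one cycle every 1/50 = 0.02 s.
print(periodic_time(50))      # 0.02
# A 1000 Hz (1 kHz) signal has a periodic time of 1/1000 = 0.001 s.
print(periodic_time(1000))    # 0.001
```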
When a digital signal is to be sent over analog telephone lines, e.g. for e-mail, it has to be
converted to an analog signal. This is done by connecting a device called a modem to the digital
computer. This process of converting a digital signal to an analog signal is known as modulation. On
the receiving end, the incoming analog signal is converted back to digital form in a process known
as demodulation.
2. Concepts of Data Representation in Digital Computers
Data and instructions cannot be entered and processed directly into computers using human language.
Any type of data be it numbers, letters, special symbols, sound or pictures must first be converted into
machine-readable form i.e. binary form. Due to this reason, it is important to understand how a
computer together with its peripheral devices handles data in its electronic circuits, on magnetic media
and in optical devices.
Electronic components, such as the microprocessor, are made up of millions of electronic circuits.
The availability of a high voltage (on) in these circuits is interpreted as '1', while a low voltage
(off) is interpreted as '0'. This concept can be compared to switching an electric circuit on and
off: when the switch is closed, the high voltage in the circuit causes the bulb to light (the '1'
state); on the other hand, when the switch is open, the bulb goes off (the '0' state). This forms a
basis for describing data representation in digital computers using the binary number system.
The presence of a magnetic field in one direction on magnetic media is interpreted as '1', while a
field in the opposite direction is interpreted as '0'. Magnetic technology is mostly used on storage
devices that are coated with special magnetic materials such as iron oxide. Data is written on the
media by arranging the magnetic dipoles of some iron oxide particles to face in one direction and
some others in the opposite direction.
In optical devices, the presence of light is interpreted as '1' while its absence is interpreted as
'0'. Optical devices use this technology to read or store data. Take the example of a CD-ROM: if the
shiny surface is placed under a powerful microscope, the surface is observed to have very tiny holes
called pits; the areas that do not have pits are called lands. A laser beam reflected from a land is
interpreted as '1'; a laser beam entering a pit is not reflected, and this is interpreted as '0'. The
reflected pattern of light from the rotating disk falls on a receiving photoelectric detector that
transforms the patterns into digital form.
Reasons for the use of the binary system in computers
It has proved difficult to develop devices that can understand natural language directly due to the
complexity of natural languages. However, it is easier to construct electric circuits based on the binary
or ON and OFF logic. All forms of data can be represented in binary system format. Other reasons for
the use of binary are that digital devices are more reliable, small and use less energy as compared to
analog devices.
The terms bit, byte, nibble and word are used widely in reference to computer memory and data size.
Bit: a binary digit, which can be either 0 or 1. It is the basic unit of data or information in
digital computers.
Byte: a group of bits (8 bits) used to represent a character. A byte is considered the basic unit for
measuring memory size in a computer.
Nibble: half a byte, which is a grouping of 4 bits.
Word: a group of bits (usually two or more bytes) that the computer handles as a unit. The term word
length is used as the measure of the number of bits in each word; for example, a word can have a
length of 16 bits, 32 bits, 64 bits, etc.
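The units above can be demonstrated with Python's bit operations (a small sketch added for illustration; the character 'A' is an arbitrary example, not from the text):

```python
c = ord('A')             # the character 'A' is stored as one byte: 65 -> 01000001
print(format(c, '08b'))  # '01000001' — 8 bits make 1 byte

# A nibble is half a byte (4 bits): split the byte into its two nibbles.
high_nibble = c >> 4     # top 4 bits
low_nibble = c & 0b1111  # bottom 4 bits
print(format(high_nibble, '04b'), format(low_nibble, '04b'))  # '0100' '0001'

# A 16-bit word can hold two bytes side by side.
word = (c << 8) | ord('B')
print(word.bit_length() <= 16)  # True
```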
Types of data representation
Computers not only process numbers, letters and special symbols but also complex types of data such
as sound and pictures. However, these complex types of data take a lot of memory and processor time
when coded in binary form.
This limitation necessitates better ways of handling long streams of binary digits. Higher number
systems are used in computing to reduce these streams of binary digits into manageable form; this
helps to improve processing speed and optimize memory usage.
o A number system is a set of symbols used to represent values derived from a common base or radix.
o As far as computers are concerned, the commonly used number systems can be classified into four
major categories:
o decimal number system
o binary number system
o octal number system
o hexadecimal number system
o The term decimal is derived from the Latin prefix deci, which means ten. The decimal number system
has ten digits, ranging from 0 to 9. Because this system has ten digits, it is also called the base
ten number system or denary number system.
o A decimal number should always be written with a subscript 10, e.g. X₁₀.
o But since this is the most widely used number system in the world, the subscript is usually
understood and omitted in written work. However, when many number systems are considered together,
the subscript must always be put in so as to differentiate the number systems.
o The magnitude of a number can be described using the following parameters:
o Absolute value
o Place value or positional value
o Base value
The absolute value is the magnitude of a digit in a number. For example, the digit 5 in 7458 has an
absolute value of 5, according to its value on the number line.
The place value of a digit in a number refers to the position of the digit in that number, i.e.
whether it is in the ones, tens, hundreds, thousands, etc.
The total value of a number is the sum of the products of each digit and its place value.
The base value of a number, also known as the radix, depends on the type of number system being
used. The value of any number depends on the radix; for example, the number 100₁₀ is not equivalent
to 100₂.
Binary number system
It uses two digits, namely 1 and 0, to represent numbers. Unlike decimal numbers, where the place
value goes up in factors of ten, in the binary system the place values increase by factors of 2.
Binary numbers are written as X₂. Consider a binary number such as 1011₂: the rightmost digit has a
place value of 1×2⁰, while the leftmost has a place value of 1×2³.
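The place values of 1011₂ can be checked in Python, whose built-in `int` accepts a base argument (a quick sketch added for illustration):

```python
# Sum each digit times its place value: 1×2³ + 0×2² + 1×2¹ + 1×2⁰.
digits = "1011"
total = sum(int(d) * 2**i for i, d in enumerate(reversed(digits)))
print(total)           # 11

# Python's built-in base-2 parser gives the same result.
print(int("1011", 2))  # 11
```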
Octal number system
It consists of eight digits, ranging from 0 to 7. The place values of octal numbers go up in factors
of eight from right to left.
Hexadecimal number system
It consists of sixteen digits: 0–9 and A–F. A hexadecimal number can be denoted using 16 as a
subscript or a capital letter H to the right of the number. For example, 94B can be written as 94B₁₆
or 94BH.
Further conversion of numbers from one number system to another
To convert numbers from one system to another, the following conversions will be considered.
Converting binary numbers to decimal:
First, write the place values starting from the right-hand side.
Write each digit under its place value.
Multiply each digit by its corresponding place value.
Add up the products. The answer will be the decimal number in base ten.
EXAMPLE
Convert 101101₂ to decimal.

Place value   32  16  8  4  2  1
Binary digit   1   0  1  1  0  1

32×1 = 32
16×0 =  0
8×1  =  8
4×1  =  4
2×0  =  0
1×1  =  1

N₁₀ = 32+0+8+4+0+1 = 45₁₀
NB: remember to indicate the base subscript since it is the value that distinguishes the different systems.
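The conversion steps above can be sketched as a small Python function (a hypothetical helper added for illustration, not part of the original note):

```python
def binary_to_decimal(bits: str) -> int:
    total = 0
    place = 1                        # place values: 1, 2, 4, 8, ... from the right
    for digit in reversed(bits):
        total += int(digit) * place  # multiply each digit by its place value
        place *= 2
    return total                     # the sum of the products is the base-ten value

print(binary_to_decimal("101101"))   # 45, matching the worked example
```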
To convert the fractional part of a decimal number to binary, multiply the fractional part by 2
repeatedly; the binary equivalent is extracted from the products by reading the respective integral
digits from the top downwards. Combine the two parts together to get the full binary equivalent.
Example: convert 0.375₁₀ to binary.
0.375×2 = 0.750 → 0
0.750×2 = 1.500 → 1
0.500×2 = 1.000 → 1
Therefore 0.375₁₀ = 0.011₂
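The repeated-multiplication method above can be written as a short Python function (a hypothetical helper added for illustration; the digit limit guards against fractions whose binary form repeats forever):

```python
def fraction_to_binary(fraction: float, max_digits: int = 12) -> str:
    bits = []
    while fraction != 0 and len(bits) < max_digits:
        fraction *= 2                                    # multiply the fractional part by 2
        integral, fraction = int(fraction), fraction - int(fraction)
        bits.append(str(integral))                       # read the integral digits downwards
    return "0." + "".join(bits)

print(fraction_to_binary(0.375))  # 0.011  (products 0.750, 1.500, 1.000 -> digits 0, 1, 1)
```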
NB: When converting a real number from binary to decimal, work out the integral part and the
fractional part separately, then combine them.
Example: convert 11.011₂ to decimal.
Solution: convert the integral and the fractional parts separately, then add them up.

Weight             2¹  2⁰ .  2⁻¹   2⁻²   2⁻³
Binary digit        1   1 .   0     1     1
Value in base 10    2   1 .   0    0.25  0.125

Integral part:
2×1 =  2.000
1×1 = +1.000
       3.000₁₀
Fractional part:
0.50×0  =  0.000
0.25×1  =  0.250
0.125×1 = +0.125
           0.375₁₀
3.000₁₀ + 0.375₁₀ = 3.375₁₀
Thus 11.011₂ = 3.375₁₀
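The two-part method above can be sketched in Python (a hypothetical helper added for illustration, not part of the original note):

```python
def binary_real_to_decimal(number: str) -> float:
    integral, _, fractional = number.partition(".")
    value = int(integral, 2) if integral else 0   # integral part: 11₂ = 3
    for i, digit in enumerate(fractional, start=1):
        value += int(digit) * 2 ** -i             # fractional part: 0×2⁻¹ + 1×2⁻² + 1×2⁻³
    return value

print(binary_real_to_decimal("11.011"))  # 3.375, matching the worked example
```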
To convert a decimal number to binary, divide the integral part continuously by 2, noting the
remainders; the binary equivalent is the remainders read from the last to the first. For the
fractional part, proceed as follows:
Multiply the fractional part by 2 and note the integral digit of the product.
Take the fractional part of the immediate product and multiply it by 2 again.
Continue this process until the fractional part of the subsequent product is 0 or starts to repeat
itself.
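The repeated-division method for the integral part can be sketched as follows (a hypothetical helper added for illustration, not part of the original note):

```python
def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)  # divide continuously by 2, keeping the remainder
        bits.append(str(remainder))
    return "".join(reversed(bits))   # read the remainders from the last to the first

print(decimal_to_binary(45))         # 101101, reversing the earlier example
```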
Converting octal numbers to binary
To convert an octal number to binary, work from left to right: each octal digit is represented using
three binary digits, and the groups are then combined to get the final binary equivalent.
Example: convert 321₈ to binary.
Solution:
3 = 011₂
2 = 010₂
1 = 001₂
Therefore 321₈ = 011010001₂
Converting binary numbers to hexadecimal
To convert binary numbers to their hexadecimal equivalents, simply group the digits of the binary
number into groups of four from right to left, e.g. 11010001 → 1101 0001. The next step is to write
the hexadecimal equivalent of each group:
1101 → D
0001 → 1
Therefore 11010001₂ = D1₁₆
0001- 1
Example 1.13
Convert 512₈ to decimal.
Solution
Place value   8²   8¹   8⁰
              64    8    1
Octal digit    5    1    2
N₁₀ = (5 × 64) + (1 × 8) + (2 × 1)
    = 320 + 8 + 2
    = 330₁₀
Therefore 512₈ = 330₁₀
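The same place-value expansion can be cross-checked in Python, whose built-in `int()` accepts a base argument (a quick verification aid, not part of the note):

```python
# Manual place-value expansion of 512 (base 8), checked against int().
digits = [5, 1, 2]
place_values = [8 ** 2, 8 ** 1, 8 ** 0]           # 64, 8, 1
value = sum(d * p for d, p in zip(digits, place_values))
print(value)           # 330
print(int("512", 8))   # 330, the built-in conversion agrees
```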
To convert an octal number to binary, each digit is represented by three binary digits, because the
maximum octal digit, i.e. 7, needs a maximum of three binary digits (111₂). See table:
Octal digit Binary equivalents
0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111
To convert a hexadecimal number to decimal:
First, write the place values starting from the right hand side.
Multiply each hexadecimal digit by its corresponding place value and then add the products.
The following example illustrates how to convert a hexadecimal number to a decimal number.
Example
Convert 111₁₆ to decimal.
Solution
256 × 1 = 256
 16 × 1 =  16
  1 × 1 = + 1
           273
Therefore 111₁₆ = 273₁₀
Since F is equivalent to the binary number 1111₂, hexadecimal digits are therefore represented
using 4 binary digits, as shown in the table below:
Hexadecimal digit Decimal equivalent Binary equivalent
00 00 0000
01 01 0001
02 02 0010
03 03 0011
04 04 0100
05 05 0101
06 06 0110
07 07 0111
08 08 1000
09 09 1001
A 10 1010
B 11 1011
C 12 1100
D 13 1101
E 14 1110
F 15 1111
The simplest method of converting a hexadecimal number to binary is to express each hexadecimal digit as a
four-bit binary number and then arrange the groups according to their corresponding positions, as shown
in the examples below.
Example 1
Convert 321₁₆ to binary.
Hexadecimal digit    3    2    1
Binary equivalent 0011 0010 0001
321₁₆ = 001100100001₂
Example 2
Convert 5E6₁₆ to binary.
Hexadecimal digit    5    E    6
Binary equivalent 0101 1110 0110
5E6₁₆ = 010111100110₂
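The four-bit expansion can be sketched in Python (illustrative; the function name is an assumption):

```python
# Each hexadecimal digit expands to a four-bit group; format() with "04b"
# zero-pads every group to exactly four bits.
def hex_to_binary(hex_str: str) -> str:
    return "".join(format(int(d, 16), "04b") for d in hex_str)

print(hex_to_binary("321"))  # 001100100001
print(hex_to_binary("5E6"))  # 010111100110
```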
Symbolic representation using coding schemes
Binary Coded Decimal is a 4-bit code used to represent numeric data only. For example, a number like
9 can be represented using Binary Coded Decimal as 1001₂.
Binary Coded Decimal is mostly used in simple electronic devices like calculators and microwaves.
This is because it makes it easier to process and display individual numbers on their Liquid Crystal
Display (LCD) screens.
Standard Binary Coded Decimal, an enhanced format of Binary Coded Decimal, is a 6-bit
representation scheme which can also represent non-numeric characters. This allows 64 characters to be
represented. For example, letter A can be represented as 110001₂ using standard Binary Coded Decimal.
Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit character-coding scheme
used primarily on IBM computers. A total of 256 (2⁸) characters can be coded using this scheme. For
example, the symbolic representation of letter A using Extended Binary Coded Decimal Interchange
Code is 11000001₂.
American Standard Code for Information Interchange (ASCII) is a 7-bit code, which means that only
128 (2⁷) characters can be represented. However, manufacturers have added an eighth bit to this
coding scheme, which can now provide for 256 characters.
This 8-bit coding scheme is referred to as 8-bit American Standard Code for Information
Interchange. The symbolic representation of letter A using the 7-bit scheme is 1000001₂.
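Python exposes ASCII code points directly through `ord()` and `chr()`, so the coding-scheme examples above can be checked interactively (a verification aid, not part of the note):

```python
# ord() gives the ASCII code point; format() shows the 7-bit binary pattern.
code = ord("A")
print(code)                  # 65
print(format(code, "07b"))   # 1000001, the 7-bit ASCII pattern for 'A'
print(chr(0b1000001))        # A, decoding the bit pattern back to the letter
```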
Binary arithmetic operations
In mathematics, the four basic arithmetic operations applied on numbers are addition, subtraction,
multiplications and division.
In computers, the same operations are performed inside the central processing unit by the arithmetic
and logic unit (ALU). However, the arithmetic and logic unit cannot perform binary subtraction
directly. It performs binary subtraction using a process known as complementation, in which the
number being subtracted is complemented and then added. For multiplication and division, the
arithmetic and logic unit uses a method called shifting before adding the bits.
In computer technology, there are three common ways of representing a signed binary number: a prefixed
sign bit, ones complement and twos complement.
In decimal numbers, a signed number has a prefix “+” for a positive number, e.g. +27₁₀, and “–” for a
negative number, e.g. –27₁₀.
However, in binary, a negative number may be represented by prefixing a digit 1 to the number, while
a positive number may be represented by prefixing a digit 0. For example, the 7-bit binary equivalent
of 127 is 1111111₂. To indicate that it is positive, we add an extra bit (0) to the left of the number, i.e.
(0)1111111₂.
To indicate that it is a negative number, we add an extra bit (1), i.e. (1)1111111₂.
The problem with this method is that zero can be represented in two ways, i.e. (0)0000000₂ and
(1)0000000₂.
Ones complement
The term complement refers to a part which together with another makes up a whole. For example, in
geometry, two complementary angles add up to 90°.
The idea of complement is used to address the problem of signed numbers, i.e. positive and negative.
In decimal numbers (0 to 9), we talk of the nines complement. For example, the nines complement
of 9 is 0, that of 5 is 4, while that of 3 is 6.
However, in binary numbers, the ones complement is the bitwise NOT applied to the number. Bitwise
NOT is a unary operator (an operation on only one operand) that performs logical negation on each bit:
0s are negated to 1s while 1s are negated to 0s.
For example, the bitwise NOT of 1100₂ is 0011₂.
Twos complement
Twos complement, equivalent to tens complement in decimal numbers, is the most popular way of
representing negative numbers in computer systems. The advantages of using this method are:
1. There are no two ways of representing a zero, as is the case with the other two methods.
2. Effective addition and subtraction can be done even with numbers that are represented with a sign bit
without a need for circuitries to examine the sign of an operand.
The twos complement of a number is obtained by getting the ones complement and then adding a 1. For
example, to get the twos complement of the decimal number 45₁₀,
first convert it to its binary equivalent, then find its ones complement and add 1 to it, i.e.
45₁₀ = 00101101₂
Ones complement: 11010010₂
Add 1:           11010011₂
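The two-step procedure (flip every bit, then add 1) can be sketched in Python (illustrative; the function name and 8-bit word size are assumptions):

```python
# Ones complement (bitwise NOT) plus one, kept within a fixed word size
# by masking, mirrors how the ALU forms a twos complement.
def twos_complement(n: int, bits: int = 8) -> str:
    mask = (1 << bits) - 1
    ones = (~n) & mask          # bitwise NOT, masked to the word size
    twos = (ones + 1) & mask    # add 1 to get the twos complement
    return format(twos, f"0{bits}b")

print(twos_complement(45))  # 11010011, as in the worked example
```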
Binary addition
1. 0 + 0 = 0
2. 0 + 1₂ = 1₂
3. 1₂ + 0 = 1₂
4. 1₂ + 1₂ = 10₂ (read as 0, carry 1)
5. 1₂ + 1₂ + 1₂ = 11₂ (read as 1, carry 1)
When adding binary numbers, add the digits column by column from the right; whenever a column sum
exceeds 1, write down the rightmost digit of the sum and carry 1 forward to the next column.
Example
Add 10110₂, 1011₂ and 111₂.
Solution
Add the first two numbers and then add their sum to the third number as follows:
Step 1           Step 2
  10110₂           100001₂
+  1011₂         +    111₂
 100001₂          101000₂
Binary subtraction
Direct subtraction
1. 0 – 0 = 0
2. 1₂ – 0 = 1₂
3. 1₂ – 1₂ = 0
4. 10₂ – 1₂ = 1₂ (borrow 1 from the next most significant digit to make 0 become 10₂,
   then subtract 1₂ from 10₂)
The main purpose of using ones complement in computers is to perform binary subtraction. For example, to get
the difference 5 – 3 using the ones complement, we proceed as follows:
1. Rewrite the problem as 5 + (–3) to show that the computer performs binary subtraction by adding the binary
equivalent of 5 to the ones complement of 3.
2. Convert the absolute value of 3 into its 8-bit equivalent, i.e. 00000011₂.
3. Take the ones complement of 00000011₂, i.e. 11111100₂, which is the ones complement representation of –3₁₀.
4. Add the binary equivalent of 5 to the ones complement of 3, i.e.
  00000101
+ 11111100
(1)00000001
5. In ones complement arithmetic, the overflow bit is carried around and added back to the result
(the end-around carry): 00000001 + 1 = 00000010₂, the binary equivalent of +2.
Subtraction using twos complement
As with ones complement, the twos complement of a number is obtained by negating a positive number to its
negative counterpart. For example, to get the difference 5 – 3 using twos complement, first express –3 in
8-bit twos complement form: 00000011₂ → ones complement 11111100₂ → add 1 → 11111101₂. Then add:
  00000101
+ 11111101
(1)00000010
Ignoring the overflow bit, the resulting number is 00000010₂, which is directly read as the binary
equivalent of +2.
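The whole scheme, subtraction performed as twos complement addition, can be sketched in Python (illustrative; the function name and 8-bit word size are assumptions):

```python
# a - b computed as a + (twos complement of b); the fixed-width mask
# discards the overflow bit, just as in the worked example.
def subtract_via_twos_complement(a: int, b: int, bits: int = 8) -> int:
    mask = (1 << bits) - 1
    neg_b = ((~b) + 1) & mask    # twos complement representation of -b
    return (a + neg_b) & mask    # masking drops the overflow bit

print(subtract_via_twos_complement(5, 3))  # 2, as in the worked example
```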
Example
31₁₀ = 00011111₂
Computer Algorithm
What is an algorithm?
An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an
exact list of instructions that conduct specified actions step by step in either hardware- or software-
based routines.
Algorithms are widely used throughout all areas of IT. In mathematics and computer science, an algorithm
usually refers to a small procedure that solves a recurrent problem. Algorithms are also used as specifications
for performing data processing and play a major role in automated systems.
An algorithm could be used for sorting sets of numbers or for more complicated tasks, like recommending
user content on social media. Algorithms typically start with initial input and instructions that describe a
specific computation. When the computation is executed, the process produces an output.
How do algorithms work?
Algorithms can be expressed as natural languages, programming languages, pseudocode, flowcharts and
control tables. Natural language expressions are rare, as they are more ambiguous. Programming languages are
normally used for expressing algorithms executed by a computer.
Algorithms use an initial input along with a set of instructions. The input is the initial data needed to make
decisions and can be represented in the form of numbers or words. The input data gets put through a set of
instructions, or computations, which can include arithmetic and decision-making processes. The output is the
last step in an algorithm and is normally expressed as more data.
For example, a search algorithm takes a search query as input and runs it through a set of instructions for
searching through a database for relevant items to the query. Automation software acts as another example of
algorithms, as automation follows a set of rules to complete tasks. Many algorithms make up automation
software, and they all work to automate a given process.
What are different types of algorithms?
There are several types of algorithms, all designed to accomplish different tasks. For example, algorithms
perform the following:
Search engine algorithm. This algorithm takes search strings of keywords and operators as input, searches
its associated database for webpages relevant to the query and returns results.
Encryption algorithm. This computing algorithm transforms data according to specified actions to protect
it. A symmetric key algorithm, such as the Data Encryption Standard, for example, uses the same key to
encrypt and decrypt data. As long as the algorithm is sufficiently sophisticated, no one lacking the key can
decrypt the data.
Greedy algorithm. This algorithm solves optimization problems by finding the locally optimal solution,
hoping it is the optimal solution at the global level. However, it does not guarantee the globally optimal
solution.
Recursive algorithm. This algorithm calls itself repeatedly until it solves a problem. Recursive algorithms
call themselves with a smaller value every time a recursive function is invoked.
Backtracking algorithm. This algorithm finds a solution to a given problem in incremental approaches
and solves it one piece at a time.
Divide-and-conquer algorithm. This common algorithm is divided into two parts. One part divides a
problem into smaller subproblems. The second part solves these subproblems and then combines the
solutions to produce an overall solution.
Dynamic programming algorithm. This algorithm solves problems by dividing them into subproblems.
The results are then stored to be applied for future corresponding problems.
Brute-force algorithm. This algorithm iterates all possible solutions to a problem blindly, searching for
one or more solutions to a function.
Sorting algorithm. Sorting algorithms are used to rearrange a data structure based on a comparison
operator, which is used to decide a new order for the data.
Hashing algorithm. This algorithm takes data and converts it into a uniform message with a hashing
function.
Randomized algorithm. This algorithm reduces running times and time-based complexities. It uses
random elements as part of its logic.
Unsupervised machine learning involves algorithms that train on unlabeled data. Unsupervised machine
learning algorithms sift through unlabeled data to look for patterns that can be used to group data points into
subsets. Some types of deep learning, such as autoencoders and other neural network models, can be
trained as unsupervised algorithms.
Machine learning used in artificial intelligence also relies on algorithms. However, machine learning-based
systems may have inherent biases in the data that feeds the machine learning algorithm. This could result in
systems that are untrustworthy and potentially harmful.
ALGORITHM /PSEUDO-CODE
Algorithm: An algorithm is a set of steps for solving a particular problem. To be an algorithm, a set of
rules must be unambiguous and have a clear stopping point. There may be more than one way to solve a
problem, so there may be more than one algorithm for a given problem.
Pseudo-code: A pseudo-code is an algorithm, but in this case it uses a mixture of English statements,
some mathematical notations, and selected keywords from a programming language. Most of the time,
when we say algorithm in computer science we mean pseudo-code.
Before writing an algorithm/pseudo-code for a problem, one should find out what is/are the inputs to the
algorithm and what is/are the expected outputs after running the algorithm. Now let us take some exercises to
develop algorithms for some simple problems. While writing algorithms we will use the following symbols
for different operations:
‘+’ for addition,
‘–’ for subtraction,
‘*’ for multiplication,
‘/’ for division, and
‘←’ for assignment. For example, A ← X*3 means A will have a value of X*3.
SAMPLES OF ALGORITHM AND PSEUDOCODE
PSEUDO-CODE            ALGORITHM
START                  BEGIN:
1. INPUT A             First, accept the first number
2. INPUT B             Second, accept the second number
3. Sum ← A + B         Add the first and second numbers together
4. PRINT Sum           Print the result
END OF ALGORITHM       END OF ALGORITHM
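The pseudo-code above maps line for line onto real code. Here is a Python sketch (the function name and sample values are illustrative, not part of the note):

```python
# Direct translation of the sample pseudo-code: INPUT A, INPUT B,
# Sum <- A + B, PRINT Sum.
def add_two_numbers(a, b):
    total = a + b    # 3. Sum <- A + B
    return total

print(add_two_numbers(12, 7))  # 19
```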
CHARACTERISTICS OF AN ALGORITHM
Each step of an algorithm must be exact. This goes without saying. An algorithm must be
precisely and unambiguously described, so that there remains no uncertainty. An instruction that
says “shuffle the deck of cards” may make sense to some of us, but the machine will not have a
clue on how to execute it unless the detailed steps are described. An instruction that says “lift the
restriction” will cause much puzzlement even to human readers.
An algorithm must terminate. The ultimate purpose of an algorithm is to solve a problem. If the
program does not stop when executed, we will not be able to get any result from it. Therefore, an
algorithm must contain a finite number of steps in its execution. Note that an algorithm that
merely contains a finite number of steps may still not terminate during execution, due to the presence
of an ‘infinite loop’.
An algorithm must be effective: An algorithm must provide the correct answer to the problem.
An algorithm must be general: An algorithm must solve every instance of the problem. For
example a program that computes the area of a rectangle should work on all possible dimensions
of the rectangle.
FLOWCHART
A flowchart is a graphical or pictorial representation used to solve a given problem. To be more precise,
it is a graphical representation of an algorithm. It shows the sequence of operations and procedures to be
taken to solve the problem. This means that by seeing a flowchart one can know the operations performed
and the sequence of these operations in a system. Algorithms are nothing but sequences of steps for solving
problems, so a flowchart can be used for representing an algorithm. A flowchart describes the operations,
and the sequence in which they are required, to solve a given problem. You can see a flowchart as a
blueprint of a design you have made for solving a problem.
For example, suppose you are going for a picnic with your friends and you plan the activities you
will do there. If you have a plan of activities, then you know clearly when you will do what activity.
Similarly, when you have a problem to solve using a computer, or in other words you need to write a
computer program for a problem, it is good to draw a flowchart prior to writing the computer
program. A flowchart is drawn according to defined rules.
Information system flowcharts show how data flows from source documents through the computer to final
distribution to users.
On-page connector: provides connection of program flow within the same page.
Flow lines: show the direction of flow.
Problem 1: Write an algorithm to read the radius of a circle and compute its area.
Input to the algorithm:
Radius r of the circle.
Expected output:
Area of the circle.
Algorithm:
Step 1: Start
Step 2: Read/input the radius r of the circle
Step 3: Area ← PI*r*r // calculation of area
Step 4: Print Area
Step 5: End
Start
Read r
Area=3.142*r*r
Print: Area
End
Problem2: Write an algorithm to read two numbers and find their sum.
Inputs to the algorithm:
num1.
num2.
Expected output:
Sum of the two numbers.
Algorithm:
Step1: Start
Step2: Read\input: num1.
Step3: Read\input: num2.
Step4: Sum =num1+num2 // calculation of sum
Step5: Print: Sum
Step6: End
Start
Read: num1, num2
Sum = num1 + num2
Print: Sum
End
Problem 3: Write an algorithm to convert temperature from Fahrenheit to Celsius.
Input to the algorithm:
Temperature in Fahrenheit, F.
Expected output:
Temperature in Celsius.
Algorithm:
Step 1: Start
Step 2: Read temperature in Fahrenheit, F
Step 3: C ← 5/9*(F – 32)
Step 4: Print temperature in Celsius: C
Step 5: End
Start
Read F
C=5/9* (F - 32)
Print: C
End
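Step 3 of the algorithm translates directly into code. A Python sketch (the function name is illustrative), with one caution worth noting when porting the formula: in languages with integer division, such as C or Java, 5/9 evaluates to 0, so it must be written as 5.0/9.0; Python 3's `/` is true division.

```python
# C <- 5/9 * (F - 32), following step 3 of the algorithm above.
def fahrenheit_to_celsius(f: float) -> float:
    return 5 / 9 * (f - 32)   # true division in Python 3, so 5/9 != 0

print(round(fahrenheit_to_celsius(212), 2))  # 100.0
print(fahrenheit_to_celsius(32))             # 0.0
```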
In the late 1960s, two mathematicians, Corrado Böhm and Giuseppe Jacopini, proved that even the most
complex logic can be expressed using three general types of logic or control structures: sequence
(Begin – End), selection (If-Then-Else) and iteration (Do-While or Do-Until). Naturally, these general
types of logic or control structures can be combined in any fashion or combination to produce a process
which, when executed, will yield the desired result.
SEQUENTIAL (BEGIN – END)
The sequence is exemplified by a series of statements placed one after the other; the one above or before
another gets executed first. In flowcharts, a sequence of statements is usually contained in the rectangular
process box.
SELECTION/BRANCHING (IF-THEN-ELSE)
The branch refers to a binary decision based on some condition. If the condition is true, one of the two
branches is explored; if the condition is false, the other alternative is taken. This is usually represented
by the ‘if-then’ construct in pseudo-codes and programs. In flowcharts, this is represented by the
diamond-shaped decision box. This structure is also known as the selection structure.
ITERATIVE/LOOP
The loop allows a statement or a sequence of statements to be repeatedly executed based on some loop
condition. It is represented by the ‘while’ and ‘for’ constructs in most programming languages, for
unbounded loops and bounded loops respectively. (Unbounded loops refer to those whose number of
iterations depends on the eventuality that the termination condition is satisfied; bounded loops refer to
those whose number of iterations is known beforehand.) In flowcharts, a back arrow hints at the
presence of a loop. A trip around the loop is known as an iteration. You must ensure that the condition
for the termination of the loop is satisfied after some finite number of iterations; otherwise it ends
up as an infinite loop, a common mistake made by inexperienced programmers. The loop is also known
as the repetition structure.
The three basic control structures can be represented pictorially as shown below:
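All three control structures can also be seen together in a short Python sketch (the sample data and variable names are illustrative, not from the note):

```python
# Sequence, selection and iteration combined in a few lines.
numbers = [4, 7, 1, 9]      # sample data for illustration

total = 0                   # sequence: statements execute one after another
for n in numbers:           # iteration: a bounded loop over the list
    if n % 2 == 0:          # selection: branch on a condition
        total += n          # executed only when the condition is true
print(total)                # 4, since 4 is the only even number in the list
```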
Visual Basic
Visual Basic is an event-driven language which has some features of Object
Oriented Programming (OOP). Actions are tied to the occurrence of events;
e.g. an action may be triggered by clicking the mouse. This approach makes
application programs more friendly and natural to the end user. In this unit
students are introduced to the concept of working with graphical objects and
general Visual Basic programming concepts.
CSC101 READ MORE