
INTRODUCTION:

The history of the computer is the record of an ongoing effort to make computer hardware faster, cheaper and capable of storing more data. This history covers computing hardware that has evolved from machines that needed separate manual action to perform each arithmetic operation, to punched card machines, and then to stored-program computers.
The stored-program computer refers to the organization of units to perform input and output, to store data, and to operate as an integrated mechanism. Increases in speed and memory capacity, decreases in cost and size in relation to computing power, and the evolution of computer languages are major features of this history.
DEFINITION:
What Is a Computer? A computer is an electronic machine which, under the control of stored programs called software, is capable of accepting data as input, processing the data and giving out information as output. This definition includes the following operations:

 INPUT: A computer accepts data provided by means of an input device, e.g. a keyboard.
 PROCESSING: A computer performs operations on the data received and transforms it into information.
 OUTPUT: The computer produces the information as output on a device, such as a visual display unit or printer, that shows the results of the processing operations.
 STORAGE: A computer stores the results of the processing operations for future use.

This definition is often referred to as the IPOS CYCLE. The four steps of the IPOS cycle do not have to occur in a rigid I-P-O-S sequence. Under the direction of a program, a computer uses the steps of this process when needed and as often as needed.
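
As an illustration, the IPOS cycle can be modelled in a few lines of code. The sketch below is purely illustrative, and the function names are invented for this example: the program accepts data, processes it, gives out the result, and stores it for later use.

    # A minimal, illustrative model of the IPOS cycle.
    # The function names (accept_input, process, ...) are invented for this sketch.

    storage = []  # stands in for the computer's storage

    def accept_input():
        # INPUT: data arrives from an input device (here, the keyboard).
        return input("Enter a number: ")

    def process(data):
        # PROCESSING: transform the raw data into information.
        return int(data) ** 2

    def output(information):
        # OUTPUT: present the result on an output device (here, the screen).
        print("Result:", information)

    def store(information):
        # STORAGE: keep the result for future use.
        storage.append(information)

    data = accept_input()
    information = process(data)
    output(information)
    store(information)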

BLOCK DIAGRAM OF THE IPOS CYCLE.


OVERVIEW:
Before the development of the general-purpose computer, most calculations were done by humans. Tools to help humans calculate were then called "calculating machines", known by proprietary names, or even, as they are now, calculators. The humans who used these machines were themselves then called computers.
Calculators have continued to develop, but computers add the critical elements of conditional response and larger memory, allowing automation of numerical calculation and, more generally, of many symbol-manipulation tasks. Computer technology has undergone profound changes every decade since the 1940s.

Computing hardware has become a platform for uses other than mere computation, such as process automation, electronic communications, equipment control, entertainment and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements, such as the touch screen's role in creating a more intuitive and natural user interface.
Aside from written numerals, the first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device by hand to obtain the result. A sophisticated (and comparatively recent) example is the slide rule, in which numbers are represented as lengths on a logarithmic scale and computation is performed by setting a cursor and aligning sliding scales, thus adding those lengths. Numbers could be represented in a continuous "analog" form, in which, for instance, a voltage or some other physical property was set to be proportional to the number. Analog computers, like those designed and built by Vannevar Bush before WWII, were of this type. Or, numbers could be represented in the form of digits, automatically manipulated by a mechanical mechanism. Although this last approach required more complex mechanisms in many cases, it made for greater precision of results.
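
The slide rule's trick of "adding lengths" works because adding logarithms multiplies the underlying numbers. A minimal sketch of the idea:

    import math

    # On a slide rule, each number x occupies a length proportional to log(x).
    # Sliding one scale against another adds those lengths, which multiplies
    # the numbers, since log(a) + log(b) = log(a * b).
    a, b = 3.0, 4.0
    length = math.log10(a) + math.log10(b)   # the two lengths, added
    product = 10 ** length                   # read the result off the scale
    print(product)                           # ~12.0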
Both analog and digital mechanical techniques continued to be developed, producing many practical
computing machines. Electrical methods rapidly improved the speed and precision of calculating machines, at
first by providing motive power for mechanical calculating devices, and later directly as the medium for
representation of numbers. Numbers could be represented by voltages or currents and manipulated by linear
electronic amplifiers. Or, numbers could be represented as discrete binary or decimal digits, and electrically
controlled switches and combinational circuits could perform mathematical operations.
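
To give a flavour of how electrically controlled switches can perform arithmetic, the sketch below models a half adder, the simplest combinational circuit for adding two binary digits: an XOR gate produces the sum bit and an AND gate produces the carry bit.

    # A half adder modelled in code: two logic gates add two binary digits.
    def half_adder(a, b):
        return a ^ b, a & b   # (sum bit via XOR, carry bit via AND)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> carry {c}, sum {s}")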
The invention of electronic amplifiers made calculating machines much faster than their mechanical or
electromechanical predecessors. Vacuum tube (thermionic valve) amplifiers gave way to solid state transistors,
and then rapidly to integrated circuits which continue to improve, placing millions of electrical switches
(typically transistors) on a single elaborately manufactured piece of semiconductor the size of a fingernail. By
defeating the tyranny of numbers, integrated circuits made high-speed and low-cost digital computers a
widespread commodity.
HISTORICAL DEVELOPMENT.

1. EARLIEST HARDWARE.
The development of the computer has been an incremental process. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with the fingers. Humans used various methods to represent numbers. The first was probably stones: the word "calculate" comes from the Latin word "calculus", meaning small stone. Eventually these stones were placed in rows and columns. However, as numbers increased, the number of stones required became prohibitive, which led to the creation of counting boards. The earliest counting device was probably a form of tally stick; the counting rod is another example.

2. ABACUS:
The abacus was used early on for arithmetic tasks. The first surviving written record of an abacus is by the ancient Greek historian Herodotus, who mentioned its use by Greeks and Egyptians. What we now call the Roman abacus was used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Chinese counting board used rods and beads in a wooden frame. In Russia, the abacus was modified to have colored beads in the middle indicating the decimal place. The abacus is still widely used at markets in the former Soviet Union.

3. ANALOG COMPUTERS:
An analog computer is a form of computer using electronic or mechanical phenomena to model the problem being solved, by using one kind of physical quantity to represent another. The term is used in distinction to digital computers, in which physical or mechanical phenomena are used to construct a finite state machine which is then used to model the problem being solved. In analog computers, computations are often performed using properties of electrical resistance, voltages and so on.
The use of electrical properties in analog computers means that certain calculations are performed in real time, without the calculation delays of digital computers. This property allows certain useful calculations that are comparatively "difficult" for digital computers to perform, for example numerical integration. Any physical process which models some computation can be interpreted as an analog computer.
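
Where an analog integrator produces its result continuously, a digital computer must approximate the same integral step by step. The sketch below (with an integrand invented for the example) shows the digital approach, summing many small rectangles:

    # Digital numerical integration: approximate the integral of f over [a, b]
    # by summing many small rectangles. An analog integrator yields the same
    # quantity continuously, with no iteration at all.
    def integrate(f, a, b, steps=100_000):
        h = (b - a) / steps
        return sum(f(a + i * h) for i in range(steps)) * h

    print(integrate(lambda x: x * x, 0.0, 1.0))  # ~0.3333, exact value 1/3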
Analog computers often have a complicated framework, but they have, at their core, a set of key
electrical components which perform the calculations, which the operator manipulates through the computer's
framework.
Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These include the Antikythera mechanism and the astrolabe from ancient Greece (c. 150–100 BC), which are generally regarded as the earliest known mechanical analog computers. Other early mechanical devices used to perform one type of calculation or another include the planisphere and other mechanical computing devices invented by Abū Rayhān al-Bīrūnī (c. AD 1000). Some examples of later mechanical calculating devices are:

A. Pascaline - This was a mechanical calculator which performed addition and subtraction. In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and, after three years of effort and 50 prototypes, he invented the mechanical calculator called the Pascaline. He built twenty of these machines in the following ten years.
B. G.W. LEIBNIZ: Gottfried Wilhelm von Leibniz invented the Stepped Reckoner and its famous stepped-drum cylinders around 1672, adding direct multiplication and division to the capabilities of the Pascaline. Besides addition and subtraction, the machine could therefore multiply and divide, and it was also intended to extract square roots. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system.
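
Leibniz's binary system writes every number with only the digits 0 and 1, each position being worth a power of two. A one-line illustration of the contrast with decimal:

    # Binary representation: each position is a power of two.
    n = 1945                       # the year ENIAC was completed
    print(bin(n))                  # '0b11110011001'
    print(int("11110011001", 2))   # converts back to 1945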

C. PUNCHED CARD TECHNOLOGY:


Punched card technology started in 1801, when Joseph-Marie Jacquard developed a loom in which the pattern being woven was controlled by punched cards. The series of cards could be changed without changing the mechanical design of the loom. Powered by water, this "machine" came 140 years before the development of the modern computer. It was a landmark achievement in programmability.
In 1834 Charles Babbage created a device called the Analytical Engine, a direct improvement on the Jacquard device that used many of the innovations later incorporated in electronic computers. In 1835, Babbage described the Analytical Engine as a general-purpose programmable computer, employing punched cards for input and a steam engine for power, and using the positions of gears and shafts to represent numbers. His initial idea was to use punched cards to control a machine that could calculate and print logarithmic tables with great precision (a special-purpose machine). Babbage's idea soon developed into a general-purpose programmable computer.
Data could be input via metal punched cards, and output either as punched cards or print. The device had a mechanical memory. The central processing unit (CPU) used registers and drums to translate user instructions into control of the hardware. By 1842 the design was sufficiently developed that Ada Lovelace, in her notes on the engine, could describe how it might be programmed to compute a sequence of numbers. She is generally regarded as the first programmer.
In the late 1880s, the American Herman Hollerith invented data storage on a medium that could then be read by a machine. Prior uses of machine-readable media had been for control (automatons such as piano rolls or looms), not data. After some initial trials with paper tape, he settled on punched cards. To process these punched cards he invented the tabulator and the key punch machine. These inventions were the foundation of the modern information processing industry. Hollerith's company eventually became the core of IBM. IBM developed punched card technology into a powerful tool for business data processing and produced an extensive line of unit record equipment. By 1950, the IBM card had become ubiquitous in industry and government. Other companies that also developed punched card equipment were Remington and Burroughs. Compared to today's machines, these computers were slow, usually processing 50–220 cards per minute, each card holding about 80 decimal digits (characters). At the time, however, punched cards were a huge step forward: they provided a means of input/output, and memory storage, on a huge scale. For more than 50 years after their first use, punched card machines did most of the world's business computing and a considerable amount of the computing work in science. Punched cards are still used and manufactured to this day, and their distinctive dimensions (and 80-column capacity) can still be recognized in forms, records, and programs around the world.
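
The 80-column card survives in software as the fixed-width record: exactly 80 characters per card, with each field occupying an agreed range of columns. A sketch of the idea (the field layout here is invented purely for illustration):

    # One punched card = one 80-character fixed-width record.
    # The column layout (name in columns 1-20, year in 21-30) is invented
    # for this example.
    def make_card(name, year):
        return f"{name:<20}{year:>10}".ljust(80)   # pad to all 80 columns

    card = make_card("HOLLERITH", "1890")
    print(len(card))                                # 80
    print(card[0:20].strip(), card[20:30].strip())  # read fields back by column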

D. DESKTOP CALCULATORS:
By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned to people who used these calculators to perform mathematical calculations. Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the CURTA was introduced. This was a small, portable, mechanical calculator that was about the size of a pepper grinder. During the 1950s and 1960s a variety of different brands of mechanical calculators appeared on the market.
The first all-electronic desktop calculator was the British ANITA Mk.VII, which used a Nixie tube display. In June 1963, Friden introduced the four-function EC-130, which had an all-transistor design and so marked the move from tube-based to solid-state electronics in desktop calculators.

4. HYBRID COMPUTERS:
A hybrid computer is an intermediate device in which an analog output is converted into digits. Because of their ease of use and because of technological breakthroughs in digital computers in the early 1970s, analog-digital hybrids were replacing the analog-only systems. Hybrid computers are used to obtain a very accurate, but not very mathematically precise, value from an analog computer front-end; this value is then fed into a digital computer, which uses an iterative process to achieve the final desired degree of precision. Because the analog computer supplies its highly accurate, three- or four-digit result almost instantaneously, the total computation time necessary to reach the desired precision is dramatically reduced, since many fewer digital iterations are required. Digital electronic computers like the ENIAC spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications.
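
The hybrid principle is easy to mimic numerically: a cheap, roughly accurate guess (standing in for the analog front-end) seeds an iterative digital refinement, so far fewer iterations are needed. A sketch using Newton's method for square roots (the seeding scheme is invented for illustration):

    # A rough "analog" estimate seeds an iterative digital refinement
    # (Newton's method for sqrt(x)); a better seed means fewer iterations.
    def digital_refine(x, guess, tol=1e-12):
        iterations = 0
        while abs(guess * guess - x) > tol:
            guess = (guess + x / guess) / 2   # one Newton step
            iterations += 1
        return guess, iterations

    x = 2.0
    print(digital_refine(x, guess=1.0))     # cold start: more iterations
    print(digital_refine(x, guess=1.414))   # 4-digit "analog" seed: fewer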

5. ELECTRONIC DIGITAL COMPUTERS:


The era of modern computing began with a flurry of development before and during World War II, as
electronic circuit elements replaced mechanical equivalents, and digital calculations replaced analog
calculations.
These computers were built by hand using circuits containing relays or valves (vacuum tubes), and
often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.
In 1936, Alan Turing published a paper that proved enormously influential in computing and computer science. Its main purpose was to prove that there were problems (namely the halting problem) that could not be solved by any sequential process. In doing so, Turing provided a definition of a universal computer which executes a program stored on tape. This construct came to be called a Turing machine. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. For a computing machine to be a practical general-purpose computer, there must also be some convenient read-write mechanism, such as punched tape.
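
A Turing machine is no more than a tape, a read-write head, and a table of rules. The minimal simulator below is an illustrative sketch (not Turing's own formulation); its rule table increments a binary number written on the tape, with the head starting on the rightmost digit:

    # A minimal Turing machine: (state, symbol) -> (write, move, next state).
    rules = {
        ("inc", "1"): ("0", -1, "inc"),    # carry: turn 1 into 0, move left
        ("inc", "0"): ("1",  0, "halt"),   # absorb the carry and stop
        ("inc", " "): ("1",  0, "halt"),   # off the left edge: new digit
    }

    def run(tape, head, state="inc"):
        cells = dict(enumerate(tape))                  # unbounded, sparse tape
        while state != "halt":
            write, move, state = rules[(state, cells.get(head, " "))]
            cells[head] = write
            head += move
        span = sorted(cells)
        return "".join(cells[i] for i in range(span[0], span[-1] + 1)).strip()

    print(run("1011", head=3))   # '1100'  (binary 11 + 1 = 12)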
Electronic digital computers developed along three parallel streams during the World War II era. The first, the Z-series, was the work of the German Konrad Zuse. The second was a secret British development called COLOSSUS, while the third was the work of the Americans, called ENIAC, which also roughly coincided with the arrival of the first generation of computers. Other examples are the Atanasoff-Berry Computer, which was the first electronic computer, and the Harvard Mark I.

A. ATANASOFF-BERRY COMPUTER (ABC).


The Atanasoff-Berry Computer was built between 1939 and 1942, and it was the first electronic computer. It was developed by physics and mathematics professor John Atanasoff and his graduate student, Clifford Berry. This computer used the binary system found in modern computers, and its method for storing data was quite similar to that of the modern computer. The design used over 300 vacuum tubes and employed capacitors fixed in a mechanically rotating drum for memory. Though the ABC machine was not programmable, it was the first to use electronic tubes in an adder.
B. ZUSE:
Konrad Zuse started construction of his first Z-series computers, featuring memory and programmability, in 1936. Zuse's purely mechanical, but already binary, Z1, finished in 1938, never worked reliably. Zuse's later machine, the Z3, was finished in 1941 and thus became the first functional program-controlled, all-purpose, digital computer. In many ways it was quite similar to modern machines, although programs were fed into it on punched film. In 1946, IBM bought the patents of the Zuse computers.

C. COLOSSUS:
The Colossus was also built during World War II. Colossus was a computer specifically designed for code breaking; it was used by the British during the war to break German coded messages. Colossus was the world's first totally electronic programmable computing device. It used a large number of valves (vacuum tubes), had paper-tape input, and was capable of being configured to perform a variety of Boolean logical operations on its data.

D. ENIAC:
In April 1943, the building of the Electronic Numerical Integrator Analyzer and Computer (ENIAC) commenced. It was developed by colleagues John Mauchly and J. Presper Eckert Jr. and built at the University of Pennsylvania's Moore School of Electrical Engineering. ENIAC was the first general-purpose, all-electronic, programmable digital computer. It combined, for the first time, the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine, and it also had modules to multiply, divide and take square roots. High-speed memory was limited to 20 words (about 80 bytes). The ENIAC also contained accumulators, special registers used to store data; in addition, the computer used a decimal number system rather than the binary system used in modern computers.

6. GENERATIONS OF COMPUTER:
The history of computer development from the era of electronic digital computation onwards is often described in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient devices. A generation refers to the state of improvement in the product development process; the term is also used for the successive advancements of new computer technology.
With each new generation, the circuitry has become smaller and more advanced than in the previous generation. As a result of this miniaturization, speed, power and memory capacity have increased proportionally. New discoveries are constantly being developed that affect the way we live, work and play. The generations are as follows:

(A). FIRST GENERATION COMPUTERS (1940-1956): VACUUM TUBES.


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were
often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great
deal of electricity, generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language to perform operations, and they could only
solve one problem at a time. Machine languages are the only languages understood by computers. While easily
understood by computers, machine languages are almost impossible for humans to use because they consist
entirely of numbers.
Computer programmers, therefore, use either high-level programming languages or assembly language programming. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Programs written in high-level programming languages are translated into assembly language or machine language by a compiler. Assembly language programs are translated into machine language by a program called an assembler (assembly language compiler).
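
To make the levels concrete, here is the same tiny computation expressed three ways. The "assembly" mnemonics and numeric opcodes below belong to an invented toy CPU, purely for illustration:

    # The same computation (5 + 3) at three levels of abstraction.

    result = 5 + 3                 # high-level language: one readable line

    assembly = ["LOAD 5",          # assembly style: named instructions
                "ADD 3",           # for an invented toy CPU
                "STORE result"]

    machine_code = [(0x01, 5),     # machine-language style: pure numbers,
                    (0x02, 3),     # (opcode, operand) pairs -- all an early
                    (0x03, 0)]     # CPU could actually understand

    # A toy "CPU" executing the numeric form.
    acc, memory = 0, {}
    for opcode, operand in machine_code:
        if opcode == 0x01:   acc = operand           # LOAD
        elif opcode == 0x02: acc += operand          # ADD
        elif opcode == 0x03: memory[operand] = acc   # STORE
    print(memory[0])   # 8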
Every CPU has its own unique machine language. Programs must be rewritten or recompiled, therefore, to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.
EDVAC (Electronic Discrete Variable Automatic Computer), the successor to ENIAC, was the first first-generation computer designed as a stored-program computer.
The UNIVAC I (Universal Automatic Computer) was the first commercial computer. It used 5,200 vacuum tubes and consumed 125 kW of power. A key feature of the UNIVAC system was a newly invented type of metal magnetic tape, with a high-speed tape unit, for non-volatile storage; it was capable of storing 1,000 words of 11 decimal digits (72-bit words).
In 1952, IBM publicly announced the IBM 701 Electronic Data Processing Machine, its first mainframe computer. The IBM 704, introduced in 1954, used magnetic core memory, which became the standard for large machines. Fortran, the first implemented high-level general-purpose programming language, was also developed at IBM for the 704.

Characteristics of 1st generation Computers.


 These computers used vacuum tubes for data processing and storage.
 They had a memory size of 20 bytes and a speed of 5 mbps.
 They produced a lot of heat.
 They were unreliable and could not work fast with large amounts of data.
 They used punched cards for data storage.
 Programs were machine dependent.
 They consumed a lot of power.

(B). SECOND GENERATION COMPUTERS (1956-1963)- TRANSISTORS:


The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in
computer designs, giving rise to the "second generation" of computers. Prior to the invention of transistors,
digital circuits were composed of vacuum tubes, which had many disadvantages. They were much larger,
required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the
invention of transistors, computing as we know it today would not be possible. The transistor was far superior to
the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more
reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that
subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation
computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly
languages, which allowed programmers to specify instructions in words. High-level programming languages
were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the
first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic
core technology.
Examples of second generation computers include the ATLAS and the IBM 1401.

Characteristics of 2nd generation computers.


 They were capable of translating, processing and storing data.
 They had a memory size of 32 bytes and a speed of 10 mbps.
 They were more reliable than first generation computers.
 They produced less heat than first generation computers.
 They used punched cards for data storage.
 They consumed less energy than first generation computers.

(C). THIRD GENERATION (1964-1971)- INTEGRATED CIRCUITS:


The mass increase in the use of computers accelerated with 'Third Generation' computers. The first integrated circuit was produced in September 1958, but computers using integrated circuits did not begin to appear until 1964. The development of the integrated circuit was the hallmark of the third generation of computers: transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than ¼ square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to run many
different applications at one time with a central program that monitored the memory. Computers for the first
time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Smaller, affordable hardware also brought about the development of important new operating systems like
Unix.
Some of their earliest users were NASA and the military. Digital Equipment Corporation became the number two computer company, behind IBM, with its popular PDP and VAX computer systems. In 1966, Hewlett-Packard introduced its HP-2116, offering computing power formerly found only in much larger computers. It supported a wide variety of languages, among them BASIC, ALGOL and FORTRAN.

Characteristics of third generation computers:


 They used integrated circuits (ICs) to store data.
 An integrated circuit consisted of many transistors.
 They used storage disks for data storage, e.g. magnetic disks and tapes.
 Third generation computers were more reliable than the previous generations.
 They produced less heat.

(D). FOURTH GENERATION (1971-PRESENT)-MICROPROCESSORS:


The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were rebuilt onto a single silicon chip containing a CPU. The invention of the microprocessor led to a further reduction in the size of computers as compared to previous generations: what in the first generation filled an entire room could now fit in the palm of the hand. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.
Two typical components of a CPU, sketched in code after this list, are:

 The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
 The control unit, which extracts instructions from memory and decodes and executes them, calling on
the ALU when necessary.
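
A minimal sketch of how these two parts cooperate (the instruction set is invented for illustration): the control unit fetches and decodes each instruction, calling on the ALU for the actual arithmetic and logic.

    # A toy CPU split into its two classic parts.
    def alu(op, a, b):
        # Arithmetic logic unit: performs the arithmetic and logical operations.
        return {"ADD": a + b, "SUB": a - b, "AND": a & b, "OR": a | b}[op]

    def control_unit(program):
        # Control unit: fetch each instruction, decode it, execute via the ALU.
        acc = 0
        for op, operand in program:          # fetch
            if op == "LOAD":                 # decode...
                acc = operand
            else:
                acc = alu(op, acc, operand)  # ...and execute on the ALU
        return acc

    print(control_unit([("LOAD", 6), ("ADD", 7), ("AND", 12)]))   # prints 12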

Microprocessors also moved out of the realm of desktop computers and into many areas of life as more
and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which
eventually led to the development of the Internet. Fourth generation computers also saw the development of
GUI's (Graphical User Interface), the mouse and handheld devices.
Companies associated with the fourth generation of computers include Intel, whose 4004 chip was developed in 1971; IBM, which introduced its first home-user computer in 1981; Apple, which introduced the Macintosh in 1984; and present-day personal computer makers such as HP, Dell and Compaq.

Characteristics of Fourth Generation Computers.


 These computers use microprocessors to process data.
 The microprocessor is a single chip which performs the computer's operations.
 Programs are machine independent.
 They are more reliable.
(E). FIFTH GENERATION (PRESENT AND BEYOND)-ARTIFICIAL INTELLIGENCE:
Fifth generation computing devices, based on artificial intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today.
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans.
Artificial intelligence includes:
Games Playing: programming computers to play games such as chess and checkers.

Expert Systems: programming computers to make decisions in real-life situations (e.g. some expert systems help doctors diagnose diseases based on symptoms).

Natural Language: programming computers to understand natural human languages.

Neural Networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.

Robotics: programming computers to see and hear and react to other sensory stimuli.

Currently, no computers exhibit full artificial intelligence. The greatest advances have occurred in the field of game playing, where computer chess programs are now capable of beating humans.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of
very limited tasks.
Natural-language processing offers the greatest potential rewards because it would allow people to
interact with computers without needing any specialized knowledge. You could simply walk up to a computer
and talk to it. Some rudimentary translation systems that translate from one human language to another are
already in existence.
There are also voice recognition systems that can convert spoken sounds into written words, but they do
not understand what they are writing; they simply take dictation.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general; however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
There are several programming languages that are known as AI languages because they are used almost
exclusively for AI applications. The two most common are LISP and Prolog.
Basically, the goal of fifth generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
