Unit II
Introduction to Algorithms
An algorithm is a method for solving a computational problem: a formula or set of steps for solving the problem. To be an algorithm, a set of rules must be unambiguous and must have a clear stopping point. Algorithms can be expressed in any language, from natural languages such as English or French to programming languages such as FORTRAN.
Analyzing an algorithm lets us make precise claims about it; for example, we might be able to say that our algorithm indeed correctly solves the problem in question and runs in time at most f(n) on any input of size n.
Definition. An algorithm is a finite set of instructions for performing a computation or solving a problem.
Types of Algorithms Considered. In this course, we will concentrate on several different types of relatively
simple algorithms, namely:
Selection -- Finding a value in a list, counting numbers (see the search sketch after this list);
Sorting -- Arranging numbers in order of increasing or decreasing value; and
Comparison -- Matching a test pattern with patterns in a database.
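As an informal illustration of finding a value in a list, the following sketch (in C; the names search and key are illustrative assumptions, not part of the course text) scans the list from left to right and reports the position of the first match:

#include <stdio.h>

/* Linear search: return the index of key in a[0..n-1], or -1 if it is absent. */
int search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1};
    printf("%d\n", search(a, 4, 9));   /* prints 2 */
    return 0;
}

Note that this sketch has the properties listed next: it has well-defined inputs and outputs, every step is definite, and it stops after at most n comparisons.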
Properties of an Algorithm:
Input -- The values accepted by the algorithm are called its inputs or arguments.
Output -- The result produced by the algorithm is the solution to the problem that the algorithm is designed
to address.
Definiteness -- The steps in each algorithm must be well defined.
Correctness -- The algorithm must perform the task it is designed to perform, for all input combinations.
Finiteness -- Output is produced by the algorithm after a finite number of computational steps.
Effectiveness -- Each step of the algorithm must be performable exactly and in a finite amount of time.
Generality -- The procedure inherent in a specific algorithm should be applicable to all problems of the same general form, with minor modifications permitted.
Complexity of an Algorithm:
Algorithmic complexity is concerned with how fast or slow an algorithm performs.
Space S(n) -- How much storage (memory or disk) is required for an algorithm to produce a specified
output given n inputs?
Time T(n) -- How long does it take to compute the algorithm (with n inputs) on a given architecture?
Cost C(n) = T(n) · S(n) -- Sometimes called the space-time bandwidth product, this measure tells a system
designer what expenditure of aggregate computational resources is required to compute a given algorithm
with n inputs.
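As an informal illustration of T(n) and S(n), the following sketch (assumed C, not taken from the text) sums the n elements of an array: it performs roughly n additions, so its running time grows linearly with n, while it needs only one extra variable, so its extra space requirement is constant.

/* Sum of n elements: about n additions, so T(n) grows linearly with n;
   only one extra variable is used, so the extra space S(n) is constant. */
long sum(const int a[], int n)
{
    long total = 0;            /* constant extra storage */
    for (int i = 0; i < n; i++)
        total += a[i];         /* executed n times       */
    return total;
}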
FLOWCHART
1. The flowchart is a type of diagram (graphical or symbolic) that represents an algorithm or process.
2. Each step in the process is represented by a different symbol and contains a short description of the
process step.
3. The flowchart symbols are linked together with arrows showing the process flow direction.
4. A flowchart typically shows the flow of data in a process.
5. Flowcharts are used in analyzing, designing, documenting or managing a process or program in
various fields.
6. Flowcharts are generally drawn in the early stages of formulating computer solutions.
7. Flowcharts often facilitate communication between programmers and business people.
Example: Draw a flowchart to find the largest among three different numbers entered by the user.
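The decisions in such a flowchart can also be written out in code. The sketch below is one possible C version, assuming the three numbers are read from the user; it is only an illustration of the flowchart's logic, not part of the original example.

#include <stdio.h>

int main(void)
{
    double a, b, c;
    printf("Enter three different numbers: ");
    scanf("%lf %lf %lf", &a, &b, &c);

    /* The flowchart's decision boxes: compare a, b and c in turn. */
    if (a > b && a > c)
        printf("%.2f is the largest\n", a);
    else if (b > c)
        printf("%.2f is the largest\n", b);
    else
        printf("%.2f is the largest\n", c);
    return 0;
}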
Flowchart Symbols
1. Terminator: “Start” or “End”
2. Process
3. Decision: Yes/No question or True/False
4. Connector: jump in the process flow
5. Data: data input or output (I/O)
6. Delay
7. Arrow: flow of control in a process.
Introduction to Programming
A program is an organized list of instructions that, when executed, causes the computer to behave in a predetermined manner. Without programs, computers are useless.
A program is like a recipe. It contains a list of ingredients (called variables) and a list of directions
(called statements) that tell the computer what to do with the variables. The variables can represent
numeric data, text, or graphical images.
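A minimal sketch of this idea in C (the variable names are illustrative): the variables hold the "ingredients" and the statements are the "directions" that tell the computer what to do with them.

#include <stdio.h>

int main(void)
{
    int price = 40;                /* variables: the "ingredients"  */
    int quantity = 3;
    int total = price * quantity;  /* statements: the "directions"  */
    printf("Total cost: %d\n", total);
    return 0;
}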
There are many programming languages -- C, C++, Pascal, BASIC, FORTRAN, COBOL, and LISP are
just a few. These are all high-level languages. One can also write programs in low-level languages
called assembly languages, although this is more difficult. Low-level languages are closer to the
language used by a computer, while high-level languages are closer to human languages.
Characteristics of a good programming language:
1. Readability: A good high-level language will allow programs to be written in ways that resemble a quasi-English description of the underlying algorithms. If care is taken, the coding may be done in a way that is essentially self-documenting.
2. Portability: High-level languages, being essentially machine independent, make it possible to develop portable software.
3. Generality: Most high-level languages allow the writing of a wide variety of programs, thus
relieving the programmer of the need to become an expert in many diverse languages.
4. Brevity: The language should allow an algorithm to be implemented with a small amount of code. Programs expressed in high-level languages are often considerably shorter than their low-level equivalents.
5. Error checking: Being human, a programmer is likely to make many mistakes in the development of
a computer program. Many high-level languages enforce a great deal of error checking both at compile-
time and at run-time.
6. Cost: The ultimate cost of a programming language is a function of many of its characteristics.
7. Familiar notation: A language should use a familiar notation, so that it can be understood by most programmers.
8. Quick translation: It should admit quick translation.
9. Efficiency: It should permit the generation of efficient object code.
10. Modularity: It is desirable that programs can be developed in the language as a collection of separately compiled modules, with appropriate mechanisms for ensuring self-consistency between these modules (see the sketch after this list).
11. Wide availability: The language should be widely available, and it should be possible to provide translators for all the major machines and all the major operating systems.
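As an informal sketch of the modularity point above (file names and the function area are illustrative assumptions, not from the text), a C program can be split into separately compiled modules that share a header; the shared header is the mechanism that keeps the modules consistent with each other.

/* geometry.h -- the shared interface                                   */
double area(double width, double height);

/* geometry.c -- compiled separately, e.g.  cc -c geometry.c            */
#include "geometry.h"
double area(double width, double height) { return width * height; }

/* main.c -- compiled separately and linked, e.g.  cc main.c geometry.o */
#include <stdio.h>
#include "geometry.h"

int main(void)
{
    printf("%.1f\n", area(3.0, 4.0));   /* prints 12.0 */
    return 0;
}

Because both modules include geometry.h, the compiler can check that every call to area matches its declared prototype.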
Difference between Procedure Oriented Programming (POP) & Object-Oriented Programming (OOP)