
Chap 1

Introduction to Algorithms
Data: Data is a fact, a raw fact, or a piece of
information that can be stored in a computer.
Example: the name of a person, place, or thing;
a number of any type; a character; a string; a
photo; etc.

Data structure: A data structure is a particular way
of organizing data in a computer so that it can be
used effectively.

For example, we can store a list of items having the
same data type using the array data structure.
Variable:

A variable is a name given to a storage area that our programs can
manipulate. Each variable in C has a specific type, which determines the size and
layout of the variable's memory, the range of values that can be stored within
that memory, and the set of operations that can be applied to the variable. The
name of a variable can be composed of letters, digits, and the underscore
character. It must begin with either a letter or an underscore. Upper- and
lowercase letters are distinct because C is case-sensitive.

Type variable_list;

Local Variables:

Variables that are declared inside a function or block are called local variables.
They can be used only by statements that are inside that function or block of
code. Local variables are not known to functions outside their own. Following is
the example using local variables. Here all the variables a, b and c are local to
main() function.

#include <stdio.h>

int main ()
{
   /* local variable declaration */
   int a, b;
   int c;

   /* actual initialization */
   a = 10;
   b = 20;
   c = a + b;

   printf ("value of a = %d, b = %d and c = %d\n", a, b, c);

   return 0;
}
Global Variables
Global variables are defined outside of a function, usually at the top of the program.
Global variables hold their value throughout the lifetime of your program,
and they can be accessed inside any of the functions defined for the program. A
global variable can be accessed by any function; that is, a global variable is
available for use throughout your entire program after its declaration. Following is
an example using global and local variables:

#include <stdio.h>

/* global variable declaration */
int g;

int main ()
{
   /* local variable declaration */
   int a, b;

   /* actual initialization */
   a = 10;
   b = 20;
   g = a + b;

   printf ("value of a = %d, b = %d and g = %d\n", a, b, g);

   return 0;
}

Data Types:
In the C programming language, data types refer to an extensive system used for
declaring variables or functions of different types. The type of a variable
determines how much space it occupies in storage and how the bit pattern stored
is interpreted.

Introduction to algorithms:

 Algorithm has come to refer to a method that can be used by a computer for the solution of a problem.

 This is what makes the word algorithm different from words such as process or method.

 An algorithm is a finite set of instructions that, if followed, accomplishes a particular task.

 In addition, all algorithms must satisfy the following criteria:

1. Input - Zero or more quantities are externally supplied.
2. Output - At least one quantity is produced.
3. Definiteness – Each instruction is clear and unambiguous.
4. Finiteness – If we trace out the instructions of an algorithm, then for
all cases, the algorithm terminates after a finite number of steps.
5. Effectiveness - Every instruction must be very basic so that it can be
carried out, in principle, by a person using only pencil and paper. It
is not enough that each operation be definite as in criterion 3; it
also must be feasible.

 Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

 An algorithm is thus a sequence of computational steps that transform the input into the output.

 We can also view an algorithm as a tool for solving a well-specified computational problem.

 The statement of the problem specifies in general terms the desired input/output relationship.

 The algorithm describes a specific computational procedure for achieving that input/output relationship.

 We begin our study of algorithms with the problem of sorting a sequence of numbers into nondecreasing order.

 This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools.

 Here is how we formally define the sorting problem:

Input: A sequence of n numbers a1, a2, ….. , an.

Output: A permutation (reordering) a1', a2', ….. , an' of the input
sequence such that a1' <= a2' <= ….. <= an'. For example, given the
input sequence 31, 41, 59, 26, 41, 58, a sorting algorithm returns as
output the sequence 26, 31, 41, 41, 58, 59.

 Such an input sequence is called an instance of the sorting problem.

 In general, an instance of a problem consists of all the inputs (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.

 Sorting is a fundamental operation in computer science (many programs use it as an intermediate step), and as a result a large number of good sorting algorithms have been developed.

 Which algorithm is best for a given application depends on the number of items to be sorted, the extent to which the items are already somewhat sorted, and the kind of storage device to be used: main memory, disks, or tapes.

 An algorithm is said to be correct if, for every input instance, it halts with the correct output.

 We say that a correct algorithm solves the given computational problem.

 An incorrect algorithm might not halt at all on some input instances, or it might halt with other than the desired answer.

Insertion sort

 We start with insertion sort, which is an efficient algorithm for sorting a small number of elements.

 Insertion sort works the way many people sort a gin rummy hand.

 We start with an empty left hand and the cards face down on the table.

 We then remove one card at a time from the table and insert it into the correct position in the left hand.

 To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 1.1.

Figure 1.1 Sorting a hand of cards using insertion sort.

 Our pseudocode for insertion sort is presented as a procedure called INSERTION−SORT, which takes as a parameter an array A[1 . . n] containing a sequence of length n that is to be sorted. (In the code, the number n of elements in A is denoted by length[A].)

 The input numbers are sorted in place: the numbers are rearranged within the array A, with at most a constant number of them stored outside the array at any time.

 The input array A contains the sorted output sequence when INSERTION−SORT is finished.

INSERTION−SORT (A)
Step 1  for i = 2 to length[A]
Step 2    do key = A[i]
Step 3       j = i − 1
Step 4       while j > 0 and A[j] > key
Step 5         do A[j + 1] = A[j]
Step 6            j = j − 1
Step 7       A[j + 1] = key

 Figure 1.2 shows how this algorithm works for A = 5, 2, 4, 6, 1, 3.

 The index i indicates the "current card" being inserted into the hand.

 Array elements A[1 . . i − 1] constitute the currently sorted hand, and elements A[i + 1 . . n] correspond to the pile of cards still on the table.

 The index i moves left to right through the array.

 At each iteration of the "outer" for loop, the element A[i] is picked out of the array (step 2).

 Then, starting in position i − 1, elements greater than the key are successively moved one position to the right until the proper position for A[i] is found (steps 4−6), at which point it is inserted (step 7).

Figure 1.2 The operation of INSERTION−SORT on the array A = 5, 2, 4, 6, 1, 3.

 The position of index i is indicated by a circle.
Efficiency of algorithms

 Efficiency considerations for algorithms are inherently tied in with the design, implementation, and analysis of algorithms.

 Every algorithm must use up some of a computer's resources to complete its task.

 The resources most relevant to efficiency are central processor time (CPU time) and internal memory.

 Because of the high cost of computing resources, it is always desirable to design algorithms that are economical in the use of CPU time and memory.

 This is an easy statement to make but often difficult to follow, because of bad design habits, the inherent complexity of the problem, or both.

 As with most other aspects of algorithm design, there is no recipe for designing efficient algorithms.

 Despite there being some generalities, each problem has its own characteristics which demand specific responses to solve the problem efficiently.

 Within that framework we will make a few suggestions that can sometimes be useful in designing efficient algorithms.
 Redundant computations –

 Most of the inefficiencies that creep into the implementation of algorithms come about because redundant computations are made or unnecessary storage is used.

 The effects of redundant computations are most serious when they are embedded within a loop that must be executed many times.

 The most common mistake with loops is to repeatedly recalculate part of an expression that remains constant throughout the entire execution of the loop.

For ex:

x = 0;
for (i = 0; i <= n; i++)
{
    x = x + 0.01;
    y = (a*a*a + c)*x*x + b*b*x;
    printf("\n x = %f \t y = %f", x, y);
}

 This loop does twice the number of multiplications necessary to complete the computation.

 The unnecessary multiplications and additions can be removed by precomputing two constants, a*a*a + c and b*b, before executing the loop.

Just like:

p = a*a*a + c;
q = b*b;
x = 0;
for (i = 0; i <= n; i++)
{
    x = x + 0.01;
    y = p*x*x + q*x;
    printf("\n x = %f \t y = %f", x, y);
}

 The savings in this instance are not all that significant, but there are many other situations where they are much more significant.

 It is always most important to strive to eliminate redundancies in the innermost loops of computations, as these inefficiencies can be most costly.

 Referencing array elements –

 If care is not exercised, redundant computations can also easily creep into array processing.

 Consider, for example, two versions of an algorithm for finding the maximum and its position in an array.

Version 1:

p = 1;
for (i = 2; i < n+1; i++)
{
    if (a[i] > a[p])
        p = i;
}
max = a[p];

Version 2:

p = 1;
max = a[1];
for (i = 2; i < n+1; i++)
{
    if (a[i] > max)
    {
        max = a[i];
        p = i;
    }
}

 The version 2 implementation would normally be preferred because the conditional test (i.e. a[i] > max), which is the dominant instruction, is more efficient to perform than the corresponding test in version 1.

 It is more efficient because using the variable max requires only one memory reference, whereas using a[p] requires two memory references and an addition operation to locate the correct value for use in the test.

 Also, in version 2, the introduction of the variable max makes it clearer what task is to be accomplished.

 Inefficiency due to late termination –

 Another place inefficiencies can come into an implementation is where considerably more tests are done than are required to solve the problem at hand.

 This type of inefficiency is best illustrated by example.

 Suppose we had to linear search an alphabetically ordered list of names for some particular name.

 An inefficient implementation would be one where all names were examined even when the point in the list had been reached where it was known that the name could not occur later.

 The inefficient implementation could have the form:

while name sought <> current name and not end of file do
    get next name from list

 A more efficient implementation, which exploits the alphabetical order, would be:

while name sought > current name and not end of file do
    get next name from list
test if current name is equal to name sought
 The bubble sort algorithm can exhibit the same sort of inefficiency if care is not taken with the implementation.

 Early detection of desired output conditions –

 The bubble sort also provides us with an example of another related type of inefficiency involving termination.

 It sometimes happens, due to the nature of the input data, that the algorithm establishes the desired output condition before the general conditions for termination have been met.

 For example, a bubble sort might be used to sort a set of data that is already almost in sorted order.

 When this happens it is very likely that the algorithm will have the data in sorted order long before the loop termination conditions are met.

 It is therefore desirable to terminate the sort as soon as it is established that the data is already sorted.

 To do this all we need to do is check whether there have been any exchanges in the current pass of the inner loop.

 If there have been no exchanges in the current pass, the data must be sorted and so early termination can be applied.

 In general, we must include additional steps and tests to detect the conditions for early termination.

 However, if they can be kept inexpensive, then it is worth including them.

 That is, when early termination is possible, we always have to trade extra tests, and maybe even storage, to bring about the early termination.

 Trading storage for efficiency gains –

 A trade between storage and efficiency is often used to improve the performance of an algorithm.

 What usually happens in this type of tradeoff is that we precompute or save some intermediate results and avoid having to do a lot of unnecessary testing and computation later on.

 One strategy that is sometimes used to try to speed up an algorithm is to implement it using the least number of loops.

 While this is usually possible, it inevitably makes programs much harder to read and debug.

 It is therefore usually better to stick to the rule of having 'one loop do one job', just as we have one variable doing one job.

 When a more efficient solution to a problem is required, it is far better to try to improve the algorithm than to resort to "programming tricks" that tend to obscure what is being done.

 A clear implementation of a better algorithm is to be preferred to a "tricky" implementation of an algorithm that is not as good.

 We are now left with the task of trying to measure the efficiency of algorithms.
Analyzing algorithms:

 Analyzing an algorithm has come to mean predicting the resources that the algorithm requires.

 Occasionally, resources such as memory, communication bandwidth, or logic gates are of primary concern, but most often it is computational time that we want to measure.

 Generally, by analyzing several candidate algorithms for a problem, the most efficient one can be identified.

 Such analysis may indicate more than one viable candidate, but several inferior algorithms are usually discarded in the process.

 Before we can analyze an algorithm, we must have a model of the implementation technology that will be used, including a model for the resources of that technology and their costs.

 Analyzing even a simple algorithm can be a challenge.

 The mathematical tools required may include discrete combinatorics, elementary probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula.

 Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

 Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis.

 One immediate goal is to find a means of expression that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

 We start by presenting the INSERTION−SORT procedure with the time "cost" of each statement and the number of times each statement is executed.

 For each i = 2, 3, . . . , n, where n = length[A], we let t_i be the number of times the while loop test in step 4 is executed for that value of i.

 We assume that comments are not executable statements, and so they take no time.

 The running time of the algorithm is the sum of running times for each statement executed; a statement that takes c steps to execute and is executed n times will contribute cn to the total running time.

 To compute T(n), the running time of INSERTION−SORT, we sum the products of the cost and times columns. (Note that this summing characteristic does not necessarily hold for a resource such as memory.)

 A statement that references m words of memory and is executed n times does not necessarily consume mn words of memory in total.

 Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given.

 For example, in INSERTION−SORT, the best case occurs if the array is already sorted.
 For each i = 2, 3, . . . , n, we then find that A[j] <= key in step 4 when j has its initial value of i − 1.

 Thus t_i = 1 for i = 2, 3, . . . , n, and the best−case running time is

T(n) = c1n + c2 (n − 1) + c4 (n − 1) + c5 (n − 1) + c8 (n − 1)
= (c1 + c2 + c4 + c5 + c8)n − (c2 + c4 + c5 + c8).

 This running time can be expressed as an + b for constants a and b that depend on the statement costs; it is thus a linear function of n.

 If the array is in reverse sorted order, that is, in decreasing order, the worst case results.

 We must compare each element A[i] with each element in the entire sorted subarray A[1 . . i − 1], and so t_i = i for i = 2, 3, . . . , n; summing over all i, we find the worst-case running time of INSERTION−SORT.

 This worst−case running time can be expressed as an^2 + bn + c for constants a, b, and c that again depend on the statement costs; it is thus a quadratic function of n.
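The quadratic growth can be seen directly from the sums involved. A brief derivation (our own, using the standard arithmetic-series identity) with t_i = i:

```latex
\sum_{i=2}^{n} t_i = \sum_{i=2}^{n} i = \frac{n(n+1)}{2} - 1,
\qquad
\sum_{i=2}^{n} (t_i - 1) = \sum_{i=2}^{n} (i - 1) = \frac{n(n-1)}{2}.
```

Substituting these sums into the statement-by-statement total leaves a leading term proportional to n^2, which gives the an^2 + bn + c form.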

 Typically, as in insertion sort, the running time of an algorithm is fixed for a given input.

Worst−case and average−case analysis

 In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted.

 In searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database.

 In some searching applications, searches for absent information may be frequent.

 The "average case" is often roughly as bad as the worst case.

 Suppose that we randomly choose n numbers and apply insertion sort.

 How long does it take to determine where in subarray A[1 . . i − 1] to insert element A[i]?

 On average, half the elements in A[1 . . i − 1] are less than A[i], and half the elements are greater.

 On average, therefore, we check half of the subarray A[1 . . i − 1], so t_i = i/2.

 If we work out the resulting average−case running time, it turns out to be a quadratic function of the input size, just like the worst−case running time.

 In some particular cases, we shall be interested in the average−case or expected running time of an algorithm.

 One problem with performing an average−case analysis, however, is that it may not be apparent what constitutes an "average" input for a particular problem.

 Often, we shall assume that all inputs of a given size are equally likely.

 In practice, this assumption may be violated, but a randomized algorithm can sometimes force it to hold.
PERFORMANCE ANALYSIS

 There are many criteria upon which we can judge an algorithm.

 For instance:

1. Does it do what we want it to do?
2. Does it work correctly according to the original specifications of the task?
3. Is there documentation that describes how to use it and how it works?
4. Are procedures created in such a way that they perform logical subfunctions?
5. Is the code readable?

 These criteria are all vitally important when it comes to writing software, most especially for large systems.

 There are other criteria for judging algorithms that have a more direct relationship to performance.

 These have to do with their computing time and storage requirements.

Space/Time complexity

 The space complexity of an algorithm is the amount of memory it needs to run to completion.

 The time complexity of an algorithm is the amount of computer time it needs to run to completion.

 Performance evaluation can be loosely divided into two major phases: (1) a priori estimates and (2) a posteriori testing.

 We refer to these as performance analysis and performance measurement respectively.

Space Complexity

1. Algorithm abc (Algorithm 1.5) computes a + b + b*c + (a + b - c)/(a + b) + 4.0;
2. Algorithm Sum (Algorithm 1.6) computes the sum of a[1] through a[n] iteratively, where the a[i] are real numbers; and
3. RSum (Algorithm 1.7) is a recursive algorithm that computes the same sum.

 Algorithm abc(a, b, c) // Algorithm 1.5
{
    return a + b + b*c + (a + b - c)/(a + b) + 4.0;
}

 Algorithm Sum(a, n) // Algorithm 1.6
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}

 Algorithm RSum(a, n) // Algorithm 1.7
{
    if (n <= 0) then return 0.0;
    else return RSum(a, n - 1) + a[n];
}

 The space needed by each of these algorithms is seen to be the sum of the following components:

 A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and outputs.

 This part typically includes the instruction space (i.e., space for the code), space for simple variables and fixed-size component variables (also called aggregates), space for constants, and so on.

 A variable part that consists of the space needed by component variables whose size is dependent on the particular problem instance being solved, the space needed by referenced variables (to the extent that this depends on instance characteristics), and the recursion stack space (insofar as this space depends on the instance characteristics).

 The space requirement S(P) of any algorithm P may therefore be written as S(P) = c + S_P(instance characteristics), where c is a constant.

 When analyzing the space complexity of an algorithm, we concentrate solely on estimating S_P(instance characteristics).

 For any given problem, we need first to determine which instance characteristics to use to measure the space requirements.

 This is very problem specific, and we resort to examples to illustrate the various possibilities.

 Generally speaking, our choices are limited to quantities related to the number and magnitude of the inputs to and outputs from the algorithm.

 At times, more complex measures of the interrelationships among the data items are used.

 For Algorithm 1.5, the problem instance is characterized by the specific values of a, b, and c.

 Making the assumption that one word is adequate to store the values of each of a, b, c, and the result, we see that the space needed by abc is independent of the instance characteristics.

Time Complexity

 The time T(P) taken by a program P is the sum of the compile time and the run (or execution) time.

 The compile time does not depend on the instance characteristics.

 Also, we may assume that a compiled program will be run several times without recompilation.

 Consequently, we concern ourselves with just the run time of a program.

 This run time is denoted by t_P(instance characteristics).

 Because many of the factors t_P depends on are not known at the time a program is conceived, it is reasonable to attempt only to estimate t_P.

 If we knew the characteristics of the compiler to be used, we could proceed to determine the number of additions, subtractions, multiplications, divisions, compares, loads, stores, and so on, that would be made by the code for P.

 So we could obtain an expression for t_P(n) of the form

t_P(n) = c_a ADD(n) + c_s SUB(n) + c_m MUL(n) + c_d DIV(n) + ...

 where n denotes the instance characteristics and c_a, c_s, c_m, c_d, and so on, respectively, denote the time needed for an addition, subtraction, multiplication, division, and so on, and ADD, SUB, MUL, DIV, and so on, are functions whose values are the numbers of additions, subtractions, multiplications, and divisions that are performed when the code for P is used on an instance with characteristic n.

 Obtaining such an exact formula is in itself an impossible task, since the time needed for an addition, subtraction, multiplication, and so on, often depends on the numbers being added, subtracted, multiplied, and so on.

 The value of t_P(n) for any given n can be obtained only experimentally.

 The program is typed, compiled, and run on a particular machine.

 The execution time is physically clocked, and t_P(n) is obtained.

 Even with this experimental approach, one could face difficulties.

 In a multiuser system, the execution time depends on such factors as system load, the number of other programs running on the computer at the time program P is run, the characteristics of these other programs, and so on.

 Given the minimal utility of determining the exact number of additions, subtractions, and so on, needed to solve a problem instance with characteristics given by n, we might as well lump all the operations together (provided that the time required by each is relatively independent of the instance characteristics) and obtain a count for the total number of operations.

 We can go one step further and count only the number of program steps.

 A program step is loosely defined as a syntactically or semantically meaningful segment of a program that has an execution time that is independent of the instance characteristics.

 For example, the entire statement return a + b + b*c + (a + b - c)/(a + b) + 4.0; of Algorithm 1.5 could be regarded as a step, since its execution time is independent of the instance characteristics (this is not strictly true, since the time for a multiply and divide generally depends on the numbers involved in the operation).

 The number of steps any program statement is assigned depends on the kind of statement.

 For example, comments count as zero steps; an assignment statement which does not involve any calls to other algorithms is counted as one step; in an iterative statement such as the for, while, and repeat-until statements, we consider the step counts only for the control part of the statement.

 The control parts of for and while statements have the following forms:

for i := (expr) to (expr1) do
while ((expr)) do

 Each execution of the control part of a while statement is given a step count equal to the number of step counts assignable to (expr).

 The step count for each execution of the control part of a for statement is one, unless the counts attributable to (expr) and (expr1) are functions of the instance characteristics.

 In this latter case, the first execution of the control part of the for has a step count equal to the sum of the counts for (expr) and (expr1) (note that these expressions are computed only when the loop is started).

 Remaining executions of the for statement have a step count of one, and so on.

 We can determine the number of steps needed by a program to solve a particular problem instance in one of two ways.

 In the first method, we introduce a new variable, count, into the program.

 This is a global variable with initial value 0.

 Statements to increment count by the appropriate amount are introduced into the program.

 This is done so that each time a statement in the original program is executed, count is incremented by the step count of that statement.

Some fundamental Algorithms for Exchange, Counting, Summation, etc.

Exchange

Problem: Given two variables, a and b, exchange the values assigned to them.

Algorithm development
 The problem of interchanging the values associated with two variables
involves a very fundamental mechanism that occurs in many sorting
and data manipulation algorithms.
 To define the problem more clearly we will examine a specific
example. Consider that the variables a and b are assigned values as
outlined below.
 That is,
Starting configuration

a = 721

b = 463

Target configuration

a = 463

b = 721

 If we do,
a = b;

b= a;

 It will first overwrite the value of a with the value of b, i.e. a = 463.
 Then the second statement will copy the value of a (which is 463 now) into b, i.e. b = 463.
 This won't lead to the target configuration.
 So we need one temporary variable to store one value; call it t.
t = a;

a = b;

b= t;

 This will give us our target configuration.


 First statement will do: t = 721
 Second statement will do: a = 463
 Third statement will do: b = 721
 The exchange procedure can now be outlined.

Algorithm description
1. Save the original value of a in t.
2. Assign to a the original value of b.
3. Assign to b the original value of a that is stored in t.

Pascal implementation

procedure exchange (var a, b : integer);
var t : integer;
begin {save the original value of a, then exchange a and b}
    t := a;
    a := b;
    b := t
end;

Applications: sorting algorithms.

Counting

Problem: Given a set of n students' examination marks, make a count of the
number of students that passed the examination. A pass is awarded for all
marks of 50 and above.

Algorithm development
 Generally a count must be made of the number of items in a set which
possess some particular property or which satisfy some particular
conditions.
 As a starting point for developing a computer algorithm for this
problem we can consider how we might solve a particular example by
hand.
 Suppose that we have the given set of marks
55,42,77,63,29,57,89

 In this, we have to count the number of students having marks >=50.


 The process will start from very first element of the list.
 In more detail (marks are examined in the order listed):

Marks    Counting details for passes
55       previous count = 0, current count = 1
42       previous count = 1, current count = 1
77       previous count = 1, current count = 2
63       previous count = 2, current count = 3
29       previous count = 3, current count = 3
57       previous count = 3, current count = 4
89       previous count = 4, current count = 5

 So, number of students passed = 5.

Algorithm Description
1. Prompt then read the number of marks to be processed.
2. Initialize count to zero.
3. While there are still marks to be processed repeatedly do
a. Read next mark
b. If it is a pass(i.e. >=50) then add one to count.
4. Write out total number of passes.

Pascal implementation

program passcount (input, output);
const passmark = 50;
var count {contains number of passes on termination},
    i {current number of marks processed},
    m {current mark},
    n {total number of marks to be processed} : integer;
begin {count the number of passes (>= 50) in a set of marks}
    writeln ('enter the number of marks n on a separate line, followed by the marks');
    readln (n);
    {assert: n >= 0}
    count := 0;
    i := 0;
    {invariant: count = number of marks >= passmark among the first i read AND i <= n}
    while i < n do
        begin {read next mark, test it for a pass, and update count if necessary}
            i := i + 1;
            read (m);
            if eoln (input) then readln;
            if m >= passmark then count := count + 1
        end;
    {assert: count = number of passes in the set of n marks read}
    writeln ('number of passes = ', count)
end.
Applications: all forms of counting.

Summation

Problem : Given a set of n numbers, design an algorithm that adds these


numbers and returns the resultant sum. Assume n is greater than or equal
to zero.

Algorithm Development
 The approach we need to take to formulate an algorithm to add n
numbers in a computer is different from what we would do
conventionally to solve the problem.
 Conventionally we could write the general equation
S = (a1 + a2 + a3 +…. +an)

 We can take a variable named s, set it to 0 and add each number from
given list to s.
 The core of the algorithm for summing n numbers therefore involves a
special step followed by a set of n iterative steps.
 That is,
1. Compute first sum (s=0) as special case.
2. Build each of the n remaining sums from its predecessor by an
iterative process.
3. Write out the sum of n numbers.

 The only other considerations involve the input of n, the number of


numbers to be summed, and the input of successive numbers with
each iterative step.
 Our complete algorithm can now be outlined.

Algorithm description
1. Prompt and read in the number of numbers to be summed.
2. Initialize sum for zero numbers.
3. While less than n numbers have been summed repeatedly do
a. Read in next number
b. Compute current sum by adding the number read to the most
recent sum.
4. Write out sum of n numbers.

Pascal implementation

program sum (input, output);
var i {summing loop index},
    n {number of numbers to be summed} : integer;
    a {current number to be summed},
    s {sum of n numbers on termination} : real;
begin {computes sum of n real numbers for n >= 0}
    writeln ('input n on a separate line, followed by the numbers to be summed');
    readln (n);
    {assert: n >= 0}
    i := 0;
    s := 0.0;
    {invariant: s = sum of first i numbers read AND i <= n}
    while i < n do
        begin {calculate successive partial sums}
            i := i + 1;
            read (a);
            if eoln (input) then readln;
            s := s + a
        end;
    {assert: s = sum of the n numbers read}
    writeln ('sum of n = ', n, ' numbers = ', s)
end.

Applications: average calculations, variance and least-squares calculations.

Exercises

1. Algorithm to find and return the maximum of n given numbers.
2. Selection sort
3. Tower of Hanoi
4. Summation
5. Summation of n numbers with counting
6. Matrix addition
7. Matrix addition with count statement
8. Fibonacci numbers
9. Compute x^n.
10. Sequential search.
11. Matrix multiplication
12. Push
13. Pop
14. Link representation of stack.
15. Basic queue operation
16. Searching a binary tree
17. Insertion into a binary tree.
18. Insertion into a heap.
19. Delete from heap.
20. Sorting.
21. Heapsort.
22. Union
23. Find
24. Binary search
25. Maximum and minimum.
26. Merge sort.
27. Insertion sort.
28. Selection sort
29. Prim’s minimum-cost spanning tree algorithm.
30. Kruskal’s algorithm.
31. Greedy algorithm to generate shortest paths.
32. Bellman-Ford algorithm to compute shortest paths.
33. Preorder
34. Postorder
35. Breadth first search traversal
