Algorithm Strategies A LEVEL 9A
The study of algorithms has come to be recognized as the cornerstone of computer science, and
it becomes necessary to study the various strategies and techniques in order to choose the
most appropriate one when solving a problem. Algorithm design techniques give guidance and
direction on how to create a new algorithm. Second, the study of these techniques helps us to
categorize or organize the algorithms we know, and in that way to understand them better.
Learning objectives
After studying this chapter, students should be able to:
Define the concept of recursion as a programming strategy distinct from other forms of
algorithmic decomposition
Define procedural programming techniques and their implications for logic design
Define modularization and describe its benefits
Describe identifier and module scope and the implications of global and local
designations
Define and describe the benefits of designing programs using object-oriented design
techniques
State the principles of some standard sorting and searching algorithms and compare
the different methods
Contents
I. ALGORITHM STRATEGIES
II. RECURSION
III. PROGRAMMING TECHNIQUES
IV. SOME STANDARD ALGORITHMS
This topic and many others are available on www.dzplacide.overblog.com in PDF format
Topic: Algorithm design and strategies 2 By DZEUGANG Placide
I. ALGORITHM STRATEGIES
An algorithm strategy is a general approach to solving problems algorithmically. Although
more than one technique may be applicable to a specific problem, it is often the case that an
algorithm constructed by one approach is clearly superior to equivalent solutions built using
alternative techniques.
I.1 Examples of strategies
1. Brute Force
Brute force is a straightforward approach to solving a problem based on the problem's
statement and the definitions of the concepts involved. It is considered one of the easiest
approaches to apply and is useful for solving small-size instances of a problem. Some
examples of brute force algorithms are: computing a^n (a > 0, n a nonnegative integer) by
multiplying a*a*…*a, computing n!, Selection sort, Bubble sort, Sequential search.
2. Greedy Algorithms: the "take what you can get now" strategy
The solution is constructed through a sequence of steps, each expanding a partially
constructed solution obtained so far. At each step the choice must be locally optimal – this is
the central point of this technique. Examples: Minimal spanning tree, Shortest distance in
graphs
3. Divide-and-Conquer and Decrease-and-Conquer
Also known as stepwise refinement, these are methods of designing algorithms that
(informally) proceed as follows: Given an instance of the problem to be solved, split this
into several smaller sub-instances (of the same problem), independently solve each of the
sub-instances and then combine the sub-instance solutions so as to yield a solution for the
original instance.
With the divide-and-conquer method the size of the problem instance is reduced by a factor
(e.g. half the input size), while with the decrease-and-conquer method the size is reduced by a
constant.
Examples of divide-and-conquer algorithms: Computing a^n by recursion, Binary search in
a sorted array (recursion), Mergesort algorithm, Quicksort algorithm (recursion)
Examples of decrease-and-conquer algorithms: Insertion sort, Topological sorting, Binary
Tree traversals: inorder, preorder and postorder (recursion)
4. Transform-and-Conquer
These methods work as two-stage procedures. First, the problem is modified to be more
amenable to solution. In the second stage the problem is solved.
Example: consider the problem of finding the two closest numbers in an array of numbers.
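The presorting idea can be sketched in Python (Python is used here only for illustration; the function name is ours):

```python
def closest_pair(nums):
    """Transform-and-conquer: presort, then the two closest
    numbers in the array must be adjacent in sorted order."""
    s = sorted(nums)                      # stage 1: transform (presorting)
    best = (s[0], s[1])
    for a, b in zip(s, s[1:]):            # stage 2: scan adjacent pairs
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best
```

For example, closest_pair([30, 5, 12, 9, 25]) returns (9, 12).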
II. RECURSION
a) Factorial function
Factorial is often used as the introductory example of recursion: the factorial function is
recursive, since it is defined in terms of itself. Taking the factorial of n, we have:

n! = 1                if n = 0
n! = n × (n − 1)!     if n > 0
Let's try writing our factorial function factorial(n). We want to code in the n! = n*(n - 1)!
functionality. Easy enough:
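As a sketch, the same idea in Python (Python is used here only for illustration):

```python
def factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with 0! = 1 as the base case."""
    if n > 0:
        return n * factorial(n - 1)   # recursive case
    return 1                          # base case stops the recursion
```

For example, factorial(5) returns 120.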
The Fibonacci sequence is often used to illustrate the power of dynamic programming. The
sequence is defined by the following recurrence relation:

F(0) = 0,  F(1) = 1,  F(n) = F(n − 1) + F(n − 2)  for n ≥ 2

You may remember from your study of binary trees that a nearly complete one has around
2^n nodes, where n is the height of the tree (number of levels − 1). The call tree of the naive
recursive implementation is such a tree, so the number of function calls is around 2^n, which
is exponential given that each function call is O(1). That's very expensive for such a simple
little function!
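The contrast can be made concrete with a Python sketch of the naive recursion next to a memoized (dynamic-programming) version; the function names are illustrative:

```python
from functools import lru_cache

def fib_naive(n):
    # direct translation of the recurrence: roughly 2^n calls
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # dynamic programming: each value is computed once and cached
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

fib_naive(30) already takes noticeable time, while fib_memo(30), and even fib_memo(500), return immediately.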
b) Power function
Let us consider the problem of computing x^n for x > 0 and n a natural number. Starting from
the definition,

x^n = x × x × … × x = x × x^(n−1).

Applying the brute-force and the recursive techniques respectively, we have:
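The two techniques can be sketched in Python (illustrative names; not the document's original listings):

```python
def power_brute(x, n):
    # brute force: n multiplications in a loop
    result = 1
    for _ in range(n):
        result *= x
    return result

def power_rec(x, n):
    # recursion from the definition x^n = x * x^(n-1)
    if n == 0:
        return 1
    return x * power_rec(x, n - 1)
```

Both versions perform n multiplications; only the control structure differs.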
c) Hanoi’s tower
Let's consider three vertical pegs labelled as follows: src (source), dest (destination) and int
(intermediate). On the src peg there are n discs placed in decreasing order of diameter
(the disc at the bottom is the largest one). We want to move all discs from peg src to
peg dest, using int as the intermediate peg, such that the following constraints are satisfied:
(i) On the destination peg the discs are placed in the same decreasing order
(ii) A disc can never be placed on a smaller disc
The basic idea for solving this problem is: "move n−1 discs from src to int using dest as the
intermediate peg; move the largest disc from src to dest; move the n−1 discs from int to dest
using src as the intermediate peg". This idea is expressed very easily as a recursive algorithm:
hanoi(n, src, dest, int)
   if n = 1 then
      src → dest
   else
      hanoi(n-1, src, int, dest)
      src → dest
      hanoi(n-1, int, dest, src)
   endif
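The same algorithm in runnable Python, collecting the moves in a list (a sketch; peg names and the moves list are our additions):

```python
def hanoi(n, src, dest, inter, moves=None):
    """Collect the sequence of (from_peg, to_peg) moves for n discs."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dest))              # move the single disc directly
    else:
        hanoi(n - 1, src, inter, dest, moves)  # move n-1 discs out of the way
        moves.append((src, dest))              # move the largest disc to dest
        hanoi(n - 1, inter, dest, src, moves)  # stack n-1 discs on the largest
    return moves
```

For n discs the algorithm produces 2^n − 1 moves, e.g. 7 moves for n = 3.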
Returning to the factorial function, the recursive algorithm is:

factorial(n)
   if n > 0 then
      return n * factorial(n-1)
   else
      return 1
   endif
end

Factorial(0) = 1. The call factorial(0) is the base case: it does not invoke any further
function call, so it is the point where the call stack stops growing.
III. PROGRAMMING TECHNIQUES
a) Variable scope
The scope of a variable is the region of code within which a variable is visible. Variable
scoping helps avoid variable naming conflicts. The concept is intuitive: two functions can
both have arguments called x without the two x's referring to the same thing. Similarly there
are many other cases where different blocks of code can use the same name without referring
to the same thing. Most programming languages support global and local scope with the
newer programming languages also supporting block scope. Scope can vary by programming
language so it is difficult to speak of scope in universal terms.
Local: these variables exist only inside the specific function that creates them. They are
unknown to other functions and to the main program. As such, they are normally implemented
using a stack. Local variables cease to exist once the function that created them completes,
and they are recreated each time the function is executed or called.
Global: these variables can be accessed (i.e. known) by any function comprising the program.
They are implemented by associating memory locations with variable names, and they are not
recreated if the function is called again.
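The distinction can be sketched in Python, where a function must declare `global` to rebind a global name (variable names here are illustrative):

```python
counter = 0              # global: visible to every function in this module

def increment():
    global counter       # required to rebind the global name
    counter += 1

def local_demo():
    x = "local"          # local: exists only during this call
    return x

increment()
increment()
# counter is now 2; the local name x no longer exists at this point
```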
b) Passing arguments to functions
There are different ways of passing parameters to a function, such as call by value and call by
reference. Whenever we call a function, its sequence of executable statements is executed. The
pieces of information we pass to the function for processing are called arguments (or
parameters).
Call by value: a copy of the variable's value is sent to the function, so the actual parameter
cannot be changed by the function; only the copy going into the procedure can. This is the
mechanism used by both C and Java. Note that this mechanism is also used for passing
objects, where a reference to the object is passed by value.
Call by reference: here we pass the actual address of the variable to the called function.
Any update made inside the called function modifies the original copy, since we are
directly modifying the content of the exact memory location.
Example

procedure modify(input)
begin
   input = "red"
end

Algorithm
begin
   x = "blue"
   modify(x)
   print(x)   {"blue" indicates the variable was passed by value, "red"
               indicates the variable was passed by reference}
end
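Python does not offer C-style call by reference, but the same contrast can be sketched: rebinding a parameter behaves like call by value, while mutating a shared object behaves like call by reference (function names are ours):

```python
def modify_rebind(s):
    s = "red"            # rebinds the local name only; the caller is untouched

def modify_mutate(items):
    items[0] = "red"     # mutates the shared list; the caller sees the change

x = "blue"
modify_rebind(x)         # x is still "blue" afterwards

colours = ["blue"]
modify_mutate(colours)   # colours is now ["red"]
```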
A module is a component of a larger system that interacts with the rest of the system in a
simple and well-defined manner.
Object Oriented Programming (OOP) is a style of programming that identifies and groups
variables and modules into class objects. The class object represents a “thing” like a
customer, a purchase, a payment and expresses each in terms of their data (state) and modules
(behavior). The class object is stored in a file where it can then be accessed by the program's
logic and, more importantly, be reused by other programs.
Benefits of Reusability
OOP concepts
Term Definition
Aggregation Objects that are made up of other objects are known as aggregations. The
relationship is generally of one of two types:
Composition – the object is composed of other objects. This form of
aggregation is a form of code reuse. E.g. A Car is composed of
Wheels, a Chassis and an Engine
Collection – the object contains other objects. E.g. a List contains
several Items; A Set several Members.
Attribute A characteristic of an object. Collectively the attributes of an object describe
its state. E.g. a Car may have attributes of Speed, Direction, Registration
Number and Driver.
Class The definition of objects of the same abstract data type. In Java class is the
keyword used to define new types.
Encapsulation The combining together of attributes (data) and methods (behaviour) into a
single unit, the class.
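These concepts can be sketched with a small Python class (the document mentions Java's `class` keyword; Python is used here for consistency with the other sketches, and the Car attributes are taken from the table above):

```python
class Car:
    """A class groups state (attributes) and behaviour (methods) in one unit."""

    def __init__(self, registration):
        self.registration = registration
        self._speed = 0               # leading underscore: internal state

    def accelerate(self, delta):
        self._speed += delta          # state is changed only through methods

    def speed(self):
        return self._speed

car = Car("AB-123")                   # an object: an instance of the class
car.accelerate(30)
```

Encapsulation shows up in `_speed` being read and written only through the class's own methods.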
IV. SOME STANDARD ALGORITHMS
A sorting algorithm is an algorithm that puts the elements of a list in a certain order. We can
distinguish many sorting algorithms: Bubble sort, Insertion sort, Selection sort, Heapsort,
Mergesort, Quicksort, …
1) Selection sort
The principle of Selection Sort is to find the smallest element of the data sequence (from
index 0 to n-1) and move it to the beginning of the sequence. This procedure is then applied
to the still unsorted areas (1 to n-1, 2 to n-1 and so on), until the area from n-2 to n-1 has
been sorted. At that point the entire data sequence is sorted. To get the smallest element into
the right position, simply exchange it with the first element of the (sub-)sequence.
Example:
Unsorted Sequence   8 4 1 5 4
Step 1              8 4 1 5 4
Step 2              1 4 8 5 4
Step 3              1 4 8 5 4
Step 4              1 4 4 5 8
Result              1 4 4 5 8

ALGORITHM
for i ← 0 to n-2 do
   min ← i
   for j ← (i + 1) to n-1 do
      if A[j] < A[min] then
         min ← j
   swap A[i] and A[min]
Complexity analysis
Selection sort is not difficult to analyze compared to other sorting algorithms, since none of
the loops depend on the data in the array. Selecting the lowest element requires scanning all n
elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding
the next lowest element requires scanning the remaining n − 1 elements, and so on, for
(n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1)/2 ∈ Θ(n²) comparisons (see arithmetic progression).
Each scan requires at most one swap, for at most n − 1 swaps in total (the final element is
already in place).
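The pseudocode above translates directly into Python (a sketch; it sorts the list in place):

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        m = i                          # index of the smallest element so far
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]        # one swap per pass
    return a
```

For example, selection_sort([8, 4, 1, 5, 4]) returns [1, 4, 4, 5, 8].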
2) Insertion sort
The data sequence is divided into two sections: an already sorted target sequence and a still
unsorted source sequence. The target sequence initially consists of the first element only.
However, it grows by one element per iteration.
In every iteration the first element of the current source sequence is compared with the
elements of the target sequence. Once its correct position is found, all elements of the target
sequence to the right of that position are moved one place to the right, the element is inserted
at that position, and the iteration stops. Sorting is finished once all elements belong to the
target sequence.
ALGORITHM
insertionSort(array A)
begin
   for i := 1 to length[A]-1 do
   begin
      value := A[i];
      j := i-1;
      while j ≥ 0 and A[j] > value do
      begin
         A[j + 1] := A[j];
         j := j-1;
      end;
      A[j+1] := value;
   end;
end;

Example (the target sequence is shown in brackets):
Unsorted Sequence   8 4 1 5 4
Step 1              [8] 4 1 5 4
Step 2              [4 8] 1 5 4
Step 3              [1 4 8] 5 4
Step 4              [1 4 5 8] 4
Result              [1 4 4 5 8]
Complexity analysis
In the worst case each element is compared with every element of the target sequence, giving
Θ(n²). The actual run time can be better, since the inner loop may terminate early; on already
sorted input, insertion sort runs in Θ(n).
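The pseudocode above, rendered as runnable Python (a sketch; it sorts the list in place):

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        value = a[i]                   # first element of the source sequence
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]            # shift larger elements one place right
            j -= 1
        a[j + 1] = value               # insert into the sorted target sequence
    return a
```

For example, insertion_sort([8, 4, 1, 5, 4]) returns [1, 4, 4, 5, 8].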
3) Bubble sort
The name of this algorithm is meant to indicate that the elements "rise" like bubbles. All
elements are compared with their successors; if the element with the lower index is the
greater one, the elements are exchanged. Once the entire sequence has been processed this
way, the whole process is repeated, until there has been an iteration during which no exchange
took place. Then the data is sorted.
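The repeat-until-no-exchange idea can be sketched in Python with a flag (illustrative names):

```python
def bubble_sort(a):
    n = len(a)
    swapped = True
    while swapped:                     # repeat until a pass makes no exchange
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:        # lower-index element is the greater one
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a
```

For example, bubble_sort([8, 4, 1, 5, 4]) returns [1, 4, 4, 5, 8]; on already sorted input the first pass makes no exchange and the loop stops immediately.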
4) Merge sort
The unsorted sequence is divided into two sub-sequences. These, again, are divided into two
sub-sub-sequences, and so on. The smallest partitions consist of just one element each; thus,
each of them can be regarded as sorted within its borders. Now neighbouring partitions are
merged to larger sub-sequences (that's the source of the name of this algorithm), however not
before the elements have been correctly ordered. The resulting sub-sequences are, again,
merged until finally the entire data sequence has been reunited.
Example:
Unsorted Sequence   8 4 1 5 4
Division 1          8 4 1 | 5 4
Division 2          8 4 | 1 | 5 | 4
Division 3          8 | 4 | 1 | 5 | 4
Merge 1             4 8 | 1 | 5 | 4
Merge 2             1 4 8 | 4 5
Merge 3             1 4 4 5 8
Result              1 4 4 5 8

ALGORITHM
MergeSort(Array(First..Last))
Begin
   If Array contains only one element Then
      Return Array
   Else
      Mid = (Last + First) / 2
      LHA = MergeSort(Array(First..Mid))
      RHA = MergeSort(Array(Mid+1..Last))
      ResultArray = Merge(LHA, RHA)
      Return ResultArray
   EndIf
End MergeSort
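The same algorithm in Python, including the Merge step the pseudocode calls (a sketch):

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]   # append whichever side remains

def merge_sort(a):
    if len(a) <= 1:                     # one-element partition: already sorted
        return a
    mid = len(a) // 2                   # divide into two sub-sequences
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```

For example, merge_sort([8, 4, 1, 5, 4]) returns [1, 4, 4, 5, 8]; note that it returns a new list rather than sorting in place.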
5) Quicksort
Quicksort is a fast and recursive sorting algorithm, which is used not only for educational
purposes, but widely applied in practice. On the average, it has O(n log n) complexity,
making quicksort suitable for sorting big data volumes.
Algorithm
The divide-and-conquer strategy is used in quicksort. The recursion step is described below:
1. Choose a pivot value. The pivot can be any value in the range of the sorted values,
even if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot
go to the left part of the array and all elements greater than the pivot go to the
right part. Values equal to the pivot can stay in either part. Notice that the array
may be divided into unequal parts.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.
QUICKSORT(A, p, r)
{
   if p < r
   {
      q = PARTITION(A, p, r)
      QUICKSORT(A, p, q-1)
      QUICKSORT(A, q+1, r)
   }
}

PARTITION(A, p, r)
{
   x = A[r]              // pivot: grab last element
   i = p - 1             // index of last element ≤ pivot; initially before array
   for j = p to r-1      // inspect all elements but pivot
   {
      if A[j] ≤ x        // move only elements ≤ pivot to left region
      {
         i = i + 1
         swap A[i] and A[j]
      }
   }
   swap A[i+1] and A[r]  // put pivot in correct place
   return i+1
}
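The same pseudocode in runnable Python (a sketch; it sorts the list in place, with the last element as pivot):

```python
def partition(a, p, r):
    x = a[r]                         # pivot: last element
    i = p - 1                        # index of last element <= pivot
    for j in range(p, r):
        if a[j] <= x:                # move elements <= pivot to the left region
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]  # put the pivot in its final place
    return i + 1

def quicksort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)       # sort the left part
        quicksort(a, q + 1, r)       # sort the right part
    return a
```

For example, quicksort([8, 4, 1, 5, 4]) returns [1, 4, 4, 5, 8].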
Searching for data is one of the fundamental fields of computing. Often, the difference
between a fast program and a slow one is the use of a good algorithm for the data set.
1) Linear searching
A linear search is the most basic search algorithm. It sequentially moves through the
collection (or data structure) looking for a matching value.
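A minimal sketch in Python, returning the index of the first match or -1 when the key is absent (the -1 convention is our choice):

```python
def linear_search(items, key):
    for i, item in enumerate(items):
        if item == key:
            return i                 # index of the first match
    return -1                        # key not present
```

For example, linear_search([8, 4, 1, 5, 4], 5) returns 3.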
Insertion Sort:
Average Case / Worst Case: Θ(n²); the worst case happens when the input is sorted in
descending order
Best Case: Θ(n); when the input is already sorted
No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case
No. of swaps: Θ(n²) in the worst/average case & 0 in the best case
Selection Sort:
Average Case / Worst Case / Best Case: Θ(n²)
No. of comparisons: Θ(n²)
No. of swaps: Θ(n) in the worst/average case (at most one per pass) & 0 in the best case
Merge Sort:
Average Case / Worst Case / Best Case: Θ(n log n); it does not matter at all whether the
input is sorted or not
No. of comparisons: Θ(n + m) in the worst case & Θ(n) in the best case, assuming we are
merging two arrays of sizes n & m where n < m
No. of swaps: no swaps! (but it requires extra memory; it is not an in-place sort)
Bubble Sort:
Worst Case: Θ(n²)
Best Case: Θ(n); on already sorted input
No. of comparisons: Θ(n²) in the worst case & Θ(n) in the best case
No. of swaps: Θ(n²) in the worst case & 0 in the best case
Quicksort:
Worst Case: O(n²); this happens when the pivot is always the smallest (or the largest)
element
Best Case: O(n log n); when the pivot is the median of the array
Average Case: O(n log n)
Linear Search:
Worst Case: Θ(n); when the search key is not present or is the last element
Best Case: Θ(1); when the key is the first element
No. of comparisons: Θ(n) in the worst case & 1 in the best case
Binary Search:
Worst Case / Average Case: Θ(log n)
Best Case: Θ(1); when the key is the middle element
No. of comparisons: Θ(log n) in the worst/average case & 1 in the best case
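Binary search, mentioned earlier under divide-and-conquer, halves the search range at each step; an iterative sketch in Python (the array must already be sorted, and -1 for "not found" is our convention):

```python
def binary_search(a, key):
    """a must be sorted in ascending order."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1             # discard the left half
        else:
            hi = mid - 1             # discard the right half
    return -1                        # key not present
```

Each iteration discards half of the remaining range, hence the Θ(log n) bound above.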