
Ministry of Secondary Education | Republic of Cameroon
Progressive Comprehensive High School | Peace – Work – Fatherland
PCHS Mankon – Bamenda | School Year: 2014/2015
Department of Computer Studies

ALGORITHM DESIGN AND STRATEGIES

Class: Comp. Sc. A/L | By: DZEUGANG PLACIDE

The study of algorithms has come to be recognized as the cornerstone of computer science, and it is therefore necessary to study the various strategies and techniques in order to choose the most appropriate one when solving a problem. Algorithm design techniques give guidance and direction on how to create a new algorithm. They also help us to categorize and organize the algorithms we know, and in that way to understand them better.

Learning objectives
After studying this chapter, the student should be able to:
 Define the concept of recursion as a programming strategy distinct from other forms of algorithmic decomposition
 Define procedural programming techniques and their implications for logic design
 Define modularization and describe its benefits
 Describe identifier and module scope and the implications of global and local designations
 Define and describe the benefits of designing programs using object-oriented design techniques
 State the principles of some standard sorting and searching algorithms and compare the different methods

Contents
I. ALGORITHM STRATEGIES
II. RECURSION
III. PROGRAMMING TECHNIQUES
IV. SOME STANDARD ALGORITHMS


I. ALGORITHM STRATEGIES
An algorithm strategy is a general approach to solving problems algorithmically. Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques.
I.1 Examples of strategies
1. Brute Force
Brute force is a straightforward approach to solving a problem based on the problem's statement and the definitions of the concepts involved. It is considered one of the easiest approaches to apply and is useful for solving small-size instances of a problem. Some examples of brute-force algorithms are: computing aⁿ (a > 0, n a nonnegative integer) by multiplying a*a*…*a, computing n!, selection sort, bubble sort, sequential search.
2. Greedy Algorithms: the "take what you can get now" strategy
The solution is constructed through a sequence of steps, each expanding the partially constructed solution obtained so far. At each step the choice must be locally optimal; this is the central point of the technique. Examples: minimal spanning tree, shortest distances in graphs. (A small illustrative sketch follows below.)
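
As a small illustration of the locally optimal choice idea, here is a minimal Python sketch of greedy coin change, a classic textbook example that is not among those listed above; the denominations and function name are chosen for illustration. With denominations such as 1, 5, 10 and 25 the greedy choice happens to give an optimal answer, though this is not true of every coin system.

def greedy_change(amount, denominations=(25, 10, 5, 1)):
    """Greedily take the largest coin that still fits, repeatedly."""
    coins = []
    for coin in sorted(denominations, reverse=True):  # locally optimal choice
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]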
3. Divide-and-Conquer, Decrease and conquer
These are methods of designing algorithms that (informally) proceed as follows: given an instance of the problem to be solved, split it into several smaller sub-instances (of the same problem), independently solve each of the sub-instances, and then combine the sub-instance solutions so as to yield a solution for the original instance.
With the divide-and-conquer method the size of the problem instance is reduced by a factor (e.g. half the input size), while with the decrease-and-conquer method the size is reduced by a constant.
Examples of divide-and-conquer algorithms: computing aⁿ by recursion, binary search in a sorted array (recursion), the mergesort algorithm, the quicksort algorithm (recursion).
Examples of decrease-and-conquer algorithms: insertion sort, topological sorting, binary tree traversals: inorder, preorder and postorder (recursion).
4. Transform-and-Conquer

These methods work as two-stage procedures. First, the problem is modified to be more amenable to solution; in the second stage the (transformed) problem is solved.
Example: consider the problem of finding the two closest numbers in an array of numbers. Rather than comparing every pair, first sort the array; the two closest numbers must then be adjacent, so a single scan of neighbouring pairs finds them.
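
A minimal Python sketch of this transform-and-conquer idea, assuming the array holds at least two numbers: sorting first is the transformation stage, after which a single scan of adjacent pairs solves the problem.

def two_closest(numbers):
    """Find the two closest numbers by sorting first, then scanning neighbours."""
    s = sorted(numbers)                 # stage 1: transform (sort)
    best = (s[0], s[1])
    for a, b in zip(s, s[1:]):          # stage 2: adjacent pairs only
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best

print(two_closest([8, 4, 1, 5, 4]))  # (4, 4)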

I.2 Design approaches


Different approaches can be used to build an algorithm.

a) Top-down design: The top-down approach is essentially the breaking down of a problem to gain insight into its compositional subproblems. The top-down technique logically progresses from the initial instance down to the smallest sub-instances via intermediate sub-instances.
b) Bottom-up design: As you might guess, with this approach the details come first; it is the opposite of the top-down approach. In a bottom-up technique, the smallest sub-instances are explicitly solved first, and their results are used to construct solutions to progressively larger sub-instances.
II. RECURSION
A recursive function is a function that calls itself during its execution. The main
characteristics of a recursive algorithm are:
- Stopping condition: it specifies the situation in which the result can be obtained directly, without the need for a recursive call.
- Recursive call: the algorithm calls itself at least once. The values of the parameters along the cascade of calls must eventually lead to the stopping condition.

II.1 An example of recursive function


In general, problems that are defined in terms of themselves are good candidates for recursive
techniques. The standard example used by many computer science textbooks is the factorial
function.
Let's consider the problem of computing n! for n a natural number. Starting from the definition n! = n × (n − 1) × (n − 2) × … × 2 × 1, and applying the brute-force technique (also known as the iterative technique), we obtain the following algorithm. The factorial function describes the operation of multiplying a number by all the positive integers smaller than it. For example, 5! = 5*4*3*2*1, and 9! = 9*8*7*6*5*4*3*2*1.

factorial(n)
  f ← 1
  for i from n to 1 do
    f ← f * i
  endfor
  return f

We now see why factorial is often the introductory example for recursion: the factorial function is recursive; it is defined in terms of itself. Taking the factorial of n, we have:
n! = 1                if n = 0
n! = n × (n − 1)!     if n > 0

Let's try writing our factorial function factorial(n). We want to implement the n! = n*(n − 1)! definition. Easy enough:


factorial(n)
  if (n = 0) then
    f ← 1
  else
    f ← n * factorial(n-1)
  endif
  return f

Let's test it to make sure it is working properly:

factorial(3) = 3 * factorial(2)
             = 3 * 2 * factorial(1)
             = 3 * 2 * 1 * factorial(0)
             = 3 * 2 * 1 * 1 = 6
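
For concreteness, here is the same function as a runnable Python sketch (the handout's own pseudocode translated literally; Python is used here only for illustration):

def factorial(n):
    """Recursive factorial: stopping condition n == 0, recursive call otherwise."""
    if n == 0:
        return 1                     # stopping condition
    return n * factorial(n - 1)      # recursive call on a smaller instance

print(factorial(3))  # 6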

II.2 Other examples of recursive functions

a) The Fibonacci sequence

The Fibonacci sequence is often used to illustrate the power of dynamic programming. The
sequence is defined by the following recurrence relation:

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2)

This very easily translates into a recursive function:

Fibonacci(n)
  if ((n = 0) or (n = 1)) then
    fib ← n
  else
    fib ← Fibonacci(n-1) + Fibonacci(n-2)
  endif
  return fib

Seems simple enough. What is the running time of Fibonacci()? Consider the call Fibonacci(4): it spawns a nearly complete binary tree of recursive calls. You may remember from your study of binary trees that a nearly complete one has around 2ⁿ nodes, where n is the height of the tree (number of levels − 1). That means the number of function calls is around 2ⁿ, which is exponential, given each function call is O(1). That's very expensive for such a simple little function!
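
The opening remark about dynamic programming can be made concrete: caching already-computed values turns the exponential recursion into a linear one. A minimal Python sketch of this standard memoisation fix (not spelled out in the handout):

def fibonacci(n, memo={}):
    """Fibonacci with memoisation: each F(k) is computed only once."""
    if n in memo:
        return memo[n]               # reuse a previously computed value
    if n <= 1:
        return n                     # stopping condition: F(0) = 0, F(1) = 1
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

print(fibonacci(40))  # 102334155, in about 40 calls instead of about 2^40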

b) Power function

Let's consider the problem of computing xⁿ for x > 0 and n a natural number. Starting from the definition,

xⁿ = x * x * … * x = x * xⁿ⁻¹.

Applying the brute force and the recursive techniques respectively, we have

power1(real x, integer n)      // brute force (iterative)
  p ← 1
  for i from 1 to n do
    p ← p * x
  endfor
  return p

power2(real x, integer n)      // recursive
  if n = 0 then
    return 1
  else
    return x * power2(x, n-1)
  endif
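
Section I.1 listed computing aⁿ by recursion among the divide-and-conquer examples; the idea there is to halve the exponent instead of decreasing it by one. A hedged Python sketch of that variant (exponentiation by squaring):

def power(x, n):
    """Divide-and-conquer power: reduce the exponent by half on each call."""
    if n == 0:
        return 1                     # stopping condition: x^0 = 1
    half = power(x, n // 2)          # one sub-instance of half the size
    if n % 2 == 0:
        return half * half           # x^n = (x^(n/2))^2 for even n
    return x * half * half           # one extra factor of x for odd n

print(power(2, 10))  # 1024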

c) Hanoi’s tower


Let's consider three vertical pegs labelled as follows: src (source), dest (destination) and int (intermediate). On the src peg there are n discs placed in decreasing order of their diameter (the disc at the bottom is the largest one). We want to move all the discs from the peg src to the peg dest, using int as the intermediate peg, such that the following constraints are satisfied:

(i) On the destination peg the discs are placed in the same decreasing order
(ii) A disc can never be placed on a smaller disc

The basic idea for solving this problem is: "move n-1 discs from src to int using dest as the intermediate peg; move the largest disc from src to dest; move the n-1 discs from int to dest using src as the intermediate peg". This idea is expressed very easily as a recursive algorithm.

hanoi(n, src, dest, int)
  if n = 1 then
    src → dest
  else
    hanoi(n-1, src, int, dest)
    src → dest
    hanoi(n-1, int, dest, src)
  endif
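
A runnable Python rendering of the same algorithm, printing each move (the peg names are just labels):

def hanoi(n, src, dest, inter):
    """Move n discs from src to dest, using inter as the intermediate peg."""
    if n == 1:
        print(src, "->", dest)           # move the single remaining disc
    else:
        hanoi(n - 1, src, inter, dest)   # move n-1 discs out of the way
        print(src, "->", dest)           # move the largest disc
        hanoi(n - 1, inter, dest, src)   # move the n-1 discs onto it

hanoi(3, "src", "dest", "int")  # prints the 7 moves for three discs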

II.3 Evaluation of a recursive function using stack


In many systems, using a stack is the standard way to execute recursive functions. During program execution, as the function calls itself, a stack of return addresses (the addresses of the instructions in the caller function to which control must return after the called function finishes) is built up in memory. For example, for the factorial function below, the stack for n = 4 grows through the calls factorial(4) → factorial(3) → factorial(2) → factorial(1) → factorial(0) and then unwinds in reverse order.

factorial(n)
  if (n > 0) then
    return n * factorial(n-1)
  else
    return 1
  endif
end

Factorial(0) = 1. Factorial(0) is not put on the stack because it does not invoke any other function.
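
To make the stack behaviour visible, here is a hedged Python sketch that traces the pushes and pops for n = 4; the indentation mirrors the depth of the call stack (in this sketch even the factorial(0) call is traced, for completeness):

def factorial(n, depth=0):
    """Factorial that prints each call (push) and each return (pop)."""
    print("  " * depth + "push factorial(" + str(n) + ")")
    result = 1 if n == 0 else n * factorial(n - 1, depth + 1)
    print("  " * depth + "pop  factorial(" + str(n) + ") = " + str(result))
    return result

factorial(4)  # five pushes, then five pops in reverse order; returns 24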


III. PROGRAMMING TECHNIQUES


There exist three main types of programming techniques: unstructured programming,
structured programming and Object oriented programming.

III.1 Unstructured programming


Unstructured programming refers to writing small and simple programs consisting of only one main program. All the actions, such as providing input, processing and displaying output, are done within this one program. This style of programming is generally restricted to developing small applications; if the application becomes large, it poses real difficulties in terms of clarity of the code, modifiability and ease of use. Although this programming style is not recommended, most programmers still start out using it.

III.2 Structured programming


The programs generated using the unstructured approach suit simple and small problems. If the problem gets lengthy, this approach becomes too complex and obscure; after some time, even the programmers themselves may find it difficult to understand their own program. Hence, programming should be performed using a 'structured' (organized) approach.

Figure. Structured Programming


Using structured programming, a program is broken down into small independent tasks that
are small enough to be understood easily, without having to understand the whole program at
once. Each task has its own functionality and performs a specific part of the actual
processing. These tasks are developed independently, and each task can carry out a specified
function on its own. When these tasks are completed, they are combined together to solve the
problem.
III.2.1 Procedural programming
In procedural programming, a single program is divided into small segments called procedures (also known as functions, routines, subroutines or methods). From the main or controlling procedure, a procedure call is used to invoke the required procedure. After the called procedure finishes, the flow of control continues from the position where the call was made. The main program coordinates calls to procedures and hands over appropriate data as parameters. The data are processed by the procedures and, once the program has finished, the resulting data are displayed.


a) Variable scope
The scope of a variable is the region of code within which a variable is visible. Variable
scoping helps avoid variable naming conflicts. The concept is intuitive: two functions can
both have arguments called x without the two x‘s referring to the same thing. Similarly there
are many other cases where different blocks of code can use the same name without referring
to the same thing. Most programming languages support global and local scope with the
newer programming languages also supporting block scope. Scope can vary by programming
language so it is difficult to speak of scope in universal terms.
 Local: These variables exist only inside the specific function that creates them. They are unknown to other functions and to the main program. As such, they are normally implemented using a stack. Local variables cease to exist once the function that created them completes, and they are recreated each time the function is executed or called.
 Global: These variables can be accessed (i.e. known) by any function in the program. They are implemented by associating memory locations with variable names. They are not recreated if the function is called again.
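
A small Python illustration of the two scopes (the variable names are arbitrary):

counter = 0                # global: visible to every function in the program

def increment():
    global counter         # needed in Python to rebind the global name
    local_step = 1         # local: exists only during this call
    counter += local_step

increment()
increment()
print(counter)             # 2: the global variable survives across calls
# print(local_step)        # would raise NameError: locals vanish after the call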
b) Passing arguments to functions
We have different ways of passing parameters to a function, such as call by value and call by reference. Whenever we call a function, its sequence of executable statements is executed; the pieces of information passed to the function for processing are called arguments (or parameters).
 Call by value: The value of a variable is sent to the function. The actual parameter cannot be changed by the function, since only a copy goes into the procedure. This is the mechanism used by both C and Java. Note that Java also uses this mechanism for passing objects: the reference to the object is itself passed by value.
 Call by reference: When passing a parameter using the call-by-reference (call-by-address) scheme, we pass the actual address of the variable to the called function. Any update made inside the called function modifies the original variable, since we are directly modifying the content of the exact memory location.

[Figures: illustration of call by value; illustration of call by reference]


Example

procedure modify(input)
begin
  input = "red"
end

Algorithm
begin
  x = "blue"
  modify(x)
  print(x)   {"blue" indicates the variable was passed by value, "red" indicates it was passed by reference}
end
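
Python itself passes object references by value, which behaves like call by value for immutable objects and like call by reference for mutable ones. A short sketch of that distinction, with hypothetical names:

def rebind(s):
    s = "red"                # rebinds the local name only: caller unaffected

def mutate(items):
    items.append("red")      # mutates the shared object: caller sees the change

x = "blue"
rebind(x)
print(x)                     # blue  (behaves like call by value)

y = ["blue"]
mutate(y)
print(y)                     # ['blue', 'red']  (behaves like call by reference)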

III.2.2 Modular programming

Modular programming is the process of breaking down a problem into smaller independent tasks. These tasks can then be broken down into subtasks. Each module is designed to perform a specific function. Note that each module is in itself a small program, which can be compiled and tested individually; all modules are then combined to build the whole program. Modular programming is an important and beneficial approach to programming problems: it makes program development easier, and it can also help with future development projects.

A module is a component of a larger system that interacts with the rest of the system in a
simple and well-defined manner.

Importance of modular programming


a) Distributed development: by breaking the problem down into multiple tasks, different developers can work in parallel, which shortens the development time.
b) Code reusability: a program module can be reused in other programs. Modules or procedures can also be reused in future projects.
c) Program readability: a program organized into functions is straightforward to follow, whereas a program with no functions can be very long and hard to follow.
d) Manageable tasks: breaking a programming project down into modules makes it more manageable. These individual modules are easier to design, implement and test.
e) Maintainability: an error in the system can be traced back to a given module, and it is easier to maintain a single module than a whole program.


III.3 Object oriented Algorithms

Object Oriented Programming (OOP) is a style of programming that identifies and groups variables and modules into class objects. A class object represents a "thing", like a customer, a purchase or a payment, and expresses it in terms of its data (state) and modules (behaviour). The class object is stored in a file where it can be accessed by the program's logic and, more importantly, reused by other programs.

A main pillar of object-oriented programming is code reusability. Rather than starting from scratch with each new application, a programmer will consult libraries of existing components to see if any are appropriate as starting points for the design of a new application. Reusability is the ability of software elements to serve in the construction of many different applications.

Benefits of Reusability

 Reliability. Components built by specialists in their field are more likely to be


designed correctly and reliably
 Efficiency. The component developers are likely to be experts in their field and will
have used the best possible algorithms and data structures.
 Time Savings. By relying upon existing components there is less software to develop
and hence applications can be built quicker.
 Decreased maintenance effort. Using someone else’s components decreases the
amount of maintenance effort that the application developer needs to expend.
 Consistency. Reliance on a library of standard components will tend to spread a
consistency of design message throughout a team of programmers working on an
application.
 Investment. Reusing software will save the cost of developing similar software from
scratch.

OOP concepts

Term Definition
Aggregation Objects that are made up of other objects are known as aggregations. The
relationship is generally of one of two types:
 Composition – the object is composed of other objects. This form of
aggregation is a form of code reuse. E.g. A Car is composed of
Wheels, a Chassis and an Engine
 Collection – the object contains other objects. E.g. a List contains
several Items; A Set several Members.
Attribute A characteristic of an object. Collectively the attributes of an object describe
its state. E.g. a Car may have attributes of Speed, Direction, Registration
Number and Driver.
Class The definition of objects of the same abstract data type. In Java class is the
keyword used to define new types.
Encapsulation The combining together of attributes (data) and methods (behaviour/processes) into a single abstract data type with a public interface and a private implementation. This allows the implementation to be altered without affecting the interface.
Inheritance The derivation of one class from another so that the attributes and methods of one class are part of the definition of another class. The first class is often referred to as the base or parent class; the child is often referred to as a derived class or sub-class.
Derived classes are always 'a kind of' their base classes. Derived classes generally add to the attributes and/or behaviour of the base class. Inheritance is one form of object-oriented code reuse.
E.g. Both Motorbikes and Cars are kinds of MotorVehicles and therefore share some common attributes and behaviour, but each may add its own that are unique to that particular type.
Interface The behaviour that a class exposes to the outside world; its public face. Also
called its ‘contract’.
Method The implementation of some behaviour of an object.
Message The invoking of a method of an object. In an object-oriented application, objects send each other messages (i.e. execute each other's methods) to achieve the desired behaviour.
Object An instance of a class. Objects have state, identity and behaviour.
Overloading Allowing the same method name to be used for more than one
implementation. The different versions of the method vary according to their
parameter lists.
Polymorphism Giving an action one name that is shared up and down an object hierarchy,
with each object in the hierarchy implementing the action in a way
appropriate to itself.
E.g. Both the Plane and Car types might be able to respond to a turnLeft
message. While the behaviour is the same, the means of achieving it are
specific to each type.
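
The inheritance and polymorphism rows above can be made concrete in a few lines of Python; the class names come from the table's examples, while the method bodies are hypothetical:

class MotorVehicle:
    """Base (parent) class: shared attributes and behaviour."""
    def __init__(self, speed, direction):
        self.speed = speed           # attributes describe the object's state
        self.direction = direction

    def turn_left(self):
        raise NotImplementedError    # each subclass supplies its own behaviour

class Car(MotorVehicle):
    def turn_left(self):
        return "steer the front wheels left"

class Motorbike(MotorVehicle):
    def turn_left(self):
        return "lean into the left turn"

# Polymorphism: the same message, answered differently by each type
for vehicle in (Car(50, "N"), Motorbike(60, "N")):
    print(vehicle.turn_left())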

IV. SOME STANDARD ALGORITHMS

IV.1 Sorting algorithms

A sorting algorithm is an algorithm that puts the elements of a list in a certain order. We can distinguish many sorting algorithms: bubble sort, insertion sort, selection sort, heapsort, mergesort, quicksort, …

1) Selection sort
The principle of selection sort is to find the smallest element of the data sequence (from index 0 to n-1) and put it at the beginning of the sequence. This procedure is then applied to the still unsorted areas (1 to n-1, 2 to n-1 and so on), until the area from n-2 to n-1 has been sorted; the entire data sequence is then in sorted form. To get the smallest element to the right position, simply exchange it with the first element in the (sub-)sequence.

ALGORITHM
for i ← 0 to n-2 do
  min ← i
  for j ← (i + 1) to n-1 do
    if A[j] < A[min] then
      min ← j
  swap A[i] and A[min]

Example:
Unsorted sequence   8 4 1 5 4
Step 1 (i = 0)      1 4 8 5 4
Step 2 (i = 1)      1 4 8 5 4
Step 3 (i = 2)      1 4 4 5 8
Step 4 (i = 3)      1 4 4 5 8
Result              1 4 4 5 8

Complexity analysis

Selection sort is not difficult to analyze compared to other sorting algorithms, since none of the loops depend on the data in the array. Selecting the lowest element requires scanning all n elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n − 1 elements, and so on, for (n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1)/2 ∈ Θ(n²) comparisons (an arithmetic progression). Each of these scans requires at most one swap, for n − 1 swaps in total (the final element is already in place).
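
The pseudocode translates directly into Python; a minimal runnable version:

def selection_sort(a):
    """In-place selection sort: repeatedly move the minimum to the front."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):            # scan the unsorted area
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # one swap per pass
    return a

print(selection_sort([8, 4, 1, 5, 4]))  # [1, 4, 4, 5, 8]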

2) Insertion sort
The data sequence is divided into two sections: an already sorted target sequence and a still
unsorted source sequence. The target sequence initially consists of the first element only.
However, it grows by one element per iteration.

In every iteration the first element of the current source sequence is compared with the
elements of the target sequence in incremental order. If the first element of the source
sequence is lower than or equal to an element of the target sequence, all elements to the left
of the first element of the source sequence and to the right of the current element of the target
sequence are moved to the right, the current element is inserted to the corresponding location
in the target sequence and the iteration is stopped. Sorting is finished once all elements
belong to the target sequence.

ALGORITHM
insertionSort(array A)
begin
  for i := 1 to length[A]-1 do
  begin
    value := A[i];
    j := i-1;
    while j ≥ 0 and A[j] > value do
    begin
      A[j + 1] := A[j];
      j := j-1;
    end;
    A[j+1] := value;
  end;
end;

Example (the target sequence is the initial, already-sorted portion):
Unsorted sequence   8 4 1 5 4
Step 1              8 4 1 5 4
Step 2              4 8 1 5 4
Step 3              1 4 8 5 4
Step 4              1 4 5 8 4
Result              1 4 4 5 8


Complexity analysis

Worst-case complexity: 2 + 3 + 4 + … + n = n(n+1)/2 − 1 = n²/2 + n/2 − 1 = O(n²) (for a reverse-sorted array).

The actual run time can be better: the inner loop may terminate after fewer than i iterations.

Best case (sorted array as input): Θ(n).
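
A runnable Python version of the pseudocode above:

def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:   # shift larger elements to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value                 # insert into the sorted prefix
    return a

print(insertion_sort([8, 4, 1, 5, 4]))  # [1, 4, 4, 5, 8]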

3) Bubble sort
The name of this algorithm is supposed to indicate that the elements "rise" like bubbles. All
elements are compared with their successors; if the element with the lower index is the
greater one, the elements are exchanged. Once the entire sequence has been processed this
way, the whole process is repeated - until there's been an iteration during which no exchange
took place. Then the data is sorted. Example (the elements that are currently compared with
each other are in bold print):

Unsorted sequence      8 4 1 5 4
Step 1 (first pass)    8 4 1 5 4
                       4 8 1 5 4
                       4 1 8 5 4
                       4 1 5 8 4
Step 2 (second pass)   4 1 5 4 8
                       1 4 5 4 8
                       1 4 5 4 8   => no exchange
                       1 4 4 5 8
Step 3 (third pass)    1 4 4 5 8   => no exchange
                       1 4 4 5 8   => no exchange
                       1 4 4 5 8   => no exchange
                       1 4 4 5 8   => no exchange
Result                 1 4 4 5 8

ALGORITHM
procedure bubbleSort(A : array)
  do
    swapped := false
    for i from 0 to n - 2 do:
      if A[i] > A[i+1] then
        swap(A[i], A[i+1])
        swapped := true
      end if
    end for
  while swapped
end procedure

Complexity analysis
Bubble Sort is very slow most of the
time, except if the unsorted sequence actually is already sorted - then Bubble Sort works
faster than the other algorithms: Already after one iteration (consisting of n-1 comparisons) it
realizes that the sequence doesn't need to be sorted and stops.

Complexity: Σᵢ₌₁ⁿ Σⱼ₌ᵢ₊₁ⁿ 1 = Σᵢ₌₁ⁿ (n − i) = n² − n(n+1)/2 = n²/2 − n/2 = Θ(n²)
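
A runnable Python version with the early-exit flag from the pseudocode:

def bubble_sort(a):
    """In-place bubble sort that stops after a pass with no exchanges."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:                  # neighbours out of order
                a[i], a[i + 1] = a[i + 1], a[i]  # exchange them
                swapped = True
    return a

print(bubble_sort([8, 4, 1, 5, 4]))  # [1, 4, 4, 5, 8]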

4) Merge sort
The unsorted sequence is divided into two sub-sequences. These, again, are divided into two
sub-sub-sequences, and so on. The smallest partitions consist of just one element each; thus,
each of them can be regarded as sorted within its borders. Now neighbouring partitions are
merged to larger sub-sequences (that's the source of the name of this algorithm), however not
before the elements have been correctly ordered. The resulting sub-sequences are, again,
merged until finally the entire data sequence has been reunited.


ALGORITHM
MergeSort(Array(First..Last))
Begin
  If Array contains only one element Then
    Return Array
  Else
    Mid = (Last + First) / 2
    LHA = MergeSort(Array(First..Mid))
    RHA = MergeSort(Array(Mid+1..Last))
    ResultArray = Merge(LHA, RHA)
    Return ResultArray
  EndIf
End MergeSort

Example:
Unsorted sequence   8 4 1 5 4
Division 1          8 4 1 | 5 4
Division 2          8 4 | 1 | 5 | 4
Division 3          8 | 4 | 1 | 5 | 4
Merge 1             4 8 | 1 | 5 | 4
Merge 2             1 4 8 | 4 5
Merge 3             1 4 4 5 8
Result              1 4 4 5 8
[Figure: the divide steps and the merge steps of mergesort on an example sequence]
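
A runnable Python version of the same scheme; the Merge helper is written out here, since the pseudocode leaves it abstract:

def merge_sort(a):
    """Divide-and-conquer mergesort: split, sort both halves, merge them."""
    if len(a) <= 1:
        return a                          # a one-element array is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0               # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append whatever remains

print(merge_sort([8, 4, 1, 5, 4]))  # [1, 4, 4, 5, 8]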

5) Quicksort

Quicksort is a fast, recursive sorting algorithm, used not only for educational purposes but also widely applied in practice. On average it has O(n log n) complexity, making it suitable for sorting big data volumes.
Algorithm
The divide-and-conquer strategy is used in quicksort. Below the recursion step is described:
1. Choose a pivot value. The pivot can be any value in the range of the values being sorted, even if it is not present in the array.
2. Partition. Rearrange the elements in such a way that all elements less than the pivot go to the left part of the array and all elements greater than the pivot go to the right part. Values equal to the pivot can stay in either part. Notice that the array may be divided into unequal parts.
3. Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.


The algorithm details are as follows:

PARTITION(A, p, r)
{
  x = A[r]              // pivot: grab last element
  i = p - 1             // index of last element ≤ pivot; initially before the array
  for j = p to r-1      // inspect all elements but the pivot
  {
    if A[j] ≤ x         // move only elements ≤ pivot to the left region
    {
      i = i + 1
      swap A[i] and A[j]
    }
  }
  swap A[i+1] and A[r]  // put pivot in its correct place
  return i+1
}

QUICKSORT(A, p, r)
{
  if p < r
  {
    q = PARTITION(A, p, r)
    QUICKSORT(A, p, q-1)
    QUICKSORT(A, q+1, r)
  }
}

[Figure: worked quicksort partitioning example]
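
The same partition scheme as runnable Python (in place, last element as pivot, mirroring the pseudocode above):

def partition(a, p, r):
    """Partition a[p..r] around the pivot a[r]; return the pivot's final index."""
    x = a[r]                             # pivot: grab last element
    i = p - 1                            # boundary of the <=-pivot region
    for j in range(p, r):                # inspect all elements but the pivot
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]      # put the pivot in its correct place
    return i + 1

def quicksort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)           # sort the left part
        quicksort(a, q + 1, r)           # sort the right part
    return a

print(quicksort([8, 4, 1, 5, 4]))  # [1, 4, 4, 5, 8]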

IV.2 Searching algorithms

Searching for data is one of the fundamental fields of computing. Often, the difference
between a fast program and a slow one is the use of a good algorithm for the data set.

1) Linear searching
A linear search is the most basic search algorithm there is. It sequentially moves through your collection (or data structure) looking for a matching value.


int function LinearSearch(Array A, int Lb, int Ub, int Key)
begin
  for i = Lb to Ub do
    if A[i] = Key then
      return i;
  return -1;
end;
2) Binary search
The binary search (or half-interval search) algorithm locates the position of an item in a sorted array. Binary search works by comparing an input value to the middle element of the array. The comparison determines whether the element equals the input, is less than the input, or is greater. When the element being compared equals the input, the search stops and typically returns the position of the element. If the element is not equal to the input, a comparison is made to determine whether the input is less than or greater than the element, and the search continues in the corresponding half of the array.

Recursive algorithm

BinarySearch(A[0..N-1], value, low, high)
{
  if (high < low)
    return -1                      // not found
  mid = low + ((high - low) / 2)
  if (A[mid] > value)
    return BinarySearch(A, value, low, mid-1)
  else if (A[mid] < value)
    return BinarySearch(A, value, mid+1, high)
  else
    return mid                     // found
}

Iterative algorithm

BinarySearch(A[0..N-1], value)
{
  low = 0
  high = N - 1
  while (low <= high)
  {
    mid = low + ((high - low) / 2)
    if (A[mid] > value)
      high = mid - 1
    else if (A[mid] < value)
      low = mid + 1
    else
      return mid                   // found
  }
  return -1                        // not found
}
Comparison
Insertion sort:
 Average case / worst case: Θ(n²); the worst case happens when the input is sorted in descending order
 Best case: Θ(n); when the input is already sorted
 No. of comparisons: Θ(n²) in the worst case, Θ(n) in the best case
 No. of swaps: Θ(n²) in the worst/average case, 0 in the best case
Selection sort:
 Average case / worst case / best case: Θ(n²)
 No. of comparisons: Θ(n²)
 No. of swaps: Θ(n) in the worst/average case (at most one per pass), 0 in the best case
Merge sort:
 Average case / worst case / best case: Θ(n log n); it doesn't matter whether the input is sorted or not
 No. of comparisons: Θ(n + m) in the worst case, Θ(n) in the best case, assuming we are merging two arrays of sizes n and m where n < m
 No. of swaps: none (but it requires extra memory; it is not an in-place sort)
Bubble sort:
 Worst case: Θ(n²)
 Best case: Θ(n); on an already sorted input
 No. of comparisons: Θ(n²) in the worst case, Θ(n) in the best case
 No. of swaps: Θ(n²) in the worst case, 0 in the best case
Quicksort:
 Worst case: O(n²); this happens when the pivot is always the smallest (or the largest) element
 Best case: O(n log n); when the pivot is always the median of the array
 Average case: O(n log n)
Linear search:
 Worst case: Θ(n); the search key is not present or is the last element
 Best case: Θ(1); the key is the first element
 No. of comparisons: Θ(n) in the worst case, 1 in the best case
Binary search:
 Worst/average case: Θ(log n)
 Best case: Θ(1); when the key is the middle element
 No. of comparisons: Θ(log n) in the worst/average case, 1 in the best case

Comparing time complexities

Algorithm          | Best case  | Worst case | Average case
Binary search      | O(1)       | O(log n)   | O(log n)
Sequential search  | O(1)       | O(n)       | O(n)
Finding largest    | O(n)       | O(n)       | O(n)
Pattern matching   | O(n)       | O(mn)      | O(n)
Selection sort     | O(n²)      | O(n²)      | O(n²)
Quicksort          | O(n log n) | O(n²)      | O(n log n)
