
Design and Analysis of Algorithms
Lecture 1 & 2
Textbook
Introduction to Algorithms, 3rd Edition, Thomas H. Cormen

Other references
Introduction to the Design and Analysis of Algorithms, 3rd Edition, Anany Levitin
Course Objectives
Algorithms are the core subject in computer science when it comes to problem solving
by computers.
The main objective of this undergraduate-level course is to introduce students to the
crux of algorithm design, evaluation, and implementation.
Famous computer science problems will be discussed and various approaches to their
solutions will be compared.
Why study this subject?
Efficient algorithms lead to efficient programs.
Efficient programs sell better.
Efficient programs make better use of hardware.
Programmers who write efficient programs are more marketable than those who don’t!
Factors influencing program efficiency
Problem being solved
Programming language
Compiler
Computer hardware
Programmer ability
Programmer effectiveness
Algorithm
This course
About performance!
Because performance is the line between feasible and infeasible.
◦ E.g., if a solution is not fast enough, it cannot be adopted (fast web search).
This study allows us to develop a theoretical language widely adopted by practitioners
and provides a clean way of thinking about algorithms.
Think of performance as currency!
You need good enough performance to be able to incorporate other
features/specifications.
Data Structures Review
A data structure is a way of collecting and organizing data in
such a way that we can perform operations on these data
effectively.
A data structure is a way to store and organize data
in order to facilitate access and modifications. No
single data structure works well for all purposes, and
so it is important to know the strengths and limitations
of several of them
Data Structures Types
By Size
◦ Static Data Structure
◦ Dynamic Data Structure
By Storage
◦ Primitive Data Structure
◦ Non-Primitive Data Structure
By Nature
◦ Homogeneous Data Structure
◦ Heterogeneous Data Structure
By Access
◦ Linear Data Structure
   Sequential Access
   Random Access
◦ Non-Linear Data Structure
   Hierarchical Access
   Group Access
By Size
Static Data Structure
Static data structures are those whose size, structure, and associated memory
locations are fixed at compile time.
Dynamic Data Structure
Dynamic data structures are those which expand or shrink depending upon the
program's needs during execution; their associated memory locations can also change.
By Storage
Primitive Data Structure
Primitive data structures are the basic data structures that operate directly upon
machine instructions.
Non-Primitive Data Structure
Non-primitive data structures are more complicated data structures derived from
primitive data structures. They emphasize grouping the same or different data items
with relationships between the data items.
By Nature
Homogeneous Data Structure
In homogeneous data structures, all the elements are of the same type.
Heterogeneous Data Structure
In heterogeneous data structures, the elements may or may not be of the same type.
By Access
Linear Data Structure
In linear data structures, the data items are arranged in a linear sequence.
 Sequential Access
Sequential access requires visiting all previous locations, in sequential order, to
retrieve a given location.
 Random Access
Random (direct) access allows retrieval of any data location directly.
Non-Linear Data Structure
 Hierarchical Access
A hierarchical collection is a group of items divided into levels. An item at one level
can have successor items located at the next lower level. One common hierarchical
collection is the tree.
 Group Access
A nonlinear collection of items that are unordered is called a group. The three major
categories of group collections are sets, graphs, and networks.
Data Structures Operations
Insertion
Deletion
Searching
Traversal
Sorting
Merging
Algorithms
An algorithm is a well-defined sequence of steps for solving a problem.
It is a well-defined procedure that takes some value or
set of values as input and produces some value or set of values as output.
An algorithm is said to be correct if, for every
input instance, it halts with the correct output. We
say that a correct algorithm solves the given
computational problem.
Types of Algorithms
PROBABILISTIC ALGORITHM
In this algorithm, chosen values are used in such a
way that the probability of choosing each value is
known and controlled.
e.g. Randomized Quick Sort

HEURISTIC ALGORITHM
This type of algorithm is based largely on
optimism, often with minimal theoretical
support. Here the error cannot be controlled, but its
size may be estimated.

APPROXIMATE ALGORITHM
In this algorithm, an answer is obtained that is as
precise as required in decimal notation. In other
words, it specifies the error we are willing to
accept.
For example, two-figure accuracy, eight-figure accuracy, or
whatever is required.
ALGORITHM APPROACHES
General approaches to algorithm design
• Brute Force algorithm
• Greedy algorithm
• Recursive algorithm
• Backtracking algorithm
• Divide & Conquer algorithm
• Dynamic programming algorithm
• Randomized algorithm
Brute Force Algorithm
This is the most basic and simplest type of algorithm. A Brute Force
Algorithm is the straightforward approach to a problem, i.e., the first
approach that comes to mind on seeing the problem. More
technically, it simply iterates over every possibility available to solve the
problem.

Example:
Suppose there is a lock with a 4-digit PIN, where each digit is chosen from 0-9. Brute force
means trying all possible combinations one by one, like 0000, 0001, 0002, 0003, and so on,
until we get the right PIN. In the worst case, it will take 10,000 tries to find the right
combination.
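A minimal sketch of this idea in Python (the pin_is_correct check is a hypothetical placeholder standing in for the real lock):

    def brute_force_pin(pin_is_correct):
        # Try every 4-digit combination from "0000" to "9999" in order.
        for guess in range(10000):
            candidate = f"{guess:04d}"     # zero-padded, e.g. 7 -> "0007"
            if pin_is_correct(candidate):
                return candidate           # found the right PIN
        return None

    # Worst case: 10,000 attempts before the last candidate matches.
    print(brute_force_pin(lambda pin: pin == "9999"))  # 9999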
Recursive Algorithm
This type of algorithm is based on recursion. In recursion, a problem is solved by
breaking it into subproblems of the same type and having the algorithm call itself
again and again until the problem is solved with the help of a base condition.

Some common problems that are solved using recursive algorithms are Factorial of
a Number, Fibonacci Series, Tower of Hanoi, DFS for a Graph, etc.
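For instance, a minimal recursive sketch of the factorial function, where n == 0 is the base condition that stops the recursion:

    def factorial(n):
        # Base condition: 0! = 1.
        if n == 0:
            return 1
        # Recursive case: n! = n * (n - 1)!
        return n * factorial(n - 1)

    print(factorial(5))  # 120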
Divide and Conquer Algorithm
In Divide and Conquer algorithms, the idea is to solve the problem in two
sections: the first section divides the problem into subproblems of the same
type, and the second section solves the smaller problems independently and
then combines their results to produce the final answer to the problem.

Some common problems that are solved using Divide and Conquer algorithms are
Binary Search, Merge Sort, Quick Sort, Strassen's Matrix Multiplication, etc.
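A compact Python sketch of Merge Sort showing the divide and combine steps (a simplified version, not the book's pseudo code):

    def merge_sort(items):
        # Base case: a list of 0 or 1 elements is already sorted.
        if len(items) <= 1:
            return items
        # Divide: split the problem into two subproblems of the same type.
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Combine: merge the two sorted halves into one sorted list.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]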
Dynamic Programming Algorithms
This type of algorithm is also known as the memoization technique because the idea is
to store previously calculated results to avoid calculating them again and again. In
Dynamic Programming, we divide the complex problem into smaller overlapping
subproblems and store their results for future use.

The following problems can be solved using the Dynamic Programming algorithm:
the Knapsack Problem, Weighted Job Scheduling, the Floyd-Warshall Algorithm, etc.
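A small sketch of the memoization idea (Fibonacci is used here only as a simple illustration of overlapping subproblems; it is not one of the problems listed above):

    from functools import lru_cache

    @lru_cache(maxsize=None)        # store previously calculated results
    def fib(n):
        if n < 2:
            return n
        # Overlapping subproblems: fib(n-1) and fib(n-2) share work,
        # but each value is computed only once thanks to the cache.
        return fib(n - 1) + fib(n - 2)

    print(fib(40))  # 102334155, computed in linear rather than exponential time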
Greedy Algorithm
In the Greedy Algorithm, the solution is built part by part. The decision to
choose the next part is made on the basis of the immediate benefit it gives. It
never reconsiders the choices that were made previously.

Some common problems that can be solved with the Greedy Algorithm are
Dijkstra's Shortest Path Algorithm, Prim's Algorithm, Kruskal's Algorithm, Huffman
Coding, etc.
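The activity-selection problem is another standard greedy example (not on the slide's list, used here only as a sketch): choose as many non-overlapping activities as possible by always picking the one that finishes earliest.

    def select_activities(activities):
        # Greedy choice: repeatedly pick the activity that finishes earliest
        # among those compatible with what has already been chosen.
        chosen = []
        last_finish = float("-inf")
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)]))
    # [(1, 4), (5, 7)]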
Backtracking Algorithm
In a Backtracking Algorithm, the problem is solved in an incremental way: it is
an algorithmic technique for solving problems recursively by trying to build a
solution incrementally, one piece at a time, removing those partial solutions that fail
to satisfy the constraints of the problem at any point in time.
Some common problems that can be solved with the Backtracking Algorithm
are the Hamiltonian Cycle, the M-Coloring Problem, the N-Queens Problem, the Rat in
a Maze Problem, etc.
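A compact sketch of backtracking on the N-Queens Problem mentioned above, counting the valid placements (one of several standard formulations):

    def solve_n_queens(n):
        # Count placements of n queens so that no two attack each other.
        cols, diag1, diag2 = set(), set(), set()

        def place(row):
            if row == n:
                return 1                  # one complete, valid placement
            count = 0
            for col in range(n):
                if col in cols or row + col in diag1 or row - col in diag2:
                    continue              # square is attacked; prune this branch
                # Place a queen, recurse on the next row, then backtrack.
                cols.add(col); diag1.add(row + col); diag2.add(row - col)
                count += place(row + 1)
                cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)
            return count

        return place(0)

    print(solve_n_queens(8))  # 92 solutions to the classic 8-queens puzzle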
Randomized Algorithm
In a randomized algorithm, we use a random number at some step of the computation,
and the random choice is made so that it gives a good expected outcome.
A common example is Quicksort: in randomized Quicksort we use a random number to
select the pivot.
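A minimal sketch of randomized Quicksort (a simplified out-of-place version, shown only to illustrate the random pivot choice):

    import random

    def randomized_quicksort(items):
        # Base case: lists of length 0 or 1 are already sorted.
        if len(items) <= 1:
            return items
        # Random choice: pick the pivot uniformly at random, which gives an
        # expected O(n log n) running time regardless of the input order.
        pivot = random.choice(items)
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)

    print(randomized_quicksort([9, 3, 7, 1, 8, 2, 5]))  # [1, 2, 3, 5, 7, 8, 9]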
Algorithm Specifications
Every algorithm must satisfy the following specifications...
Input - There should be 0 or more inputs supplied
externally to the algorithm.
Output - There should be at least 1 output obtained.
Definiteness - Every step of the algorithm should be
clear and well defined.
Finiteness - The algorithm should have a finite number of
steps.
Correctness - Every step of the algorithm must
generate a correct output.
Algorithm Efficiency
An algorithm is said to be efficient and fast, if it
takes less time to execute and consumes less
memory space.
Performance Analysis
Performance analysis of an algorithm is the process of
calculating space required by that algorithm and time
required by that algorithm.
The performance of an algorithm is measured on the basis of
following properties:

◦ Space Complexity - Total amount of computer memory required by an


algorithm to complete its execution is called as space complexity of that
algorithm.
◦ Time Complexity - The time complexity of an algorithm is the total
amount of time required by an algorithm to complete its execution.
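As a small illustrative sketch (not from the slides), summing an array takes time that grows linearly with the input size while using only a constant amount of extra memory:

    def array_sum(values):
        total = 0              # O(1) extra space: a single accumulator
        for v in values:       # the loop body runs len(values) times -> linear time
            total += v
        return total

    print(array_sum([3, 1, 4, 1, 5]))  # 14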
Solving Interesting Problems
Design
◦ Create data structures & algorithms to solve problems.
◦ Prove algorithms work. Buggy algorithms are worthless!
Analysis
◦ Examine properties of algorithms: simplicity, running time, space needed, …
Algorithm Design & Analysis Process
Problems
Decision Problem
Function Problem
Search Problem
Example
Sys. A executes 1 billion (10⁹) instructions per second
Sys. B executes 10 million (10⁷) instructions per second
A uses insertion sort with 2n² steps
B uses merge sort with 50n log₂(n) steps
How much time do they take when n = 1 million?

Computer A takes 2·(10⁶)² / 10⁹ = 2000 seconds (about 33 minutes), while
Computer B takes about 50·10⁶·log₂(10⁶) / 10⁷ ≈ 100 seconds.
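A quick sketch to check these figures in code, using the numbers stated on the slide:

    import math

    n = 1_000_000
    time_a = 2 * n**2 / 1e9              # insertion sort on System A (10^9 instr/s)
    time_b = 50 * n * math.log2(n) / 1e7  # merge sort on System B (10^7 instr/s)
    print(f"A: {time_a:.0f} s, B: {time_b:.0f} s")  # A: 2000 s, B: roughly 100 s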


Algorithm Descriptions
Algorithms can be described in many forms and
ways.
High-level natural language description
Formal description (CS)
Coded form
High-level natural language description
Formal Representation - Pseudo code
In the book, algorithms are typically written in pseudo code
◦ Very clear
◦ Hides low level details
◦ Not concerned with software engineering issues.
Coded form
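In coded form, the same steps are written directly in a programming language. As an illustrative sketch (this is not the book's own code), insertion sort could be coded in Python as:

    def insertion_sort(a):
        # Grow a sorted prefix of the list one element at a time.
        for j in range(1, len(a)):
            key = a[j]
            i = j - 1
            # Shift elements of the sorted prefix that are larger than key.
            while i >= 0 and a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = key
        return a

    print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]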
Growth of Functions
The order of growth of the running time of an
algorithm gives a simple characterization of the
algorithm's efficiency and also allows us to compare
the relative performance of alternative algorithms.
Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the
mathematical bound/framing of its run-time performance.
Using asymptotic analysis, we can very well conclude the best-case,
average-case, and worst-case scenarios of an algorithm.
Usually, the time required by an algorithm falls under three
types:
◦ Best Case - the minimum time required for program execution.
◦ Average Case - the average time required for program execution.
◦ Worst Case - the maximum time required for program execution.
Asymptotic Notation
In asymptotic notation, when we want to represent the
complexity of an algorithm, we use only the most significant
terms in the complexity of that algorithm and ignore least
significant terms in the complexity of that algorithm.
Following are the commonly used asymptotic notations to
calculate the running time complexity of an algorithm.
oBig Oh Notation, Ο
oOmega Notation, Ω
oTheta Notation, θ
Big Oh Notation
The notation Ο(n) is the formal way to express the
upper bound of an algorithm's running time. It
measures the worst case time complexity or the
longest amount of time an algorithm can possibly
take to complete.
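Formally, following the standard textbook definition: f(n) = O(g(n)) if there exist
positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀.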
Omega Notation
The notation Ω(n) is the formal way to express the
lower bound of an algorithm's running time. It
measures the best case time complexity, or the minimum
amount of time an algorithm can possibly take to
complete.
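Formally: f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀.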
Theta Notation
Theta notation is used to define a tight bound on an
algorithm's running time in terms of time complexity. It
bounds the running time from both above and below, so it
characterizes the exact order of growth.
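Formally: f(n) = θ(g(n)) if there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀; equivalently, f(n) = θ(g(n)) exactly
when f(n) = O(g(n)) and f(n) = Ω(g(n)).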
Asymptotic Notations Representation
Following is a list of some common asymptotic notations that represent time
complexity.
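Typical entries in such a list, ordered from slowest- to fastest-growing, are: constant
O(1), logarithmic O(log n), linear O(n), linearithmic O(n log n), quadratic O(n²),
cubic O(n³), and exponential O(2ⁿ).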
