Advanced Algorithm Analysis Course

The document discusses advanced algorithms and their analysis. It covers asymptotic notations such as Big-O and introduces amortized analysis, which evaluates the average cost of an operation over a sequence of operations rather than the worst-case cost of a single operation.


Advanced Algorithms

Class: T.Y. [Link].

An adaptation of various ebooks and online resources for educational purposes


Advanced Algorithms

• Prerequisite: Concepts of Data Structures, Discrete Mathematics, and Analysis of Algorithms

• Objectives: To provide conceptual and practical knowledge of Advanced Algorithms

• Outcomes: On completion of the course, the learner will be able to:

• 1. Analyze the chosen algorithm.

• 2. Choose an appropriate data structure and algorithm for a given problem statement.

• 3. Design the algorithm.

Syllabus
Books Recommended
Text books:
• 1. Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein, Third Edition.

• 2. Design and Analysis of Algorithms by S. Sridhar.

• 3. Fundamentals of Computer Algorithms by Horowitz, Sahni and Rajasekaran, Galgotia.

• 4. Algorithms Design and Analysis by Harsh Bhasin, Oxford, 2015.

Reference Books:
• 1. Randomized Algorithms by Rajeev Motwani and Prabhakar Raghavan, Cambridge University Press.

• 2. Design Methods and Analysis of Algorithms by S. K. Basu, PHI.

• 3. Approximation Algorithms by Vijay V. Vazirani, Springer.

• 4. Computational Complexity, Stanford University.


Unit 1

Analysis of Algorithms Based on Time

Asymptotic Notations

The Role of Algorithms in Computing

• What are algorithms?


• Why is the study of algorithms worthwhile?
• What is the role of algorithms relative to other technologies
used in computers?
Design and Analysis of Algorithms

• Algorithm: a sequence of computational steps that transforms the input into the output; a tool for solving a well-specified computational problem
• Analysis: predict the cost of an algorithm in terms of resources and performance
• Design: design algorithms that minimize the cost

Our Machine Model

Generic Random Access Machine (RAM)


• Executes operations sequentially
• Set of primitive operations: arithmetic, logical, comparisons, function calls
• Simplifying assumption: all ops cost 1 unit
  This eliminates dependence on the speed of a particular computer, which would otherwise make algorithms impossible to verify and compare

The problem of sorting

Input: sequence ⟨a1, a2, …, an⟩ of numbers.

Output: permutation ⟨a'1, a'2, …, a'n⟩ such that a'1 ≤ a'2 ≤ … ≤ a'n.

Example:
Input: 8 2 4 9 3 6
Output: 2 3 4 6 8 9
Insertion sort

INSERTION-SORT (A, n)        ⊳ A[1 . . n]
  for j ← 2 to n
    do key ← A[j]
       i ← j − 1
       while i > 0 and A[i] > key
         do A[i+1] ← A[i]
            i ← i − 1
       A[i+1] ← key

[Figure: array A with sorted prefix A[1 . . j−1] and key being inserted at position j]
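For reference, the pseudocode above maps directly onto Python; this is a sketch using 0-based indexing rather than the pseudocode's 1-based A[1 . . n]:

```python
def insertion_sort(a):
    """Sort the list a in place, mirroring the 1-indexed pseudocode above."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix a[0..j-1] that exceed key
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([8, 2, 4, 9, 3, 6]))  # → [2, 3, 4, 6, 8, 9]
```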
What kinds of problems are solved by algorithms?
• Sorting is by no means the only computational problem for
which algorithms have been developed.
• Ex.
1. Finding good routes on which the data will travel
2. Using a search engine to quickly find pages on which particular
information resides
3. Human Genome Project has made great progress toward the goals of
identifying all the 100,000 genes in human DNA, determining the
sequences of the 3 billion chemical base pairs that make up human
DNA, storing this information in databases, and developing tools for
data analysis. Each of these steps requires sophisticated algorithms.
Analyzing Algorithms

• Analyzing an algorithm has come to mean predicting the resources that the algorithm requires
  • memory, communication bandwidth, computer hardware, computation time
• Generic computation model: Random-Access Machine (RAM)
  • instructions are executed one after another, with no concurrent operations
• The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed
Running time

• The running time depends on the input: an already sorted sequence is easier to sort.
• Major simplifying convention: parameterize the running time by the size of the input, since short sequences are easier to sort than long ones.
  TA(n) = time of A on length-n inputs
• Generally, we seek upper bounds on the running time, to have a guarantee of performance.
Complexity Analysis using RAM model
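The worked cost tables from these slides did not survive extraction. As an illustration of RAM-model accounting (not from the original slides), the sketch below instruments insertion sort to tally comparisons and element moves, each charged one unit:

```python
def insertion_sort_costs(a):
    """Count comparisons and element moves for insertion sort
    under the unit-cost RAM assumption (each primitive op costs 1)."""
    a = list(a)  # work on a copy
    comparisons = moves = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1          # one "A[i] > key" comparison
            if a[i] <= key:
                break
            a[i + 1] = a[i]           # one element move
            moves += 1
            i -= 1
        a[i + 1] = key
    return comparisons, moves

print(insertion_sort_costs([1, 2, 3, 4]))  # best case: (3, 0)
print(insertion_sort_costs([4, 3, 2, 1]))  # worst case: (6, 6) = (n(n−1)/2, n(n−1)/2)
```

The already-sorted input costs n − 1 comparisons and no moves; the reversed input forces the quadratic worst case.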
Kinds of analyses

Worst-case: (usually)
• T(n) = maximum time of algorithm on any input of size n.

Average-case: (sometimes)
• T(n) = expected time of algorithm over all inputs of size n.
• Need assumption of statistical distribution of inputs.

Best-case: (NEVER)
• Cheat with a slow algorithm that works fast on some input.
Machine-independent time

What is insertion sort's worst-case time?

BIG IDEAS:

• Ignore machine-dependent constants; otherwise it is impossible to verify and compare algorithms.
• Look at the growth of T(n) as n → ∞.

"Asymptotic Analysis"
Θ-notation

DEF:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

If f(n) is Θ(g(n)), then f(n) always lies between c1·g(n) and c2·g(n) for large values of n (n ≥ n0).

• Basic manipulations:
• Drop low-order terms; ignore leading constants.
• Example: 3n³ + 90n² − 5n + 6046 = Θ(n³)
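The Θ(n³) claim for the example can be checked numerically. The constants below (c1 = 1, c2 = 100, n0 = 5) are one illustrative witness chosen for this sketch, not the tightest possible:

```python
def f(n):
    return 3 * n**3 + 90 * n**2 - 5 * n + 6046

# One witness for f(n) = Θ(n³): c1 = 1, c2 = 100, n0 = 5.
c1, c2, n0 = 1, 100, 5
assert all(c1 * n**3 <= f(n) <= c2 * n**3 for n in range(n0, 5000))
print("bound holds for n in [5, 5000)")
```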
Asymptotic performance

When n gets large enough, a Θ(n²) algorithm always beats a Θ(n³) algorithm.

• Asymptotic analysis is a useful tool to help structure our thinking toward better algorithms.
• We shouldn't ignore asymptotically slower algorithms, however.
• Real-world design situations often call for a careful balancing of engineering objectives.

[Figure: two T(n) curves crossing at n0; beyond n0 the asymptotically faster algorithm wins]
Relations Between Θ, O, Ω
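The figure for this slide did not survive extraction; the standard relation it illustrates (Theorem 3.1 in Cormen et al.) is:

```latex
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
```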
Commonly used notations
Tilde notation vs. big-Oh notation
Difference between Big-oh and Tilde
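The content of these comparison slides was lost in extraction. A small numeric illustration of the distinction, using f(n) = n²/2 + n as a hypothetical example: tilde notation keeps the leading constant (the ratio tends to 1), while big-oh only requires the ratio to stay bounded:

```python
def f(n):
    return n * n / 2 + n

n = 10**6
# Tilde keeps the leading constant: f(n) ~ n²/2, i.e. f(n)/(n²/2) → 1.
assert abs(f(n) / (n * n / 2) - 1) < 1e-5
# Big-oh discards it: f(n) = O(n²) only says the ratio stays bounded...
assert f(n) / (n * n) < 1
# ...here it tends to 0.5, not 1, so f(n) is O(n²) but NOT ~ n².
assert abs(f(n) / (n * n) - 0.5) < 1e-5
print("tilde vs big-oh checks passed")
```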
Unit 1

Amortized Analysis

• Amortized analysis is a method used in computer science to analyze the average performance of an algorithm over a sequence of operations.
• Instead of analyzing the worst-case time complexity of a single operation, which gives an upper bound on its running time, amortized analysis averages the cost over many operations performed over time.
• The key idea behind amortized analysis is to spread the cost of an occasional expensive operation over the many cheap operations around it.
• Amortized analysis is useful for designing efficient data structures such as dynamic arrays, priority queues, and disjoint-set data structures. It can guarantee that the average cost per operation is small (often constant), even when some individual operations are expensive.
Amortized Analysis : Multi-pop Stack
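The worked slides above lost their content in extraction. The standard multi-pop stack example they cover can be sketched as follows: PUSH and POP each cost 1, MULTIPOP(k) costs min(k, size); since each element is pushed at most once and popped at most once, any sequence of n operations costs O(n) in total, i.e. O(1) amortized per operation.

```python
class MultipopStack:
    """Stack with a MULTIPOP(k) operation, the classic amortized-analysis example."""

    def __init__(self):
        self._items = []

    def push(self, x):                 # actual cost: 1
        self._items.append(x)

    def pop(self):                     # actual cost: 1
        return self._items.pop()

    def multipop(self, k):             # actual cost: min(k, size)
        popped = []
        while self._items and k > 0:
            popped.append(self._items.pop())
            k -= 1
        return popped

s = MultipopStack()
for x in [1, 2, 3, 4, 5]:
    s.push(x)
print(s.multipop(3))   # → [5, 4, 3]
print(s.multipop(10))  # → [2, 1] (stops when the stack empties)
```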
Asymptotic Intuition Summary

• Tilde: f and g are nearly equal (their ratio tends to 1)
• Small-oh: f is much smaller than g
• Big-oh: f grows no faster than g (roughly f ≤ g, up to a constant factor)
• Theta: f and g grow at the same rate (equal up to constant factors)
Aggregate Method: Dynamic Tables

Ex: Dynamic Tables (Aggregate Analysis)
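The worked example's content was lost in extraction. A sketch of the aggregate argument for a doubling table: n appends trigger resizes that copy 1 + 2 + 4 + … < 2n elements in total, so the total work is at most 3n and each append is O(1) amortized.

```python
class DynamicTable:
    """Doubling array: O(1) amortized append by aggregate analysis."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0   # total elements moved during all resizes

    def append(self, x):
        if self.size == self.capacity:
            self.copies += self.size   # copy every element into a table twice as big
            self.capacity *= 2
        self.size += 1

n = 1000
t = DynamicTable()
for i in range(n):
    t.append(i)
# Aggregate bound: copies total 1 + 2 + ... + 512 = 1023 < 2n,
# so total work (n writes + all copies) is at most 3n.
assert t.copies < 2 * n
assert n + t.copies <= 3 * n
print(t.copies)  # → 1023
```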
Ex: Binary Counter (Aggregate Analysis)
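The binary-counter slides' content was also lost. A sketch of the aggregate argument: over n increments, bit i flips ⌊n/2^i⌋ times, so the total number of bit flips is less than 2n, i.e. O(1) amortized per increment.

```python
def increment(bits):
    """Increment a little-endian binary counter; return the number of bit flips."""
    flips = i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                # flip each trailing 1 to 0
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1                # flip the first 0 to 1
        flips += 1
    return flips

k, n = 16, 1000
bits = [0] * k
total_flips = sum(increment(bits) for _ in range(n))
# Aggregate bound: bit i flips floor(n / 2^i) times, so the total is < 2n.
assert total_flips < 2 * n
print(total_flips)  # → 1994
```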
Accounting Method

Ex. Stack - Multipop

Ex. Dynamic Tables
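The accounting-method examples' content was lost in extraction. A sketch for the multi-pop stack: charge each PUSH 2 credits (1 pays for the push itself, 1 is banked on the element to pay for its eventual pop) and charge pops nothing; the bank then always equals the stack size, so it never goes negative and total actual cost ≤ total charge = 2 × (number of pushes).

```python
import random

random.seed(0)                   # deterministic random op sequence for this sketch
stack, bank = [], 0
for _ in range(1000):
    if random.random() < 0.6:
        stack.append(0)
        bank += 2 - 1            # charge 2, pay the actual cost of 1, bank the rest
    else:
        k = random.randint(0, 3)
        take = min(k, len(stack))
        for _ in range(take):
            stack.pop()
        bank -= take             # each pop is paid for by a stored credit
    # Invariant: credits in the bank == elements on the stack, never negative.
    assert bank == len(stack) and bank >= 0

print("accounting invariant held for 1000 operations")
```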
Potential Method

Understand Potential

Bound on amortized cost

Ex. Dynamic Tables

Ex. Multipop
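The potential-method slides' content was lost in extraction. A sketch for the multi-pop stack with potential Φ = stack size: amortized cost = actual cost + ΔΦ, giving PUSH an amortized cost of 1 + 1 = 2 and MULTIPOP an amortized cost of k' − k' = 0 (where k' elements are actually popped). Since Φ ≥ 0 = Φ(initial state), total actual cost ≤ total amortized cost.

```python
stack = []

def push(x):
    phi_before = len(stack)
    stack.append(x)                            # actual cost: 1
    return 1 + (len(stack) - phi_before)       # amortized = actual + ΔΦ = 2

def multipop(k):
    phi_before = len(stack)
    actual = 0
    while stack and k > 0:
        stack.pop()
        actual += 1
        k -= 1
    return actual + (len(stack) - phi_before)  # ΔΦ = −actual, so amortized = 0

assert push(1) == 2 and push(2) == 2 and push(3) == 2
assert multipop(2) == 0    # actual cost 2 is paid for by the potential drop
assert multipop(99) == 0   # even draining the stack has amortized cost 0
print("potential-method amortized costs verified")
```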
