Asymptotic Notation, Functions, and Running Times
When input sizes are large enough that only the order of growth of the running time
is relevant, we are studying the asymptotic efficiency of algorithms. That is, we
are concerned with how the running time of an algorithm increases with the size of
the input in the limit, as the size of the input increases without bound.
Usually, an algorithm that is asymptotically more efficient will be the best choice for
all but very small inputs.
Asymptotic notation, functions, and running times
We will use asymptotic notation primarily to describe the running times of algorithms.
Asymptotic notation actually applies to functions; the functions to which we apply it
will usually characterize the running times of algorithms. But asymptotic
notation can apply to functions that characterize some other aspect of algorithms
(the amount of space they use, for example), or even to functions that have nothing
whatsoever to do with algorithms.
Even when we use asymptotic notation to describe the running time of an algorithm,
we need to understand which running time we mean. Sometimes we are interested
in the worst-case running time. Often, however, we wish to characterize the running
time no matter what the input.
In other words, we often wish to make a blanket statement that covers all inputs, not
just the worst case. We shall see asymptotic notations that are well suited to
characterizing running times no matter what the input.
Θ-notation
Θ-notation is the formal way to express both a lower bound and an upper bound on an
algorithm’s running time. For a given function g(n), we denote by Θ(g(n))
the set of functions
Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that
0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}.
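As a quick worked example (the function is chosen purely for illustration), take
f(n) = n^2/2 - 3n. To show f(n) ∈ Θ(n^2) we need constants c1, c2, and n0 with
c1n^2 ≤ n^2/2 - 3n ≤ c2n^2 for all n ≥ n0. Dividing through by n^2 gives
c1 ≤ 1/2 - 3/n ≤ c2, which is satisfied by, for example, c1 = 1/14, c2 = 1/2, and n0 = 7.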
Big O-notation
The Θ-notation asymptotically bounds a function from above and below. When we
have only an asymptotic upper bound, we use O notation. For a given function
g(n) we denote by O(g(n)) (pronounced “big-oh of g of n” or sometimes just “oh of g
of n”) the set of functions
O(g(n)) = {f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.
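As a brief example (again purely illustrative), 2n + 10 ∈ O(n), since 2n + 10 ≤ 3n
whenever n ≥ 10, so c = 3 and n0 = 10 suffice. Note that O-notation gives only an upper
bound; for instance, n ∈ O(n^2) is also a true (if loose) statement.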
Common orders of growth:
constant      O(1)
logarithmic   O(log n)      (log n, log n^2 ∈ O(log n))
linear        O(n)
log-linear    O(n log n)
quadratic     O(n^2)
cubic         O(n^3)
polynomial    O(n^k)        (k is a constant)
exponential   O(c^n)        (c is a constant > 1)
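To make these growth rates concrete, here is a small Python sketch (the function names
and inputs are invented for this illustration, not taken from any particular source) of
routines whose worst-case running times fall into a few of the classes above. A
comparison sort such as merge sort would be a standard O(n log n) example.

```
# Illustrative routines (names invented for this sketch), one per growth class.

def get_first(items):
    # O(1): constant work, independent of len(items) (assumes items is non-empty)
    return items[0]

def contains(items, target):
    # O(n): scans every element once in the worst case
    for x in items:
        if x == target:
            return True
    return False

def binary_search(sorted_items, target):
    # O(log n): halves the search range on each iteration (assumes sorted input)
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def has_duplicate(items):
    # O(n^2): compares every pair of elements in the worst case
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```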
Greedy Algorithms
Algorithms for optimization problems typically go through a sequence of steps, with a
set of choices at each step. For many optimization problems, using dynamic
programming to determine the best choices is overkill; simpler, more efficient
algorithms will do.
A greedy algorithm always makes the choice that looks best at the moment. That
is, it makes a locally optimal choice in the hope that this choice will lead to a globally
optimal solution. Greedy algorithms do not always yield optimal solutions, but for
many problems they do. The greedy method is quite powerful and works well for a
wide range of problems.
The greedy method is a general algorithm design paradigm, built on the following
elements.
Configurations: different choices, collections, and values to find.
Objective function: a score assigned to configurations, which we want to either
maximize or minimize (a short sketch of the paradigm follows this list).
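As a concrete instance of this paradigm, here is a minimal Python sketch of greedy
activity selection (one of the scheduling problems listed below). The representation of
activities as (start, finish) pairs and the function name are our own choices for this
illustration; the locally optimal choice is to always take the compatible activity that
finishes earliest.

```
# Greedy activity selection: repeatedly pick the compatible activity that
# finishes earliest, then discard everything that overlaps it.
# Activities are (start, finish) pairs; the function name is our own.

def select_activities(activities):
    selected = []
    last_finish = float("-inf")
    # Sorting by finish time lets the locally optimal choice be made in one pass.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # does not overlap anything already chosen
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example: select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9)])
# returns [(1, 4), (5, 7)].
```

Sorting by finish time dominates the cost, so the sketch runs in O(n log n) time, and the
earliest-finish-time choice is known to yield a maximum-size set of non-overlapping
activities.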
Applications of the Greedy Strategy
Optimal solutions:
1. Change made for “normal” coin denominations (see the sketch after this list)
2. Minimum spanning tree (MST)
3. Single-source shortest paths
4. Simple scheduling problems (activity selection)
5. Huffman codes
Approximations/heuristics:
1. Travelling Salesman Problem
2. Knapsack Problem
3. Other Combinatorial Optimization problems
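For the first of the optimal applications above, here is a minimal sketch of greedy
change-making, assuming “normal” canonical denominations (e.g. 1, 5, 10, 25), for which
the largest-coin-first choice is optimal; for arbitrary denominations the same greedy
choice can produce a suboptimal answer. The function name and default denominations are
our own choices for this illustration.

```
# Greedy change-making: repeatedly take the largest coin that still fits.
# The default denominations (25, 10, 5, 1) form a canonical ("normal") system
# for which this greedy choice is optimal; for arbitrary denominations it
# can return more coins than necessary.

def make_change(amount, denominations=(25, 10, 5, 1)):
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

# Example: make_change(63) returns [25, 25, 10, 1, 1, 1].
```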