Lec 5
1 Complexity Theory
1.1 Resource Consumption
Complexity theory, or more precisely, computational complexity theory, deals
with the resources required during some computation to solve a given problem.
The process of computing involves the consumption of different resources
like time taken to perform the computation, amount of memory used, power
consumed by the system performing the computation, and so on. A theory of
resource consumption looks at how much of these resources is needed to solve a
particular computational problem. In order to develop such a theory, we first
have to decide which resources to consider.
In this class the resources we consider are time and space (in terms of
memory). What does it mean to measure ‘time’ and ‘space’? One possibility is
to run algorithm A on machine I and measure the time t, in seconds, for which
the program executes, and the space k, in kilobytes, of memory it uses.
However, this method has the drawback of being machine-dependent: the same
algorithm run on two different machines may give us different results for
time and space consumption.
To overcome this limitation, we should use an abstract model of a computer.
The model we use is the Turing machine. Time is measured in terms of the
number of steps taken by the Turing machine, and space is measured in terms
of the number of tape cells used. In reality, it is tedious to construct the equivalent
Turing machine for a given algorithm. Instead, we perform a rough analysis
for the algorithm itself, under the assumption that calculations involving fixed
input sizes take constant time and space. Our goal is to have an analysis that
is technology independent.
The performance of an algorithm needs to be measured over all inputs. We
aggregate all instances of a given size together and analyze performance for such
instances. For an input of size n, let t_n be the maximum time consumed and
s_n be the maximum space consumed. Typically, we want to find functions t and
s such that t(n) = t_n and s(n) = s_n. This is worst-case analysis. We can also
do average-case analysis, but that is typically harder and less meaningful.
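The worst-case quantity t(n) can be made concrete with a small sketch. The example below (illustrative, not from the notes) counts the comparisons performed by linear search and takes the maximum over a representative family of size-n instances; the function names are assumptions for illustration.

```python
# Hypothetical illustration of worst-case analysis: t(n) for linear search
# is the maximum number of comparisons over inputs of size n.

def linear_search_steps(items, target):
    """Return the number of comparisons linear search performs."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def worst_case_steps(n):
    """t(n): maximum comparisons over representative instances of size n."""
    instances = [(list(range(n)), k) for k in range(n)]  # target present
    instances.append((list(range(n)), -1))               # target absent
    return max(linear_search_steps(items, t) for items, t in instances)

# The absent-target instance is worst, so t(n) = n:
print([worst_case_steps(n) for n in [1, 2, 4, 8]])  # [1, 2, 4, 8]
```

Note that the maximum is taken over instances, not over runs: a single fast run on a lucky input says nothing about t(n).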
One might wonder why we don’t simply use benchmarks. The two major
problems with using benchmarking as the basis of our performance analysis are:
• Benchmarks do not convey any information about how the algorithm scales
beyond the benchmark sizes.
• Algorithms may be optimized specifically for the set of benchmarks, so we
will not know whether our performance analysis holds outside that set.
This is called design for benchmarking.
By focusing on all instances of a given length, and measuring scalability, we get
an analysis that is instance independent.
1.2 Asymptotic Notations
As mentioned in the earlier section, we are interested in finding the behavior
of a program without considering the architectural details of a machine. We
normally use asymptotic notations such as O, Ω, and Θ to compare different
algorithms.
Table 1: Complexity Classes and Scalability
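To get a feel for how common complexity classes scale, here is a small sketch (illustrative; the growth functions are standard, not taken from the table) tabulating representative growth rates at a few input sizes:

```python
import math

# Tabulate common growth rates at a few input sizes to illustrate
# scalability: each row is a growth function, each column an input size.
growth = {
    "log n":   lambda n: math.log2(n),
    "n":       lambda n: n,
    "n log n": lambda n: n * math.log2(n),
    "n^2":     lambda n: n ** 2,
    "2^n":     lambda n: 2 ** n,
}

for name, f in growth.items():
    print(f"{name:8s}", [round(f(n)) for n in (8, 16, 32)])
# The exponential row dwarfs the others already at n = 32.
```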
Suppose a machine that runs in logarithmic space uses c log n memory cells,
each of which can hold one of d symbols (for instance, the binary digits 0
and 1). Then the number of possible configurations of the memory is
d^{c log n} = n^{c log d}, which is polynomial in n. A terminating computation
can never repeat a configuration, since doing so would put the machine into an
infinite loop; hence the machine halts within polynomially many steps. Thus we
have
LOGSPACE ⊆ PTIME
and similarly,
PSPACE ⊆ EXPTIME
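The counting identity d^{c log n} = n^{c log d} behind the first containment can be checked numerically; a minimal sketch (logarithms base 2, with d symbols per cell and c log n cells):

```python
import math

# Verify the identity d^(c*log n) = n^(c*log d), which bounds the number
# of memory configurations of a logspace machine by a polynomial in n.

def configs_lhs(d, c, n):
    return d ** (c * math.log2(n))

def configs_rhs(d, c, n):
    return n ** (c * math.log2(d))

for d, c, n in [(2, 3, 16), (4, 2, 64), (3, 1, 10)]:
    assert math.isclose(configs_lhs(d, c, n), configs_rhs(d, c, n))

print(configs_lhs(2, 3, 16))  # 4096.0 = 16^3, polynomial in n
```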
Putting together all the relationships between the classes, we get

LOGSPACE ⊆ PTIME ⊆ PSPACE ⊆ EXPTIME
An immediate question that arises is whether the containments above are strict.
It was discovered in the 1960s, via the time hierarchy theorem, that an
exponential gap in time yields strictly more computational power. So,
PTIME ⊊ EXPTIME
So we know that the long chain of containments has gaps. The question is where
exactly the gaps lie. This has been the central question of complexity theory
for the last 40 years and it is still unresolved.
2 Truth Evaluation
Truth evaluation is concerned with the following question:
Given ϕ ∈ Form and τ ∈ 2^{AP(ϕ)}, does τ |= ϕ?
The question can be answered by recursion on the structure of ϕ:

ϕ(τ) =
  case
    ϕ is p ∈ Prop : ϕ(τ) = τ(p)
    ϕ is (¬θ)     : ϕ(τ) = ¬(θ(τ))
    ϕ is (θ ◦ ψ)  : ϕ(τ) = ◦(θ(τ), ψ(τ))
  esac
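The case analysis above translates directly into code. In the sketch below, the tuple encoding of formulas is an assumption for illustration (the notes do not fix a representation): a string is an atomic proposition, ("not", θ) is negation, and ("and"|"or"|"implies", θ, ψ) is a binary connective ◦; τ is a dict from proposition names to booleans.

```python
# Recursive truth evaluation, mirroring the case/esac algorithm.
# Formula encoding (assumed, not from the notes):
#   "p"                      - atomic proposition p in Prop
#   ("not", theta)           - negation
#   (op, theta, psi)         - binary connective, op in BINOPS

BINOPS = {
    "and":     lambda a, b: a and b,
    "or":      lambda a, b: a or b,
    "implies": lambda a, b: (not a) or b,
}

def evaluate(phi, tau):
    if isinstance(phi, str):                 # phi is p in Prop
        return tau[phi]
    if phi[0] == "not":                      # phi is (not theta)
        return not evaluate(phi[1], tau)
    op, theta, psi = phi                     # phi is (theta o psi)
    return BINOPS[op](evaluate(theta, tau), evaluate(psi, tau))

# (p and (not q)) under tau(p) = True, tau(q) = False:
print(evaluate(("and", "p", ("not", "q")), {"p": True, "q": False}))  # True
```

The recursion visits each subformula once, so evaluation takes time linear in the size of ϕ.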
3 Logical Extremes
In the set of all formulas there are two extreme types of formulas: those that
are always true, and those that are always false. More formally, if ϕ ∈ Form
then ϕ can be put in one of three categories:
• Valid: ϕ is satisfied by every truth assignment.
• Satisfiable: ϕ is satisfied by at least one truth assignment.
• Contradictions: ϕ is satisfied by no truth assignment.

As subsets of Form, the valid formulas are contained in the satisfiable ones,
and the contradictions are exactly the formulas that are not satisfiable.
Lemma 1
1. ψ is valid ⇐⇒ (¬ψ) is not satisfiable.
2. ψ is not valid ⇐⇒ (¬ψ) is satisfiable.
3. ψ is satisfiable ⇐⇒ (¬ψ) is not valid.
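All three parts of the lemma can be confirmed by brute force: enumerate every truth assignment over the atomic propositions of ψ and compare. A self-contained sketch (the tuple encoding of formulas is an assumption for illustration, not fixed by the notes):

```python
from itertools import product

# Brute-force satisfiability and validity over all assignments, then
# check Lemma 1 on a few sample formulas.
# Encoding (assumed): "p" atomic; ("not", t); ("and"|"or", t, s).

def atoms(phi):
    if isinstance(phi, str):
        return {phi}
    return set().union(*(atoms(sub) for sub in phi[1:]))

def evaluate(phi, tau):
    if isinstance(phi, str):
        return tau[phi]
    if phi[0] == "not":
        return not evaluate(phi[1], tau)
    op, a, b = phi
    va, vb = evaluate(a, tau), evaluate(b, tau)
    return (va and vb) if op == "and" else (va or vb)

def assignments(phi):
    ps = sorted(atoms(phi))
    for bits in product([False, True], repeat=len(ps)):
        yield dict(zip(ps, bits))

def satisfiable(phi):
    return any(evaluate(phi, t) for t in assignments(phi))

def valid(phi):
    return all(evaluate(phi, t) for t in assignments(phi))

samples = [("or", "p", ("not", "p")),   # valid
           ("and", "p", ("not", "p")),  # contradiction
           ("and", "p", "q")]           # satisfiable, not valid
for psi in samples:
    assert valid(psi) == (not satisfiable(("not", psi)))   # part 1
    assert (not valid(psi)) == satisfiable(("not", psi))   # part 2
    assert satisfiable(psi) == (not valid(("not", psi)))   # part 3
```

Of course, this check runs through 2^|AP(ψ)| assignments, which is exponential in the number of propositions; whether this can be done substantially faster is exactly the satisfiability question.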
These can be proved directly from the definitions. We can now begin
classifying formulas according to the above definitions. Given ϕ, we would like
to know
• is ϕ satisfiable?
• is ϕ valid?
These questions are related, as seen in the lemma above, and are fundamental
to logic.